github.com/jacobmarshall/pokevision-cli | go | Go | README
---
**PokéVision has shutdown (for now). Follow [@PokeVisionGo](https://twitter.com/PokeVisionGo) on Twitter for updates.**
### (Unofficial) PokéVision CLI
This tool allows you to talk directly to the PokéVision API servers. Right now there is only one command - `pokevision watch`, which monitors the coordinates provided for new Pokémon, and logs their location & expiration time.
#### Features
* Periodically checks the PokéVision API for new Pokémon
* Push updates to a Slack channel ([howto](https://github.com/jacobmarshall/pokevision-cli/blob/004cdf4852a4/docs/slack-notifications.md))
* No dependencies, lightweight & simple
* Desktop notifications ([howto](https://github.com/jacobmarshall/pokevision-cli/blob/004cdf4852a4/docs/desktop-notifications.md))
* Multi-language Pokémon names ([howto](https://github.com/jacobmarshall/pokevision-cli/blob/004cdf4852a4/docs/languages.md))
* Filtering in/out specific Pokémon ([howto](https://github.com/jacobmarshall/pokevision-cli/blob/004cdf4852a4/docs/filters.md))
#### Getting Started
Check out the [docs](https://github.com/jacobmarshall/pokevision-cli/blob/004cdf4852a4/docs/readme.md) for [installation instructions](https://github.com/jacobmarshall/pokevision-cli/blob/004cdf4852a4/docs/installation.md) and [basic usage](https://github.com/jacobmarshall/pokevision-cli/blob/004cdf4852a4/docs/basic-usage.md).
#### Screenshots



#### Acknowledgments
* **[PokéVision](https://pokevision.com)** for providing the backend service for this CLI to communicate with. Without them, none of this would be possible.
#### License
```
The MIT License (MIT)
Copyright (c) 2016 <NAME> <https://manage.net.nz>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
lexpredict-lexnlp | readthedoc | Unknown | LexNLP Documentation
Release 2.3.0
ContraxSuite, LLC
Feb 25, 2023
CONTENTS

1.1 About LexNLP
    1.1.1 Purpose
    1.1.2 ContraxSuite Projects
    1.1.3 Citing for academic use
1.2 LexNLP package
    1.2.1 lexnlp.extract: Extracting structured data from unstructured text
        1.2.1.1 Pattern-based extraction methods
        1.2.1.2 NLP-based extraction methods
    1.2.2 lexnlp.nlp: Natural language processing
        1.2.2.1 Tokenization and related methods
        1.2.2.2 Segmentation and related methods for real-world text
        1.2.2.3 Transforming text into features
1.3 Changelog (releases 0.1.0, Sep 1, 2017 through 2.3.0, November 30, 2022)
1.4 License
    1.4.1 AGPL License
    1.4.2 License Release
CHAPTER ONE: TABLE OF CONTENTS

1.1 About LexNLP

1.1.1 Purpose

LexNLP is a library for working with real, unstructured legal text, including contracts, plans, policies, procedures, and other material. LexNLP provides functionality such as:
• Segmentation and tokenization, such as
– A sentence parser that is aware of common legal abbreviations like LLC. or F.3d.
– Pre-trained segmentation models for legal concepts such as pages or sections.
• Pre-trained word embedding and topic models, broadly and for specific practice areas
• Pre-trained classifiers for document type and clause type
• Broad range of fact extraction, such as:
– Monetary amounts, non-monetary amounts, percentages, ratios
– Conditional statements and constraints, like “less than” or “later than”
– Dates, recurring dates, and durations
– Courts, regulations, and citations
• Tools for building new clustering and classification methods
• Hundreds of unit tests from real legal documents

1.1.2 ContraxSuite Projects

LexNLP is often used as part of ContraxSuite, an open-source contract analytics and document exploration platform built by LexPredict. LexNLP and ContraxSuite are related through the following project structure:
• ContraxSuite web application: https://github.com/LexPredict/lexpredict-contraxsuite
• LexNLP library for extraction: https://github.com/LexPredict/lexpredict-lexnlp
• ContraxSuite pre-trained models and “knowledge sets”: https://github.com/LexPredict/lexpredict-legal-dictionary
• ContraxSuite agreement samples: https://github.com/LexPredict/lexpredict-contraxsuite-samples
• ContraxSuite deployment automation: https://github.com/LexPredict/lexpredict-contraxsuite-deploy
1.1.3 Citing for academic use

We are currently drafting and submitting a technical whitepaper describing the LexNLP library and documenting its performance on a large corpus of gold-standard contracts. Please contact us at <EMAIL> for more information on citing LexNLP in academic work.
1.2 LexNLP package

1.2.1 lexnlp.extract: Extracting structured data from unstructured text

The lexnlp.extract module contains methods for extracting structured data from unstructured textual sources. Supported data types include a wide range of facts relevant to contract or document analysis, including dates, amounts, proper noun types, and conditional statements.
This module is structured along ISO 2-character language codes. Currently, the following languages are stable:
• English: lexnlp.extract.en
• German: lexnlp.extract.de
• Spanish: lexnlp.extract.es

Extraction methods follow a simple get_X pattern, as demonstrated below:
```
>>> import lexnlp.extract.en.amounts
>>> text = "There are ten cows in the 2 acre pasture."
>>> print(list(lexnlp.extract.en.amounts.get_amounts(text)))
[10, 2.0]
```
1.2.1.1 Pattern-based extraction methods

The full list of supported pattern-based structured data types is below:

• “EN” locale:
– acts, e.g., “section 1 of the Advancing Hope Act, 1986”
– amounts, e.g., “ten pounds” or “5.8 megawatts”
– citations, e.g., “10 U.S. 100” or “1998 S. Ct. 1”
– companies, e.g., “Lexpredict LLC”
– conditions, e.g., “subject to . . . ” or “unless and until . . . ”
– constraints, e.g., “no more than”
– copyright, e.g., “(C) Copyright 2000 Acme”
– courts, e.g., “Supreme Court of New York”
– CUSIP, e.g., “392690QT3”
– dates, e.g., “June 1, 2017” or “2018-01-01”
– definitions, e.g., “Term shall mean . . . ”
– distances, e.g., “fifteen miles”
– durations, e.g., “ten years” or “thirty days”
– geographic and geopolitical entities, e.g., “New York” or “Norway”
– money and currency usages, e.g., “$5” or “10 Euro”
– percents and rates, e.g., “10%” or “50 bps”
– PII, e.g., “212-212-2121” or “999-999-9999”
– ratios, e.g., “3:1” or “four to three”
– regulations, e.g., “32 CFR 170”
– trademarks, e.g., “MyApp (TM)”
– URLs, e.g., “http://acme.com/”
• “DE” locale:
– amounts, e.g., “1 tausend” or “eine halbe Million Dollar”
– citations, e.g., “BGBl. I S. 434”
– copyrights, e.g., “siemens.com globale Website Siemens © 1996 – 2019”
– court citations, e.g., “BStBl I 2003, 240”
– courts, e.g., “Amtsgerichte”
– dates, e.g., “vom 29. März 2017”
– definitions
– durations, e.g., “14. Lebensjahr” or “fünfundzwanzig Jahren”
– geographic and geopolitical entities, e.g., “Albanien”
– percents, e.g., “15 Volumenprozent”
• “ES” locale:
– copyrights, e.g., “Website BBC Mundo © 1996 – 2019”
– courts, e.g., “Tribunal Superior de Justicia de Madrid”
– dates, e.g., “15 de febrero” or “1º de enero de 1999”
– definitions, e.g., “‘El ser humano’: una anatomía moderna humana”
– regulations, e.g., “Comisión Nacional Bancaria y de Valores”
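Conceptually, each pattern-based extractor pairs curated regular expressions with normalization logic. A minimal, self-contained sketch of the idea (illustrative only: `get_percents` here is a toy regex, not LexNLP's far richer implementation):

```python
import re

# Toy pattern in the spirit of LexNLP's pattern-based extractors; the real
# library uses much richer grammars, word-number parsing, and normalization.
PERCENT_RE = re.compile(r"(\d+(?:\.\d+)?)\s*(%|percent|bps)", re.IGNORECASE)

def get_percents(text):
    for match in PERCENT_RE.finditer(text):
        value, unit = float(match.group(1)), match.group(2).lower()
        # normalize basis points to percent (1 bps = 0.01%)
        yield value / 100 if unit == "bps" else value

print(list(get_percents("a rate of 10% plus 50 bps")))  # [10.0, 0.5]
```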
1.2.1.2 NLP-based extraction methods

In addition to pattern-based structured data types, the lexnlp.extract module also supports NLP methods based on tagged part-of-speech classifiers. These classifiers are built on NLTK and, optionally, the Stanford NLP libraries. The list of these modules is below:
• named entity extraction with NLTK maximum entropy classifier
• named entity extraction with NLTK and regular expressions
• named entity extraction with Stanford Named Entity Recognition (NER) models

These modules allow extraction of data types such as:
• addresses, e.g., “1999 Mount Read Blvd, Rochester, NY, USA, 14615”
• companies, e.g., “Lexpredict LLC”
• persons, e.g., “<NAME>”
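By contrast with fixed patterns, these extractors rely on trained models. As a toy stand-in only (the real modules use trained NLTK or Stanford NER models, not a regex), company extraction could be approximated by matching runs of capitalized tokens ending in a corporate suffix:

```python
import re

# Toy stand-in for classifier-based company extraction; LexNLP's real
# modules use trained NLTK / Stanford NER models, not this regex.
COMPANY_RE = re.compile(r"(?:[A-Z][A-Za-z]+\s)+(?:LLC|Inc\.?|Corp\.?|Ltd\.?)")

def get_companies(text):
    return [m.group(0) for m in COMPANY_RE.finditer(text)]

print(get_companies("An agreement between Lexpredict LLC and Acme Corp."))
# ['Lexpredict LLC', 'Acme Corp.']
```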
1.2.2 lexnlp.nlp: Natural language processing

The lexnlp.nlp module contains methods that assist in natural language processing (NLP) tasks, especially in the context of developing unsupervised, semi-supervised, or supervised machine learning. Methods range from tokenizing, stemming, and lemmatizing to the creation of custom sentence segmentation or word embedding models.
This module is structured along ISO 2-character language codes. Currently, the following languages are stable:
• English: lexnlp.nlp.en

Extraction methods follow the same simple get_X pattern, as demonstrated below:
```
>>> import lexnlp.nlp.en.tokens
>>> text = "There are ten cows in the 2 acre pasture."
>>> print(list(lexnlp.nlp.en.tokens.get_nouns(text)))
['cows', 'pasture']
```
The methods in this package are primarily built on the Natural Language Toolkit (NLTK), but some functionality from the Stanford NLP, gensim, and spaCy packages is available to users depending on their use case.
Attention: The sections below are a work in progress. Thank you for your patience while we continue to expand and improve our documentation coverage. If you have any questions in the meantime, please feel free to log issues on GitHub or contact us by email:
• GitHub issues: https://github.com/LexPredict/lexpredict-lexnlp
• Email: <EMAIL>
1.2.2.1 Tokenization and related methods

• Extracting tokens, stems, lemmas, and parts of speech

1.2.2.2 Segmentation and related methods for real-world text
• Sentences
• Paragraphs
• Sections
• Pages
• Titles
• Utilities

1.2.2.3 Transforming text into features
• Character Transforms
• Token Transforms, including n-grams and skip-grams

1.3 Changelog

1.3.1 2.3.0 - November 30, 2022
• Updated Python version and upgraded all dependencies.
• Started using pipenv.

1.3.2 2.2.1.0 - August 10, 2022
• Improved LexNLP handling for companies for the “EN” locale.
• Unified API pattern for sentence and paragraph segmentation.
1.3.3 2.2.0 - July 7, 2022
• Improved LexNLP handling for dates, durations, and persons for all locales.
• Added parameterizable contract classifiers.
• Improved LexNLP handling for ML models.
• Updated python requirements and tests, retrained ML models to use gensim-4.
1.3.4 2.1.0 - September 16, 2021
• Improved LexNLP handling for companies for the “EN” locale.
• Improved LexNLP handling for dates for all locales and dates parser accuracy for the “DE” locale.
1.3.5 2.0.0 - May 10, 2021
• Tuned fact extraction from text for different locales.
• Updated regex patterns and tests for “DE” amount parser.
• Added support for delimiter inference in “EN” and “DE” amount parsers.
1.3.6 1.8.0 - December 2, 2020
• Improved LexNLP handling for definitions for the “EN” locale.
• Implemented rating OCR quality in texts.
• Migrated numeric data in parsers results to decimal format to avoid losing fraction digits.
1.3.7 1.7.0 - August 27, 2020
• Improved LexNLP handling for dates for the “EN” locale.
• Implemented lists of exceptions for entity extractors.
• Implemented strongly typed response for entity extractors.
• Updated third-party python requirements.
1.3.8 1.6.0 - May 27, 2020
• Updated psutil package version from 5.4.0 to 5.6.6.
1.3.9 1.4.0 - December 20, 2019
• Improved accuracy of locating and converting date phrases into typed format.
• Introduced new text vectorizing and classifying models.
• Implemented ML-based definitions locator.
1.3.10 1.3.0 - November 1, 2019
• Made massive improvements to EN definitions and companies parsers.
• Updated EN dates parser to catch more date formats.
• Made company parsing strongly typed.
1.3.11 0.2.7 - August 1, 2019
• Standardized LexNLP methods to return a generator of Annotation objects or a generator of dictionaries (tuples).
• Improved LexNLP handling for definitions for the “EN” locale.
• Improved LexNLP handling for companies for the “EN” locale.
• Improved sentence splitting logic.
• Improved LexNLP unit test coverage.
• Updated python requirements in python-requirements*.txt.
• Dropped support for python 3.4 and 3.5.
1.3.12 0.2.6 - Jun 12, 2019
• Improved LexNLP handling for dates for all locales.
• Improved LexNLP handling for currencies for “EN” locale.
• Updated documentation for ReadTheDocs.
• Improved LexNLP unit test coverage.
1.3.13 0.2.5 - Mar 1, 2019
• Improved LexNLP handling for courts for “DE” and “ES” locales.
• Improved LexNLP handling for dates for “ES” locale.
• Improved LexNLP handling for amounts, acts, regulations and definitions for “EN” locale.
• Added CUSIP parser for “EN” locale.
• Improved LexNLP unit test coverage.
1.3.14 0.2.4 - Feb 1, 2019
• Added universal courts parser, configured LexNLP handling for courts for “DE” locale.
• Added universal dates parser, configured LexNLP handling for dates for “DE” and “ES” locales.
• Added definitions, citations and dates parsers for “DE” locale.
• Added amounts, percents and durations parsers for “DE” locale.
• Added geo entities parser for “DE” locale.
• Added courts and definitions parsers for “ES” locale.
• Added acts parser for “EN” locale.
• Improved LexNLP unit test coverage.
1.3.15 0.2.3 - Jan 10, 2019
• Updated python requirements.
• Improved LexNLP handling for definitions and paragraphs.
• Improved LexNLP unit test coverage.
1.3.16 0.2.2 - Sep 30, 2018
• Improved LexNLP handling for different date formats.
• Improved LexNLP handling for titles.
• Improved LexNLP unit test coverage.
1.3.17 0.2.1 - Aug 24, 2018
• Updated python requirements.
• Improved LexNLP handling for amounts.
• Optimized processing of sentences and titles.
• Improved LexNLP unit test coverage.
1.3.18 0.2.0 - Aug 1, 2018
• Improved LexNLP handling for addresses and sentences.
• Improved LexNLP unit test coverage.
1.3.19 0.1.9 - Jul 1, 2018
• Improved handling of TOC during sentence processing.
• Added contracts locator to LexNLP.
• Improved LexNLP handling for citations, titles and definitions.
• Improved LexNLP unit test coverage.
1.3.20 0.1.8 - May 1, 2018
• Improved LexNLP handling for addresses and currencies.
• Improved LexNLP unit test coverage.
1.3.21 0.1.7 - Apr 1, 2018
• Improved LexNLP handling for companies, organizations and dates.
• Implemented generating train/test dataset for addresses.
• Excluded common false positives in the persons parser.
1.3.22 0.1.6 - Mar 1, 2018
• Improved LexNLP unit test coverage.
1.3.23 0.1.5 - Feb 1, 2018
• Improved LexNLP unit test coverage.
1.3.24 0.1.4 - Jan 1, 2018
• Improved LexNLP unit test coverage.
• Implemented method to get sentence ranges in addition to sentence texts.
1.3.25 0.1.3 - Dec 1, 2017
• Improved LexNLP unit test coverage.
1.3.26 0.1.2 - Nov 1, 2017
• Implemented LexNLP title locator.
• Implemented additional LexNLP transforms for skipgrams and n-grams.
• Improved LexNLP handling for parties with abbreviations and other cases.
• Improved LexNLP handling for amounts with mixed alpha and numeric characters.
• Improved LexNLP unit test coverage.
1.3.27 0.1.1 - Oct 1, 2017
• Improved unit test framework handling for languages and locales.
• Implemented method and input-level CPU and memory benchmarking for unit tests.
• Migrated all unit tests to 60 separate CSV files.
• Added over 1,000 new unit tests for most LexNLP methods.
• Reduced memory usage for paragraph and section segmenters.
• Improved handling of brackets and parentheses within noun phrases.
• Added URL locator to LexNLP.
• Added trademark locator to LexNLP.
• Added copyright locator to LexNLP.
• Improved default Punkt sentence boundary detection.
• Added custom sentence boundary training methods.
• Improved handling of multilingual text, especially around geopolitical entities.
• Improved default handling of party names with non-standard characters.
• Enhanced metadata related to party type in LexNLP.
• Improved continuous integration for public repositories.
1.3.28 0.1.0 - Sep 1, 2017
• Refactored and integrated core extraction into a separate LexNLP package.
• Released nearly 200 unit tests with over 500 real-world test cases in LexNLP.
• Improved definition, date, and financial amount locators for corner cases.
• Integrated PII locator for phone numbers, SSNs, and names from LexNLP.
• Integrated ratio locator from LexNLP.
• Integrated percent locator from LexNLP.
• Integrated regulatory locator from LexNLP.
• Integrated distance locator from LexNLP.
• Integrated case citation locator from LexNLP.
• Improved geopolitical locator to allow non-master-data entity location.
• Improved party locator to allow configuration and better handle corner cases.

1.4 License

1.4.1 AGPL License

LexNLP is available by default under the terms of the GNU Affero General Public License v3.0: https://github.com/LexPredict/lexpredict-lexnlp/blob/2.3.0/LICENSE

1.4.2 License Release

If you would like to request a release from the terms of the default AGPLv3 license, please contact us at: ContraxSuite Licensing <<EMAIL>>.
pymonetdb | readthedoc | Python | pymonetdb 1.7.1 documentation
---
The MonetDB Python API
===
pymonetdb is the native Python client API for MonetDB. It is cross-platform and does not depend on any MonetDB libraries. It supports Python 3.6+ and PyPy, and is Python [DBAPI 2.0](https://peps.python.org/pep-0249/) compatible.
Besides the functionality required by DBAPI 2.0, pymonetdb also provides some MonetDB-specific functionality, in particular file transfers. These are detailed in the API section.
Contents
---
### Getting Started

#### Installation
pymonetdb is available on PyPI and can be installed with the following command:
```
$ pip install pymonetdb
```
It can also be installed from its source directory by running:
```
$ python setup.py install
```
#### Connecting
In its simplest form, the function [`pymonetdb.connect()`](index.html#pymonetdb.connect) takes a single parameter, the database name:
```
conn = pymonetdb.connect('demo')
```
Usually, you have to pass more:
```
conn = pymonetdb.connect(
'demo',
hostname='dbhost', port=50001,
username='yours', password='truly')
```
There are also some options you can set, for example `autocommit=True`.
It is also possible to combine everything in a URL:
```
url = 'mapi:monetdb://yours:truly@dbhost:50001/demo?autocommit=true'
conn = pymonetdb.connect(url)
```
For more details see the documentation of [`pymonetdb.connect()`](index.html#pymonetdb.connect).
### Examples

Here are some examples of how to use pymonetdb.

#### Example session
```
> # import the SQL module
> import pymonetdb
>
> # set up a connection. arguments below are the defaults
> connection = pymonetdb.connect(username="monetdb", password="monetdb",
> hostname="localhost", database="demo")
>
> # create a cursor
> cursor = connection.cursor()
>
> # increase the rows fetched to increase performance (optional)
> cursor.arraysize = 100
>
> # execute a query (return the number of rows to fetch)
> cursor.execute('SELECT * FROM tables')
26
>
> # fetch only one row
> cursor.fetchone()
[1062, 'schemas', 1061, None, 0, True, 0, 0]
>
> # fetch the remaining rows
> cursor.fetchall()
[[1067, 'types', 1061, None, 0, True, 0, 0],
[1076, 'functions', 1061, None, 0, True, 0, 0],
[1085, 'args', 1061, None, 0, True, 0, 0],
[1093, 'sequences', 1061, None, 0, True, 0, 0],
[1103, 'dependencies', 1061, None, 0, True, 0, 0],
[1107, 'connections', 1061, None, 0, True, 0, 0],
[1116, '_tables', 1061, None, 0, True, 0, 0],
...
[4141, 'user_role', 1061, None, 0, True, 0, 0],
[4144, 'auths', 1061, None, 0, True, 0, 0],
[4148, 'privileges', 1061, None, 0, True, 0, 0]]
>
> # Show the table meta data
> cursor.description
[('id', 'int', 4, 4, None, None, None),
('name', 'varchar', 12, 12, None, None, None),
('schema_id', 'int', 4, 4, None, None, None),
('query', 'varchar', 168, 168, None, None, None),
('type', 'smallint', 1, 1, None, None, None),
('system', 'boolean', 5, 5, None, None, None),
('commit_action', 'smallint', 1, 1, None, None, None),
('temporary', 'tinyint', 1, 1, None, None, None)]
```
#### MAPI Connection

If you would like to communicate with the database at a lower level, you can use the MAPI library directly (though this is not recommended):
```
> from pymonetdb import mapi
> server = mapi.Connection()
> server.connect(hostname="localhost", port=50000, username="monetdb",
password="monetdb", database="demo", language="sql")
> server.cmd("sSELECT * FROM tables;")
...
```
#### CSV Upload
This is an example script that uploads some CSV data from the local file system:
```
#!/usr/bin/env python3

import os

import pymonetdb

# Create the data directory and the CSV file
try:
    os.mkdir("datadir")
except FileExistsError:
    pass
with open("datadir/data.csv", "w") as f:
    for i in range(10):
        print(f"{i},item{i + 1}", file=f)

# Connect to MonetDB and register the upload handler
conn = pymonetdb.connect('demo')
handler = pymonetdb.SafeDirectoryHandler("datadir")
conn.set_uploader(handler)
cursor = conn.cursor()

# Set up the table
cursor.execute("DROP TABLE foo")
cursor.execute("CREATE TABLE foo(i INT, t TEXT)")

# Upload the data; this will ask the handler to upload data.csv
cursor.execute("COPY INTO foo FROM 'data.csv' ON CLIENT USING DELIMITERS ','")

# Check that it has loaded
cursor.execute("SELECT t FROM foo WHERE i = 9")
row = cursor.fetchone()
assert row[0] == 'item10'

# Goodbye
conn.commit()
cursor.close()
conn.close()
```
### File Transfers
MonetDB supports the non-standard `COPY INTO` statement to load a CSV-like text file into a table or to dump a table into a text file. This statement has an optional modifier `ON CLIENT` to indicate that the server should not try to open the file on the server side but instead ask the client to open it on its behalf.
For example:
```
COPY INTO mytable FROM 'data.csv' ON CLIENT USING DELIMITERS ',', E'\n', '"';
```
However, by default, if pymonetdb receives a file request from the server, it will refuse it for security reasons: you do not want an unauthorised party pretending to be the server to be able to request arbitrary files on your system, or even overwrite them.
To enable file transfers, create a pymonetdb.Uploader or pymonetdb.Downloader and register them with your connection:
```
transfer_handler = pymonetdb.SafeDirectoryHandler(datadir)
conn.set_uploader(transfer_handler)
conn.set_downloader(transfer_handler)
```
With this in place, the `COPY INTO ... ON CLIENT` statement above will cause pymonetdb to open the file data.csv in the given datadir and upload its contents. As its name suggests, `SafeDirectoryHandler` will only allow access to the files in that directory.
Note that in this example, we register the same handler object as an uploader and a downloader for demonstration purposes. In the real world, it is good security practice to register only an uploader or only a downloader, whichever you actually need. It is also possible to use two separate handlers.
See the API documentation for details.
#### Make up data as you go

You can also write your own transfer handlers. Instead of opening a file, such a handler can make up the data on the fly: retrieve it from a remote microservice, prompt the user interactively, or do whatever else you come up with:
```
class MyUploader(pymonetdb.Uploader):
    def handle_upload(self, upload, filename, text_mode, skip_amount):
        tw = upload.text_writer()
        for i in range(skip_amount, 1000):
            print(f'{i},number{i}', file=tw)
```
In this example, we call upload.text_writer() to yield a text-mode file-like object. There is also an upload.binary_writer(), which creates a binary-mode file-like object. The binary_writer() works even if the server requests a text mode object, but in that case, you have to make sure the bytes you write are valid UTF-8 and delimited with Unix line endings rather than Windows line endings.
If you want to refuse an upload or download, call upload.send_error() to send an error message *before* any call to text_writer() or binary_writer().
For custom downloaders, the situation is similar, except that instead of text_writer and binary_writer, the download parameter offers download.text_reader() and download.binary_reader().
#### Skip amount

MonetDB's `COPY INTO` statement allows you to skip, for example, the first line of a file using the modifier `OFFSET 2`. In such a case, the skip_amount parameter to handle_upload() will be greater than zero.
Note that the offset in the SQL statement is 1-based, whereas the skip_amount parameter has already been converted to 0-based. The example above thus allows us to write `for i in range(skip_amount, 1000):` rather than `for i in range(1000):`.
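In other words, the conversion is a simple off-by-one adjustment. A tiny sketch (`offset_to_skip_amount` is a hypothetical helper, not part of pymonetdb's API):

```python
def offset_to_skip_amount(offset: int) -> int:
    # SQL OFFSET is 1-based: OFFSET 2 means "start at line 2",
    # i.e. skip one line, so skip_amount == 1.
    return max(offset - 1, 0)

print(offset_to_skip_amount(2))  # 1
```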
#### Cancellation

With a query such as the following, the server does not need to receive all the data in the input file:
> COPY 100 RECORDS INTO mytable FROM 'data.csv' ON CLIENT
Therefore, pymonetdb regularly asks the server whether it is still interested in receiving more data. This way, the server can cancel the upload once it has received sufficient data to process the query. By default, pymonetdb asks after every MiB of data, but you can change this frequency using upload.set_chunk_size().
If the server answers that it is no longer interested, pymonetdb will discard any further data written to the writer. It is recommended to call upload.is_cancelled() occasionally to check for this and exit early if the upload has been cancelled.
Upload handlers also have an optional method cancel() that you can override; it is called when pymonetdb receives the cancellation request.
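The recommended pattern can be demonstrated with a small stub standing in for the real upload object (`FakeUpload` is illustrative only; a real handler receives the genuine object from pymonetdb and never constructs one itself):

```python
import io

class FakeUpload:
    """Stand-in for pymonetdb's upload object, for demonstration only."""
    def __init__(self, cancel_after):
        self.cancel_after = cancel_after  # lines after which the "server" cancels
        self.writer = io.StringIO()

    def text_writer(self):
        return self.writer

    def is_cancelled(self):
        return self.writer.getvalue().count("\n") >= self.cancel_after

def handle_upload(upload, filename, text_mode, skip_amount):
    tw = upload.text_writer()
    for i in range(skip_amount, 1000):
        # check occasionally whether the server has cancelled the upload
        if upload.is_cancelled():
            return
        print(f"{i},number{i}", file=tw)

upload = FakeUpload(cancel_after=10)
handle_upload(upload, "data.csv", True, 0)
print(upload.writer.getvalue().count("\n"))  # 10
```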
#### Copying data from or to a file-like object
If you are moving large amounts of data between pymonetdb and a file-like object such as a file, Python’s [copyfileobj](https://docs.python.org/3/library/shutil.html#shutil.copyfileobj) function may come in handy:
```
import pathlib
import shutil

import pymonetdb

class MyUploader(pymonetdb.Uploader):
    def __init__(self, dir):
        self.dir = pathlib.Path(dir)

    def handle_upload(self, upload, filename, text_mode, skip_amount):
        # security check: refuse paths that escape our directory
        path = self.dir.joinpath(filename).resolve()
        if not str(path).startswith(str(self.dir.resolve())):
            return upload.send_error('Forbidden')
        # open
        tw = upload.text_writer()
        with open(path) as f:
            # skip the first skip_amount lines
            for i in range(skip_amount):
                f.readline()
            # bulk upload the rest
            shutil.copyfileobj(f, tw)
```
However, note that [copyfileobj](https://docs.python.org/3/library/shutil.html#shutil.copyfileobj) does not handle cancellations as described above.
#### Security considerations
If your handler accesses the file system or the network, it is critical to validate the file name you are given carefully. Otherwise, an attacker can take over the server or the connection to the server and cause great damage.
The code sample above also includes an example of validating file system paths.
Similar considerations apply to text inserted into network URLs and other resource identifiers.
### Result set batch size
When a query produces a large result set, pymonetdb will often only retrieve part of the result set, retrieving the rest later, one batch at a time.
The default behavior is to start with a reasonably small batch size but increase it rapidly. However, if necessary, the application can configure this behavior. In the table below, you can see the settings controlling the behavior of large transfers.
| Setting name | Defined by | Range | Default |
| --- | --- | --- | --- |
| replysize | pymonetdb | positive integer or -1 [*] | 100 |
| maxprefetch | pymonetdb | positive integer or -1 [*] | 2500 |
| arraysize | DBAPI 2.0 | positive integer | Connection.replysize |
[*] The value -1 means unlimited.
The replysize and maxprefetch settings can be set as attributes of both Connection and Cursor. They can also be passed as parameters in the connection URL. The arraysize setting only exists for Cursor. It defaults to the replysize of the connection when the cursor was created if that is positive, or 100 otherwise.
#### Batching behavior
When MonetDB has finished executing a query, the server includes the first rows of the result set in its response to Cursor.execute(). The exact number of rows it includes can be configured using the replysize setting.
How the rest of the rows are retrieved depends on how they are accessed.
Cursor.fetchone() and Cursor.fetchmany() retrieve the remaining rows in batches of increasing size. Every batch is twice as large as the previous one until the prefetch limit maxprefetch has been reached. This setting controls the maximum number of fetched rows that are not immediately used.
For instance, with replysize = 100, the first 100 fetchone() calls immediately return the next row from the cache. For the 101st fetchone(), pymonetdb will first double the replysize and retrieve rows 101-300 before returning row 101. When Cursor.fetchmany() is used, pymonetdb also adjusts the replysize to the requested stride. For example, with fetchmany(40), the first two calls return rows from the cache, but for the third call, pymonetdb will first retrieve rows 101-320, i.e. double the replysize and enlarge it to end at a multiple of 40, before returning rows 81-120.
With Cursor.fetchall(), all rows are retrieved at once.
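The resulting prefetch schedule can be illustrated with a small stand-alone simulation. This is an approximation of the doubling behavior described above, not pymonetdb's internal code:

```python
def batch_sizes(total_rows, replysize=100, maxprefetch=2500):
    """Approximate the transfer sizes seen while a result set of
    total_rows rows is consumed with fetchone(): the first batch is
    replysize rows, and every following batch doubles in size until
    maxprefetch caps it."""
    sizes = [min(replysize, total_rows)]
    fetched = sizes[0]
    batch = replysize
    while fetched < total_rows:
        batch = min(2 * batch, maxprefetch)
        n = min(batch, total_rows - fetched)
        sizes.append(n)
        fetched += n
    return sizes

# With the defaults, a 1000-row result set arrives in four transfers.
print(batch_sizes(1000))   # [100, 200, 400, 300]
```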
#### New result set format[¶](#new-result-set-format)
Version Jun2023 of MonetDB introduces a new,
binary result set format that is much more efficient to parse. The initial transfer of replysize rows still uses the existing text-based format;
however, the subsequent batches can be transferred much more efficiently with the binary format. By default, pymonetdb will automatically use it when possible unless configured otherwise using the binary setting, e.g.
pymonetdb.connect('demo', binary=0) or pymonetdb.connect('mapi:monetdb://localhost/demo?binary=0').
Normally, the binary result set transfer is transparent to the user applications. The result set fetching functions automatically do the necessary data conversion. However, if you want to know explicitly if the binary format has been used, you can use Cursor.used_binary_protocol(), e.g. after having called a fetch function.
We have implemented a special case to benefit from the binary protocol even when the replysize is set to -1. When pymonetdb knows that binary transfers are possible (e.g., learned while connecting to MonetDB) while replysize is
-1, it overrides the replysize: pymonetdb will use a small size for the initial transfer and then retrieve the rest of the result set in one large binary batch.
#### Tweaking the behavior[¶](#tweaking-the-behavior)
Usually, the batching behavior does not need to be tweaked.
When deciding which function to use to fetch the result sets,
Cursor.fetchmany() seems to be a few percent more efficient than Cursor.fetchall(), while Cursor.fetchone() tends to be 10-15% slower.
To reduce the amount of prefetched data, set maxprefetch to a lower value or even 0. The value 0 disables prefetching entirely, only fetching the requested rows. Setting maxprefetch to -1 has the opposite effect: it allows the prefetch size to grow without bound.
If you expect the size of the individual rows to be huge, consider setting both replysize and maxprefetch to small values, for example, 10 and 20,
respectively, or even 1 and 0. These small batch sizes limit the memory each batch consumes. As a quick rule of thumb for the memory requirements, one can assume that pymonetdb may need up to three times the size of the result set.
Also, remember that if MonetDB is running on the same host, the server will also need at least that amount of memory.
Generally, one does not need to make replysize larger than the default because it will grow rapidly. Furthermore, with the newer versions of MonetDB and pymonetdb, it is better to keep the size of the initial response small to transfer more data in the binary format.
#### Arraysize[¶](#arraysize)
The batching behavior of pymonetdb is governed mainly by replysize and maxprefetch, but the Python DBAPI also specifies the setting [arraysize](https://peps.python.org/pep-0249/#arraysize).
The relationship between these three is as follows:
1. The replysize and maxprefetch settings are specific to pymonetdb,
while arraysize comes from the Python DBAPI.
2. The DBAPI only uses arraysize as the default value for fetchmany() and says that it may influence the efficiency of fetchall(). It does not mention arraysize anywhere else.
3. In pymonetdb, the batching behavior is only influenced by arraysize if fetchmany() is used without an explicit size because then arraysize is used as the default size, and fetchmany() tries to round the batches to this size. It has no effect on fetchall() because that always fetches everything at once.
4. The DBAPI says that the default value for the arraysize of a newly created cursor is 1. Pymonetdb deviates from that, similar to, for example,
[python-oracledb](https://python-oracledb.readthedocs.io/en/latest/api_manual/cursor.html#Cursor.arraysize). Pymonetdb uses the replysize of the connection instead.
If replysize is not a positive integer, the default is 100.
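That default rule can be written out as a one-line function. This is a restatement of point 4 above, not pymonetdb's own code:

```python
def initial_arraysize(connection_replysize: int) -> int:
    """Default arraysize of a newly created cursor, following the rule
    above: the connection's replysize if positive, otherwise 100."""
    return connection_replysize if connection_replysize > 0 else 100

print(initial_arraysize(250))  # 250
print(initial_arraysize(-1))   # 100
```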
In general, all this means that arraysize needs no tweaking.
### API[¶](#api)
#### Basic SQL usage[¶](#basic-sql-usage)
`pymonetdb.``connect`(**args*, ***kwargs*) → pymonetdb.sql.connections.Connection[¶](#pymonetdb.connect)
Set up a connection to a MonetDB SQL database.
database (str)
name of the database, or MAPI URI (see below)
hostname (str)
hostname where MonetDB is running
port (int)
port to connect to (default: 50000)
username (str)
username for connection (default: “monetdb”)
password (str)
password for connection (default: “monetdb”)
unix_socket (str)
socket to connect to; used when hostname is not set (default: “/tmp/.s.monetdb.50000”)
autocommit (bool)
enable/disable auto commit (default: false)
connect_timeout (int)
the socket timeout while connecting
binary (int)
enable binary result sets when possible if > 0 (default: 1)
replysize (int)
number of rows to retrieve immediately after query execution (default: 100, -1 means everything)
maxprefetch (int)
max. number of rows to prefetch during Cursor.fetchone() or Cursor.fetchmany()
**MAPI URI Syntax**:

tcp socket: mapi:monetdb://[<username>[:<password>]@]<host>[:<port>]/<database>

unix domain socket: mapi:monetdb:///[<username>[:<password>]@]path/to/socket?database=<database>

*class* `pymonetdb.sql.connections.``Connection`(*database*, *hostname=None*, *port=50000*, *username='monetdb'*, *password='monetdb'*, *unix_socket=None*, *autocommit=False*, *host=None*, *user=None*, *connect_timeout=-1*, *binary=1*, *replysize=None*, *maxprefetch=None*)[¶](#pymonetdb.sql.connections.Connection)
Bases: `object`
A MonetDB SQL database connection
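A minimal usage sketch, assuming a running local MonetDB instance; the database name and credentials are placeholders:

```python
import pymonetdb

# Connect to a local database called 'demo' (placeholder values).
connection = pymonetdb.connect(username='monetdb', password='monetdb',
                               hostname='localhost', database='demo')

cursor = connection.cursor()
cursor.execute('SELECT 1, 2, 3')
print(cursor.fetchall())   # [(1, 2, 3)]

connection.commit()
connection.close()
```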
*exception* `DataError`[¶](#pymonetdb.sql.connections.Connection.DataError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised for errors that are due to problems with the processed data like division by zero, numeric value out of range, etc. It must be a subclass of DatabaseError.
*exception* `DatabaseError`[¶](#pymonetdb.sql.connections.Connection.DatabaseError)
Bases: [`pymonetdb.exceptions.Error`](#pymonetdb.exceptions.Error)
Exception raised for errors that are related to the database. It must be a subclass of Error.
*exception* `Error`[¶](#pymonetdb.sql.connections.Connection.Error)
Bases: `Exception`
Exception that is the base class of all other error exceptions. You can use this to catch all errors with one single ‘except’ statement. Warnings are not considered errors and thus should not use this class as base. It must be a subclass of the Python StandardError (defined in the module exceptions).
*exception* `IntegrityError`[¶](#pymonetdb.sql.connections.Connection.IntegrityError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised when the relational integrity of the database is affected, e.g. a foreign key check fails. It must be a subclass of DatabaseError.
*exception* `InterfaceError`[¶](#pymonetdb.sql.connections.Connection.InterfaceError)
Bases: [`pymonetdb.exceptions.Error`](#pymonetdb.exceptions.Error)
Exception raised for errors that are related to the database interface rather than the database itself. It must be a subclass of Error.
*exception* `InternalError`[¶](#pymonetdb.sql.connections.Connection.InternalError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised when the database encounters an internal error, e.g. the cursor is not valid anymore, the transaction is out of sync, etc. It must be a subclass of DatabaseError.
*exception* `NotSupportedError`[¶](#pymonetdb.sql.connections.Connection.NotSupportedError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised in case a method or database API was used which is not supported by the database, e.g. requesting a .rollback() on a connection that does not support transaction or has transactions turned off. It must be a subclass of DatabaseError.
*exception* `OperationalError`[¶](#pymonetdb.sql.connections.Connection.OperationalError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised for errors that are related to the database’s operation and not necessarily under the control of the programmer, e.g. an unexpected disconnect occurs,
the data source name is not found, a transaction could not be processed, a memory allocation error occurred during processing, etc. It must be a subclass of DatabaseError.
*exception* `ProgrammingError`[¶](#pymonetdb.sql.connections.Connection.ProgrammingError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised for programming errors, e.g. table not found or already exists, syntax error in the SQL statement, wrong number of parameters specified, etc. It must be a subclass of DatabaseError.
*exception* `Warning`[¶](#pymonetdb.sql.connections.Connection.Warning)
Bases: `Exception`
Exception raised for important warnings like data truncations while inserting, etc. It must be a subclass of the Python StandardError (defined in the module exceptions).
`binary`[¶](#pymonetdb.sql.connections.Connection.binary)
`binary_command`(*command*)[¶](#pymonetdb.sql.connections.Connection.binary_command)
use this function to send low level mapi commands that return raw bytes
`close`()[¶](#pymonetdb.sql.connections.Connection.close)
Close the connection.
The connection will be unusable from this point forward; an Error exception will be raised if any operation is attempted with the connection. The same applies to all cursor objects trying to use the connection. Note that closing a connection without committing the changes first will cause an implicit rollback to be performed.
`command`(*command*)[¶](#pymonetdb.sql.connections.Connection.command)
use this function to send low level mapi commands
`commit`()[¶](#pymonetdb.sql.connections.Connection.commit)
Commit any pending transaction to the database. Note that if the database supports an auto-commit feature, this must be initially off. An interface method may be provided to turn it back on.
Database modules that do not support transactions should implement this method with void functionality.
`cursor`()[¶](#pymonetdb.sql.connections.Connection.cursor)
Return a new Cursor Object using the connection. If the database does not provide a direct cursor concept, the module will have to emulate cursors using other means to the extent needed by this specification.
`default_cursor`[¶](#pymonetdb.sql.connections.Connection.default_cursor)
alias of [`pymonetdb.sql.cursors.Cursor`](#pymonetdb.sql.cursors.Cursor)
`execute`(*query*)[¶](#pymonetdb.sql.connections.Connection.execute)
use this for executing SQL queries
`get_binary`() → int[¶](#pymonetdb.sql.connections.Connection.get_binary)
`get_maxprefetch`() → int[¶](#pymonetdb.sql.connections.Connection.get_maxprefetch)
`get_replysize`() → int[¶](#pymonetdb.sql.connections.Connection.get_replysize)
`gettimeout`()[¶](#pymonetdb.sql.connections.Connection.gettimeout)
get the amount of time before a connection times out
`maxprefetch`[¶](#pymonetdb.sql.connections.Connection.maxprefetch)
`replysize`[¶](#pymonetdb.sql.connections.Connection.replysize)
`rollback`()[¶](#pymonetdb.sql.connections.Connection.rollback)
This method is optional since not all databases provide transaction support.
In case a database does provide transactions this method causes the database to roll back to the start of any pending transaction. Closing a connection without committing the changes first will cause an implicit rollback to be performed.
`set_autocommit`(*autocommit*)[¶](#pymonetdb.sql.connections.Connection.set_autocommit)
Set auto commit on or off. ‘autocommit’ must be a boolean
`set_binary`(*binary: int*)[¶](#pymonetdb.sql.connections.Connection.set_binary)
`set_downloader`(*downloader*)[¶](#pymonetdb.sql.connections.Connection.set_downloader)
Register a Downloader object which will handle file download requests.
Must be an instance of class pymonetdb.Downloader or None
`set_maxprefetch`(*maxprefetch: int*)[¶](#pymonetdb.sql.connections.Connection.set_maxprefetch)
`set_replysize`(*replysize: int*)[¶](#pymonetdb.sql.connections.Connection.set_replysize)
`set_sizeheader`(*sizeheader*)[¶](#pymonetdb.sql.connections.Connection.set_sizeheader)
Set sizeheader on or off. When enabled, MonetDB will return the size of a type. ‘sizeheader’ must be a boolean.
`set_timezone`(*seconds_east_of_utc*)[¶](#pymonetdb.sql.connections.Connection.set_timezone)
`set_uploader`(*uploader*)[¶](#pymonetdb.sql.connections.Connection.set_uploader)
Register an Uploader object which will handle file upload requests.
Must be an instance of class pymonetdb.Uploader or None.
`settimeout`(*timeout*)[¶](#pymonetdb.sql.connections.Connection.settimeout)
set the amount of time before a connection times out
*class* `pymonetdb.sql.cursors.``Cursor`(*connection: pymonetdb.sql.connections.Connection*)[¶](#pymonetdb.sql.cursors.Cursor)
Bases: `object`
This object represents a database cursor, which is used to manage the context of a fetch operation. Cursors created from the same connection are not isolated, i.e., any changes done to the database by a cursor are immediately visible to the other cursors.
`arraysize` *= None*[¶](#pymonetdb.sql.cursors.Cursor.arraysize)
Default value for the size parameter of [`fetchmany()`](#pymonetdb.sql.cursors.Cursor.fetchmany).
`binary`[¶](#pymonetdb.sql.cursors.Cursor.binary)
`close`()[¶](#pymonetdb.sql.cursors.Cursor.close)
Close the cursor now (rather than whenever __del__ is called). The cursor will be unusable from this point forward; an Error (or subclass) exception will be raised if any operation is attempted with the cursor.
`debug`(*query*, *fname*, *sample=-1*)[¶](#pymonetdb.sql.cursors.Cursor.debug)
Locally debug a given Python UDF function in a SQL query using the PDB debugger. Optionally can run on only a sample of the input data, for faster data export.
`execute`(*operation: str*, *parameters: Optional[Dict[KT*, *VT]] = None*)[¶](#pymonetdb.sql.cursors.Cursor.execute)
Prepare and execute a database operation (query or command). Parameters may be provided as mapping and will be bound to variables in the operation.
`executemany`(*operation*, *seq_of_parameters*)[¶](#pymonetdb.sql.cursors.Cursor.executemany)
Prepare a database operation (query or command) and then execute it against all parameter sequences or mappings found in the sequence seq_of_parameters.
It will return the number of rows affected.
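A sketch of a bulk insert with executemany(); it assumes an already open connection object `connection` and uses pymonetdb's mapping-style parameter binding:

```python
# Hypothetical table and data, for illustration only.
cursor = connection.cursor()
cursor.execute("CREATE TABLE person (name VARCHAR(50), age INT)")

people = [
    {'name': 'Alice', 'age': 30},
    {'name': 'Bob', 'age': 25},
]
affected = cursor.executemany(
    "INSERT INTO person VALUES (%(name)s, %(age)s)", people)
print(affected)  # total number of rows affected (2 here)
```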
`export`(*query*, *fname*, *sample=-1*, *filespath='./'*)[¶](#pymonetdb.sql.cursors.Cursor.export)
`fetchall`()[¶](#pymonetdb.sql.cursors.Cursor.fetchall)
Fetch all remaining rows of a query result, returning them as a sequence of sequences (e.g. a list of tuples).
A `ProgrammingError` is raised if the previous call to .execute*() did not produce any result set or no call was issued yet.
`fetchmany`(*size=None*)[¶](#pymonetdb.sql.cursors.Cursor.fetchmany)
Fetch the next set of rows of a query result, returning a sequence of sequences (e.g. a list of tuples). An empty sequence is returned when no more rows are available.
The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor’s arraysize determines the number of rows to be fetched.
A `ProgrammingError` is raised if the previous call to .execute*() did not produce any result set or no call was issued yet.
`fetchone`()[¶](#pymonetdb.sql.cursors.Cursor.fetchone)
Fetch the next row of a query result set, returning a single sequence, or None when no more data is available.
`get_binary`() → int[¶](#pymonetdb.sql.cursors.Cursor.get_binary)
`get_maxprefetch`() → int[¶](#pymonetdb.sql.cursors.Cursor.get_maxprefetch)
`get_replysize`() → int[¶](#pymonetdb.sql.cursors.Cursor.get_replysize)
`maxprefetch`[¶](#pymonetdb.sql.cursors.Cursor.maxprefetch)
`next`()[¶](#pymonetdb.sql.cursors.Cursor.next)
`replysize`[¶](#pymonetdb.sql.cursors.Cursor.replysize)
`scroll`(*value*, *mode='relative'*)[¶](#pymonetdb.sql.cursors.Cursor.scroll)
Scroll the cursor in the result set to a new position according to mode.
If mode is ‘relative’ (default), value is taken as offset to the current position in the result set, if set to ‘absolute’,
value states an absolute target position.
An IndexError is raised in case a scroll operation would leave the result set.
`set_binary`(*level: int*)[¶](#pymonetdb.sql.cursors.Cursor.set_binary)
`set_maxprefetch`(*maxprefetch: int*)[¶](#pymonetdb.sql.cursors.Cursor.set_maxprefetch)
`set_replysize`(*replysize: int*)[¶](#pymonetdb.sql.cursors.Cursor.set_replysize)
`setinputsizes`(*sizes*)[¶](#pymonetdb.sql.cursors.Cursor.setinputsizes)
This method would be used before the .execute*() method is invoked to reserve memory. This implementation doesn’t use this.
`setoutputsize`(*size*, *column=None*)[¶](#pymonetdb.sql.cursors.Cursor.setoutputsize)
Set a column buffer size for fetches of large columns. This implementation doesn’t use this.
`used_binary_protocol`() → bool[¶](#pymonetdb.sql.cursors.Cursor.used_binary_protocol)
Pymonetdb-specific. Return True if the last fetch{one,many,all}
for the current statement made use of the binary protocol.
Primarily used for testing.
Note that the binary protocol is never used for the first few rows of a result set. Exactly when it kicks in depends on the replysize setting.
#### Type conversion[¶](#module-pymonetdb.sql.monetize)
Functions for converting Python objects to MonetDB SQL format. If you want to add support for a specific type, add a function as a value to the mapping dict, with the datatype as key.
`pymonetdb.sql.monetize.``convert`(*data*)[¶](#pymonetdb.sql.monetize.convert)
Return the appropriate conversion function based upon the Python type.
`pymonetdb.sql.monetize.``monet_bool`(*data*)[¶](#pymonetdb.sql.monetize.monet_bool)
returns “true” or “false”
`pymonetdb.sql.monetize.``monet_bytes`(*data*)[¶](#pymonetdb.sql.monetize.monet_bytes)
converts bytes to string
`pymonetdb.sql.monetize.``monet_date`(*data*)[¶](#pymonetdb.sql.monetize.monet_date)
returns a casted date
`pymonetdb.sql.monetize.``monet_datetime`(*data*)[¶](#pymonetdb.sql.monetize.monet_datetime)
returns a casted timestamp
`pymonetdb.sql.monetize.``monet_escape`(*data*)[¶](#pymonetdb.sql.monetize.monet_escape)
returns an escaped string
`pymonetdb.sql.monetize.``monet_none`(*_*)[¶](#pymonetdb.sql.monetize.monet_none)
returns a NULL string
`pymonetdb.sql.monetize.``monet_time`(*data*)[¶](#pymonetdb.sql.monetize.monet_time)
returns a casted time
`pymonetdb.sql.monetize.``monet_timedelta`(*data*)[¶](#pymonetdb.sql.monetize.monet_timedelta)
returns timedelta casted to interval seconds
`pymonetdb.sql.monetize.``monet_unicode`(*data*)[¶](#pymonetdb.sql.monetize.monet_unicode)
Functions for converting MonetDB SQL fields to Python objects.
`pymonetdb.sql.pythonize.``Binary`(*data*)[¶](#pymonetdb.sql.pythonize.Binary)
Convert data to a wrapped binary object.
`pymonetdb.sql.pythonize.``DateFromTicks`(*ticks*)[¶](#pymonetdb.sql.pythonize.DateFromTicks)
Convert ticks to python Date
`pymonetdb.sql.pythonize.``TimeFromTicks`(*ticks*)[¶](#pymonetdb.sql.pythonize.TimeFromTicks)
Convert ticks to python Time
`pymonetdb.sql.pythonize.``TimeTzFromTicks`(*ticks*)[¶](#pymonetdb.sql.pythonize.TimeTzFromTicks)
Convert ticks to python Time
`pymonetdb.sql.pythonize.``TimestampFromTicks`(*ticks*)[¶](#pymonetdb.sql.pythonize.TimestampFromTicks)
Convert ticks to python Timestamp
`pymonetdb.sql.pythonize.``TimestampTzFromTicks`(*ticks*)[¶](#pymonetdb.sql.pythonize.TimestampTzFromTicks)
Convert ticks to python Timestamp
`pymonetdb.sql.pythonize.``convert`(*data*, *type_code*)[¶](#pymonetdb.sql.pythonize.convert)
Calls the appropriate conversion function based upon the type code.
`pymonetdb.sql.pythonize.``oid`(*data*)[¶](#pymonetdb.sql.pythonize.oid)
represents an object identifier
For now we will just return the string representation just like mclient does.
`pymonetdb.sql.pythonize.``py_bool`(*data*)[¶](#pymonetdb.sql.pythonize.py_bool)
return python boolean
`pymonetdb.sql.pythonize.``py_bytes`(*data: str*)[¶](#pymonetdb.sql.pythonize.py_bytes)
Returns a bytes (py3) or string (py2) object representing the input blob.
`pymonetdb.sql.pythonize.``py_date`(*data*)[¶](#pymonetdb.sql.pythonize.py_date)
Returns a python Date
`pymonetdb.sql.pythonize.``py_day_interval`(*data: str*) → int[¶](#pymonetdb.sql.pythonize.py_day_interval)
Returns a python number of days where data represents a value of MonetDB’s INTERVAL DAY type which resembles a stringified decimal.
`pymonetdb.sql.pythonize.``py_sec_interval`(*data: str*) → datetime.timedelta[¶](#pymonetdb.sql.pythonize.py_sec_interval)
Returns a python TimeDelta where data represents a value of MonetDB’s INTERVAL SECOND type which resembles a stringified decimal.
`pymonetdb.sql.pythonize.``py_time`(*data*)[¶](#pymonetdb.sql.pythonize.py_time)
returns a python Time
`pymonetdb.sql.pythonize.``py_timestamp`(*data*)[¶](#pymonetdb.sql.pythonize.py_timestamp)
Returns a python Timestamp
`pymonetdb.sql.pythonize.``py_timestamptz`(*data*)[¶](#pymonetdb.sql.pythonize.py_timestamptz)
Returns a python Timestamp where data contains a tz code
`pymonetdb.sql.pythonize.``py_timetz`(*data*)[¶](#pymonetdb.sql.pythonize.py_timetz)
returns a python Time where data contains a tz code
`pymonetdb.sql.pythonize.``strip`(*data*)[¶](#pymonetdb.sql.pythonize.strip)
returns a python string, with quotes chopped off and escape characters replaced
#### MAPI[¶](#module-pymonetdb.mapi)
This is the python implementation of the mapi protocol.
*class* `pymonetdb.mapi.``Connection`[¶](#pymonetdb.mapi.Connection)
Bases: `object`
MAPI (low level MonetDB API) connection
`binary_cmd`(*operation: str*) → memoryview[¶](#pymonetdb.mapi.Connection.binary_cmd)
put a mapi command on the line, with a binary response.
returns a memoryview that can only be used until the next operation on this Connection object.
`cmd`(*operation: str*)[¶](#pymonetdb.mapi.Connection.cmd)
put a mapi command on the line
`connect`(*database: str, username: str, password: str, language: str, hostname: Optional[str] = None, port: Optional[int] = None, unix_socket=None, connect_timeout=-1, handshake_options_callback: Callable[[bool], List[HandshakeOption]] = <function Connection.<lambda>>*)[¶](#pymonetdb.mapi.Connection.connect)
setup connection to MAPI server
unix_socket is used if hostname is not defined.
`disconnect`()[¶](#pymonetdb.mapi.Connection.disconnect)
disconnect from the monetdb server
`set_downloader`(*downloader: Downloader*)[¶](#pymonetdb.mapi.Connection.set_downloader)
Register the given Downloader, or None to deregister
`set_reply_size`(*size*)[¶](#pymonetdb.mapi.Connection.set_reply_size)
Set the amount of rows returned by the server.
args:
size: The number of rows
`set_uploader`(*uploader: Uploader*)[¶](#pymonetdb.mapi.Connection.set_uploader)
Register the given Uploader, or None to deregister
*class* `pymonetdb.mapi.``HandshakeOption`(*level*, *name*, *fallback*, *value*)[¶](#pymonetdb.mapi.HandshakeOption)
Bases: `object`
Option that can be set during the MAPI handshake
Should be sent as <name>=<val>, where <val> is value converted to int.
The level is used to determine if the server supports this option.
The fallback is a function-like object that can be called with the value (not converted to an integer) as a parameter.
The field ‘sent’ can be used to keep track of whether the option has been sent.
`pymonetdb.mapi.``handle_error`(*error*)[¶](#pymonetdb.mapi.handle_error)
Return exception matching error code.
args:
error (str): error string, potentially containing mapi error code returns:
tuple (Exception, formatted error): returns OperationalError if unknown error or no error code in string
`pymonetdb.mapi.``mapi_url_options`(*possible_mapi_url: str*) → Dict[str, str][¶](#pymonetdb.mapi.mapi_url_options)
Try to parse the argument as a MAPI URL and return a Dict of url options
Return empty dict if it’s not a MAPI URL.
#### File Uploads and Downloads[¶](#file-uploads-and-downloads)
Classes related to file transfer requests as used by COPY INTO ON CLIENT.
*class* `pymonetdb.filetransfer.``Upload`(*mapi: MapiConnection*)[¶](#pymonetdb.filetransfer.Upload)
Represents a request from the server to upload data to the server. It is passed to the Uploader registered by the application, which for example might retrieve the data from a file on the client system. See pymonetdb.sql.connections.Connection.set_uploader().
Use the method send_error() to refuse the upload, binary_writer() to get a binary file object to write to, or text_writer() to get a text-mode file object to write to.
Implementations should be VERY CAREFUL to validate the file name before opening any files on the client system!
`is_cancelled`() → bool[¶](#pymonetdb.filetransfer.Upload.is_cancelled)
Returns true if the server has cancelled the upload.
`has_been_used`() → bool[¶](#pymonetdb.filetransfer.Upload.has_been_used)
Returns true if .send_error(), .text_writer() or .binary_writer() have been called.
`set_chunk_size`(*size: int*)[¶](#pymonetdb.filetransfer.Upload.set_chunk_size)
After every CHUNK_SIZE bytes, the server gets the opportunity to cancel the rest of the upload. Defaults to 1 MiB.
`send_error`(*message: str*) → None[¶](#pymonetdb.filetransfer.Upload.send_error)
Tell the server the requested upload has been refused
`binary_writer`() → io.BufferedIOBase[¶](#pymonetdb.filetransfer.Upload.binary_writer)
Returns a binary file-like object. All data written to it is uploaded to the server.
`text_writer`() → io.TextIOBase[¶](#pymonetdb.filetransfer.Upload.text_writer)
Returns a text-mode file-like object. All text written to it is uploaded to the server. DOS/Windows style line endings (CR LF, \r \n) are automatically rewritten to single \n’s.
`close`()[¶](#pymonetdb.filetransfer.Upload.close)
End the upload successfully
*class* `pymonetdb.filetransfer.``Uploader`[¶](#pymonetdb.filetransfer.Uploader)
Base class for upload hooks. Instances of subclasses of this class can be registered using pymonetdb.Connection.set_uploader(). Every time an upload request is received, an Upload object is created and passed to this object’s
.handle_upload() method.
If the server cancels the upload halfway, the .cancel() method is called and all further data written is ignored.
`handle_upload`(*upload: pymonetdb.filetransfer.uploads.Upload*, *filename: str*, *text_mode: bool*, *skip_amount: int*)[¶](#pymonetdb.filetransfer.Uploader.handle_upload)
Called when an upload request is received. Implementations should either send an error using upload.send_error(), or request a writer using upload.text_writer() or upload.binary_writer(). All data written to the writer will be sent to the server.
Parameter ‘filename’ is the file name used in the COPY INTO statement.
Parameter ‘text_mode’ indicates whether the server requested a text file or a binary file. In case of a text file, ‘skip_amount’ indicates the number of lines to skip. In binary mode, ‘skip_amount’ is always 0.
SECURITY NOTE! Make sure to carefully validate the file name before opening files on the file system. Otherwise, if an adversary has taken control of the network connection or of the server, they can use file upload requests to read arbitrary files from your computer, e.g. through ../../ path traversal.
`cancel`()[¶](#pymonetdb.filetransfer.Uploader.cancel)
Optional method called when the server cancels the upload.
*class* `pymonetdb.filetransfer.``Download`(*mapi: pymonetdb.mapi.Connection*)[¶](#pymonetdb.filetransfer.Download)
Represents a request from the server to download data from the server. It is passed to the Downloader registered by the application, which for example might write the data to a file on the client system. See pymonetdb.Connection.set_downloader().
Use the method send_error() to refuse the download, binary_reader() to get a binary file object to read bytes from, or text_reader() to get a text-mode file object to read text from.
Implementations should be EXTREMELY CAREFUL to validate the file name before opening and writing to any files on the client system!
`send_error`(*message: str*) → None[¶](#pymonetdb.filetransfer.Download.send_error)
Tell the server the requested download is refused
`binary_reader`()[¶](#pymonetdb.filetransfer.Download.binary_reader)
Returns a binary file-like object to read the downloaded data from.
`text_reader`()[¶](#pymonetdb.filetransfer.Download.text_reader)
Returns a text mode file-like object to read the downloaded data from.
`close`()[¶](#pymonetdb.filetransfer.Download.close)
End the download successfully. Any unconsumed data will be discarded.
*class* `pymonetdb.filetransfer.``Downloader`[¶](#pymonetdb.filetransfer.Downloader)
Base class for download hooks. Instances of subclasses of this class can be registered using pymonetdb.Connection.set_downloader(). Every time a download request arrives, a Download object is created and passed to this object’s .handle_download() method.
SECURITY NOTE! Make sure to carefully validate the file name before opening files on the file system. Otherwise, if an adversary has taken control of the network connection or of the server, they can use download requests to OVERWRITE ARBITRARY FILES on your computer
`handle_download`(*download: pymonetdb.filetransfer.downloads.Download*, *filename: str*, *text_mode: bool*)[¶](#pymonetdb.filetransfer.Downloader.handle_download)
Called when a download request is received. Implementations should either send an error using download.send_error(), or request a reader using download.text_reader() or download.binary_reader().
Parameter ‘filename’ is the file name used in the COPY INTO statement.
Parameter ‘text_mode’ indicates whether the server requested text or binary mode.
SECURITY NOTE! Make sure to carefully validate the file name before opening files on the file system. Otherwise, if an adversary has taken control of the network connection or of the server, they can use file download requests to overwrite arbitrary files on your computer, e.g. through ../../ path traversal.
*class* `pymonetdb.filetransfer.``SafeDirectoryHandler`(*dir*, *encoding: Optional[str] = None*, *newline: Optional[str] = None*, *compression=True*)[¶](#pymonetdb.filetransfer.SafeDirectoryHandler)
File transfer handler which uploads and downloads files from a given directory, taking care not to allow access to files outside that directory.
Instances of this class can be registered using the pymonetdb.Connection’s set_uploader() and set_downloader() methods.
When downloading text files, the downloaded text is converted according to the encoding and newline parameters, if present. Valid values for encoding are any encoding known to Python, or None. Valid values for newline are “\n”, “\r\n” or None. None means to use the system default.
For binary up- and downloads, no conversions are applied.
When uploading text files, the encoding parameter indicates how the text is read and newline is mostly ignored: both \n and \r\n are valid line endings. The exception is that because the server expects its input to be \n-terminated UTF-8 text, if you set encoding to “utf-8”
and newline to “\n”, text mode transfers are performed as binary, which improves performance. For uploads, only do this if you are absolutely,
positively sure that all files in the directory are actually valid UTF-8 encoded and have Unix line endings.
If compression is set to True, which is the default, the SafeDirectoryHandler will automatically compress and decompress files with extensions .gz, .bz2, .xz and .lz4. Note that the first three algorithms are built into Python, but LZ4 only works if the lz4.frame module is available.
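A sketch of registering a SafeDirectoryHandler for COPY INTO ... ON CLIENT transfers; the database name `demo`, the directory `datafiles`, and the table `mytable` are placeholders, and a running MonetDB server is assumed:

```python
import pymonetdb
from pymonetdb.filetransfer import SafeDirectoryHandler

connection = pymonetdb.connect('demo')

# Serve uploads and downloads from ./datafiles only; file names are
# resolved inside that directory, never outside it.
handler = SafeDirectoryHandler('datafiles', encoding='utf-8', newline='\n')
connection.set_uploader(handler)
connection.set_downloader(handler)

cursor = connection.cursor()
cursor.execute("COPY INTO mytable FROM 'data.csv' ON CLIENT")
```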
`handle_upload`(*upload: pymonetdb.filetransfer.uploads.Upload*, *filename: str*, *text_mode: bool*, *skip_amount: int*)[¶](#pymonetdb.filetransfer.SafeDirectoryHandler.handle_upload)
| Meta private: | |
`handle_download`(*download: pymonetdb.filetransfer.downloads.Download*, *filename: str*, *text_mode: bool*)[¶](#pymonetdb.filetransfer.SafeDirectoryHandler.handle_download)
Called when a download request is received. Implementations should either send an error using download.send_error(), or request a reader using download.text_reader() or download.binary_reader().
Parameter ‘filename’ is the file name used in the COPY INTO statement.
Parameter ‘text_mode’ indicates whether the server requested text or binary mode.
SECURITY NOTE! Make sure to carefully validate the file name before opening files on the file system. Otherwise, if an adversary has taken control of the network connection or of the server, they can use file download requests to overwrite arbitrary files on your computer.
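The kind of check SafeDirectoryHandler performs can be approximated in a few lines. This is an illustrative re-implementation of the idea, not the class’s actual code:

```python
import os

def is_inside(directory: str, filename: str) -> bool:
    """Reject file names that escape the given directory, e.g. via '..'
    components or absolute paths (the attack the note above warns about)."""
    base = os.path.realpath(directory)
    target = os.path.realpath(os.path.join(base, filename))
    return target == base or target.startswith(base + os.sep)

assert is_inside("/tmp/transfers", "data.csv")
assert not is_inside("/tmp/transfers", "../etc/passwd")
assert not is_inside("/tmp/transfers", "/etc/passwd")
```

Resolving both paths with realpath before comparing them also defuses symlink tricks, which a naive string comparison of the raw filename would miss.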
#### MonetDB remote control[¶](#module-pymonetdb.control)
*class* `pymonetdb.control.``Control`(*hostname=None*, *port=50000*, *passphrase=None*, *unix_socket=None*, *connect_timeout=-1*)[¶](#pymonetdb.control.Control)
Bases: `object`
Use this module to manage your MonetDB databases. You can create, start,
stop, lock, unlock, destroy your databases and request status information.
`create`(*database_name*)[¶](#pymonetdb.control.Control.create)
Initialises a new database or multiplex-funnel in the MonetDB Server. A database created with this command is available for use, but in maintenance mode (see the lock command).
`defaults`()[¶](#pymonetdb.control.Control.defaults)
`destroy`(*database_name*)[¶](#pymonetdb.control.Control.destroy)
Removes the given database, including all its data and logfiles. Once destroy has completed, all data is lost.
Be careful when using this command.
`get`(*database_name*)[¶](#pymonetdb.control.Control.get)
Gets the value of a property for the given database, or retrieves all properties for the given database.
`inherit`(*database_name*, *property_*)[¶](#pymonetdb.control.Control.inherit)
Unsets a property for the given database, reverting it to the value inherited from the default configuration.
`kill`(*database_name*)[¶](#pymonetdb.control.Control.kill)
Kills the given database, if the MonetDB Database Server is running. Note: killing a database should only be done as last resort to stop a database. A database being killed may end up with data loss.
`lock`(*database_name*)[¶](#pymonetdb.control.Control.lock)
Puts the given database in maintenance mode. A database under maintenance can only be connected to by the DBA.
A database which is under maintenance is not started automatically. Use the “release” command to bring the database back for normal usage.
`neighbours`()[¶](#pymonetdb.control.Control.neighbours)
`release`(*database_name*)[¶](#pymonetdb.control.Control.release)
Brings back a database from maintenance mode. A released database is available again for normal use. Use the
“lock” command to take a database under maintenance.
`rename`(*old*, *new*)[¶](#pymonetdb.control.Control.rename)
`set`(*database_name*, *property_*, *value*)[¶](#pymonetdb.control.Control.set)
Sets a property to a value for the given database. For a list of properties, use get all.
`start`(*database_name*)[¶](#pymonetdb.control.Control.start)
Starts the given database, if the MonetDB Database Server is running.
`status`(*database_name=False*)[¶](#pymonetdb.control.Control.status)
Shows the state of the databases matching the given glob-style pattern, or of all known databases if none is given. Long and crash modes control what information is displayed instead of the normal mode.
`stop`(*database_name*)[¶](#pymonetdb.control.Control.stop)
Stops the given database, if the MonetDB Database Server is running.
`pymonetdb.control.``isempty`(*result*)[¶](#pymonetdb.control.isempty)
Raises an exception if the result is not empty.
`pymonetdb.control.``parse_statusline`(*line*)[¶](#pymonetdb.control.parse_statusline)
Parses a sabdb-format status line. Supports v1 and v2.
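A hedged sketch of putting a database into maintenance and back, wrapped in a small helper so the sequence is explicit. The hostname, port and passphrase below are illustrative values matching the test-database setup described later in this manual; a running MonetDB daemon with remote control enabled is required for the real calls.

```python
def maintenance_cycle(control, database_name):
    """Lock the database (maintenance mode, DBA-only), then release it,
    returning the actions performed in order."""
    actions = []
    control.lock(database_name)      # only the DBA can connect now
    actions.append("lock")
    control.release(database_name)   # available again for normal use
    actions.append("release")
    return actions

def make_control():
    # Deferred import: requires pymonetdb and a reachable monetdbd
    # configured with control=yes and the given passphrase.
    import pymonetdb
    return pymonetdb.control.Control(hostname="localhost", port=50000,
                                     passphrase="testdb")

# Example against a live daemon:
#     maintenance_cycle(make_control(), "demo")
```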
#### pymonetdb Exceptions[¶](#module-pymonetdb.exceptions)
MonetDB Python API specific exceptions
*exception* `pymonetdb.exceptions.``DataError`[¶](#pymonetdb.exceptions.DataError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised for errors that are due to problems with the processed data like division by zero, numeric value out of range, etc. It must be a subclass of DatabaseError.
*exception* `pymonetdb.exceptions.``DatabaseError`[¶](#pymonetdb.exceptions.DatabaseError)
Bases: [`pymonetdb.exceptions.Error`](#pymonetdb.exceptions.Error)
Exception raised for errors that are related to the database. It must be a subclass of Error.
*exception* `pymonetdb.exceptions.``Error`[¶](#pymonetdb.exceptions.Error)
Bases: `Exception`
Exception that is the base class of all other error exceptions. You can use this to catch all errors with one single ‘except’ statement. Warnings are not considered errors and thus should not use this class as base. It must be a subclass of the Python StandardError (defined in the module exceptions).
*exception* `pymonetdb.exceptions.``IntegrityError`[¶](#pymonetdb.exceptions.IntegrityError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised when the relational integrity of the database is affected, e.g. a foreign key check fails. It must be a subclass of DatabaseError.
*exception* `pymonetdb.exceptions.``InterfaceError`[¶](#pymonetdb.exceptions.InterfaceError)
Bases: [`pymonetdb.exceptions.Error`](#pymonetdb.exceptions.Error)
Exception raised for errors that are related to the database interface rather than the database itself. It must be a subclass of Error.
*exception* `pymonetdb.exceptions.``InternalError`[¶](#pymonetdb.exceptions.InternalError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised when the database encounters an internal error, e.g. the cursor is not valid anymore, the transaction is out of sync, etc. It must be a subclass of DatabaseError.
*exception* `pymonetdb.exceptions.``NotSupportedError`[¶](#pymonetdb.exceptions.NotSupportedError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised in case a method or database API was used which is not supported by the database, e.g. requesting a .rollback() on a connection that does not support transaction or has transactions turned off. It must be a subclass of DatabaseError.
*exception* `pymonetdb.exceptions.``OperationalError`[¶](#pymonetdb.exceptions.OperationalError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised for errors that are related to the database’s operation and not necessarily under the control of the programmer, e.g. an unexpected disconnect occurs,
the data source name is not found, a transaction could not be processed, a memory allocation error occurred during processing, etc. It must be a subclass of DatabaseError.
*exception* `pymonetdb.exceptions.``ProgrammingError`[¶](#pymonetdb.exceptions.ProgrammingError)
Bases: [`pymonetdb.exceptions.DatabaseError`](#pymonetdb.exceptions.DatabaseError)
Exception raised for programming errors, e.g. table not found or already exists, syntax error in the SQL statement, wrong number of parameters specified, etc. It must be a subclass of DatabaseError.
*exception* `pymonetdb.exceptions.``Warning`[¶](#pymonetdb.exceptions.Warning)
Bases: `Exception`
Exception raised for important warnings like data truncations while inserting, etc. It must be a subclass of the Python StandardError (defined in the module exceptions).
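The hierarchy above means a single `except` on the right base class catches a whole family of errors. The following self-contained mirror of the relationships described here demonstrates the structure (the real classes live in pymonetdb.exceptions; this sketch only reproduces their inheritance):

```python
class Error(Exception): pass
class InterfaceError(Error): pass
class DatabaseError(Error): pass
class DataError(DatabaseError): pass
class OperationalError(DatabaseError): pass
class IntegrityError(DatabaseError): pass
class InternalError(DatabaseError): pass
class ProgrammingError(DatabaseError): pass
class NotSupportedError(DatabaseError): pass
class Warning(Exception): pass  # warnings are deliberately NOT Errors

# One handler for every database-side problem:
try:
    raise IntegrityError("foreign key check failed")
except DatabaseError as e:
    caught = type(e).__name__

assert caught == "IntegrityError"
assert issubclass(DataError, Error)
assert not issubclass(Warning, Error)
```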
### Development[¶](#development)
#### Github[¶](#github)
We maintain pymonetdb on [GitHub](https://github.com/gijzelaerr/pymonetdb).
If you have problems with pymonetdb, please raise an issue in the
[issue tracker](https://github.com/gijzelaerr/pymonetdb/issues). Even better is if you have a solution to the problem! In that case, you can make our lives easier by following these steps:
> * Fork our repository on GitHub
> * Add tests that will fail because of the problem
> * Fix the problem
> * Run the test suite again
> * Commit all changes to your repository
> * Issue a GitHub pull request.
Also, we try to be PEP 8 compliant where possible and reasonable.
#### Test suite[¶](#test-suite)
pymonetdb comes with a test suite to verify that the code works and make development easier.
##### Prepare test databases[¶](#prepare-test-databases)
Most tests use an existing MonetDB database that you must prepare beforehand.
By default they try to connect to a database named “demo” but this can be configured otherwise, see below.
Some of the tests rely on a running MonetDB daemon, to test creating and destroying new databases. This daemon also needs to be prepared beforehand, and configured to allow control connections.
Alternatively, you may disable the control tests by setting the environment variable TSTCONTROL=off.
The commands below assume an environment without any running MonetDB processes.
Create a test database farm, e.g. “/tmp/pymonetdbtest”, and the “demo”
database:
```
$ monetdbd create /tmp/pymonetdbtest
$ monetdbd start /tmp/pymonetdbtest
$ monetdb create demo
$ monetdb release demo
```
If you want to run the control tests (in tests/test_control.py), you need to set a passphrase and enable remote control:
```
$ monetdbd set control=yes /tmp/pymonetdbtest
$ monetdbd set passphrase=testdb /tmp/pymonetdbtest
$ monetdbd stop /tmp/pymonetdbtest
$ monetdbd start /tmp/pymonetdbtest
```
**Note 1:** Test databases created by test_control.py are cleaned up after the control tests have finished. However, the demo database and the MonetDB daemon itself are neither stopped nor destroyed.
**Note 2:** The above commands are also in the file tests/initdb.sh. Once the database farm has been created, you can use that script to do the remaining work:
```
$ tests/initdb.sh demo /tmp/pymonetdbtest
```
**WARNING:** initdb.sh will destroy the given database (demo in this example) *WITHOUT* asking for confirmation!
##### Run tests[¶](#run-tests)
There are many ways to run the tests.
Below we list several often-used commands.
The commands should be run in the root directory of the pymonetdb source directory.
* With Python unittest:
```
$ python -m unittest # to run all tests
$ python -m unittest -f # to run all tests but stop after the first failure
$ python -m unittest -v # to run all tests and get information about individual tests
$ python -m unittest -v tests.test_policy # to run all tests of the module "tests.test_policy"
$ python -m unittest -v -k test_fetch # to run the sub-test set "test_fetch*"
```
* With pytest:
```
$ pytest # to run all tests
$ pytest -v # to run all tests and get information about individual tests
$ pytest -v tests/test_oid.py # to run one test file
```
* With make:
```
$ make test
```
Note: make test creates a venv in which it installs and runs pytest. If you get the error “Could not install packages due to an OSError: [Errno 39]
Directory not empty: ‘_internal’”, it is probably because your pymonetdb source is in a Vagrant shared folder. A simple workaround is to move your pymonetdb source to a local folder on your VM. See also [vagrant](https://github.com/hashicorp/vagrant/issues/12057).
* With tox:
```
$ pip install tox; tox
```
Note: if your Python version is not listed in the envlist in the tox.ini file, you must add it.
##### Environment variables[¶](#environment-variables)
Several environment variables are defined in tests/util.py.
Many of them are self-explanatory.
Here we just highlight a few:
* TSTDB is the name of the preexisting database used by most of the tests.
TSTHOSTNAME, TSTUSERNAME, TSTPASSWORD and MAPIPORT control the other connection parameters. Note that for historical reasons it is MAPIPORT, not TSTPORT.
* TSTPASSPHRASE is the Merovingian passphrase you must set to run the control test (see [Prepare test databases](#prepare-test-databases) above).
* Some tests are skipped unless you set TSTFULL to true, e.g.:
```
$ TSTFULL=true python3 -m unittest -v tests/test_control.py
```
* TSTCONTROL is used to control the tests in test_control.py. The default tcp,local means run the tests over both TCP/IP (e.g. on port 50000) and the Unix domain socket (e.g. “/tmp/s.merovingian.50000”). When you run MonetDB in, e.g., a Docker container, you can turn off the tests over the Unix socket using TSTCONTROL=tcp. If you want to turn off all Merovingian tests, you can use TSTCONTROL=off (actually, any string other than “tcp” and “local” will do):
```
$ TSTFULL=true TSTCONTROL=tcp python3 -m unittest -v tests/test_control.py
```
* TSTREPLYSIZE, TSTMAXPREFETCH and TSTBINARY control the size and format of the result set transfer (see [Result set batch size](index.html#batch-size)). Check out the tests in test_policy.py for examples of implemented data transfer policies and how setting the variables replysize, maxprefetch and binary affects those policies.
Indices and tables[¶](#indices-and-tables)
---
* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html)
Accent API Reference
===
Modules
---
[Accent.Case](Accent.Case.html)
[Accent.Case.Camel](Accent.Case.Camel.html)
Converts the given binary or atom to `camelCase`.
[Accent.Case.Pascal](Accent.Case.Pascal.html)
Converts the given binary or atom to `PascalCase`.
[Accent.Case.Snake](Accent.Case.Snake.html)
Converts the given binary or atom to `snake_case`.
[Accent.Plug.Request](Accent.Plug.Request.html)
Transforms the keys of an HTTP request's params to the same or a different case.
[Accent.Plug.Response](Accent.Plug.Response.html)
Transforms the keys of an HTTP response to the case requested by the client.
Accent.Case behaviour
===
Summary
===
[Functions](#functions)
---
[convert(map, transformer)](#convert/2)
Convert the keys of a list based on the provided transformer.
[Callbacks](#callbacks)
---
[call(arg1)](#c:call/1)
Converts a string or atom to the same or different case.
[Link to this section](#functions)
Functions
===
[Link to this section](#callbacks)
Callbacks
===
Accent.Case.Camel
===
Converts the given binary or atom to `camelCase`.
Summary
===
[Functions](#functions)
---
[call(atom)](#call/1)
Callback implementation for [`Accent.Case.call/1`](Accent.Case.html#c:call/1).
[Link to this section](#functions)
Functions
===
Accent.Case.Pascal
===
Converts the given binary or atom to `PascalCase`.
Summary
===
[Functions](#functions)
---
[call(atom)](#call/1)
Callback implementation for [`Accent.Case.call/1`](Accent.Case.html#c:call/1).
[Link to this section](#functions)
Functions
===
Accent.Case.Snake
===
Converts the given binary or atom to `snake_case`.
Summary
===
[Functions](#functions)
---
[call(atom)](#call/1)
Callback implementation for [`Accent.Case.call/1`](Accent.Case.html#c:call/1).
[Link to this section](#functions)
Functions
===
Accent.Plug.Request
===
Transforms the keys of an HTTP request's params to the same or a different case.
You can specify what case to convert keys to by passing in a `:transformer`
option.
Accent supports the following transformers out of the box:
* [`Accent.Case.Camel`](Accent.Case.Camel.html) (e.g. `camelCase`)
* [`Accent.Case.Pascal`](Accent.Case.Pascal.html) (e.g. `PascalCase`)
* [`Accent.Case.Snake`](Accent.Case.Snake.html) (e.g. `snake_case`)
If no transformer is provided then [`Accent.Case.Snake`](Accent.Case.Snake.html) will be used.
Please note that this plug will need to be executed after the request has been parsed.
Example
---
```
plug Plug.Parsers, parsers: [:urlencoded, :multipart, :json],
pass: ["*/*"],
json_decoder: Jason
plug Accent.Plug.Request, case: Accent.Case.Camel
```
Accent.Plug.Response
===
Transforms the keys of an HTTP response to the case requested by the client.
A client can request what case the keys are formatted in by passing the case as a header in the request. By default the header key is `Accent`. If the client does not request a case or requests an unsupported case then a default case defined by `:default_case` will be used. If no default case is provided then no conversion will happen. By default the supported cases are `camel`,
`pascal` and `snake`.
Options
---
* `:default_case` - module used to case the response when the client does not request a case or requests an unsupported case. When not provided then no conversion will happen for the above scenarios. Defaults to `nil`.
* `:header` - the HTTP header used to determine the case to convert the response body to before sending the response (default: `Accent`)
* `:json_codec` - module used to encode and decode JSON. The module is expected to define `decode/1` and `encode!/1` functions (required).
* `:supported_cases` - map that defines what cases a client can request. By default `camel`, `pascal` and `snake` are supported.
Examples
---
```
plug Accent.Plug.Response, default_case: Accent.Case.Snake,
header: "x-accent",
supported_cases: %{"pascal" => Accent.Case.Pascal},
json_codec: Jason
```
Semi-Automatic Classification Plugin
Documentation
Release 5.3.6.1
<NAME>
Apr 13, 2018
Contents
2.1 Installation in Windows 32 bit ..... 3
2.2 Installation in Windows 64 bit ..... 5
2.3 Installation in Ubuntu Linux ..... 8
2.4 Installation in Debian Linux ..... 12
2.5 Installation in Mac OS ..... 16
3.1 SCP menu ..... 21
3.2 SCP Tools ..... 22
3.3 Working toolbar ..... 23
3.4 SCP dock ..... 25
3.5 Main Interface Window ..... 35
3.6 Spectral Signature Plot ..... 107
3.7 Scatter Plot ..... 112
3.8 SCP Edit Toolbar ..... 116
4.1 Basic Definitions ..... 118
4.2 Supervised Classification Definitions ..... 126
4.3 Image conversion to reflectance ..... 136
4.4 Conversion to Temperature ..... 139
4.5 References ..... 141
5.1 Tutorial 1 ..... 143
5.2 Tutorial 2 ..... 153
5.3 NASA ARSET Webinar ..... 170
6.1 Tutorial: Land Cover Signature Classification ..... 173
6.2 Tutorial: Estimation of Land Surface Temperature with Landsat and ASTER ..... 182
7.1 Installation in VirtualBox ..... 203
8.1 Plugin installation ..... 208
8.2 Pre processing ..... 210
8.3 Processing ..... 212
8.4 Warnings ..... 213
8.5 Errors ..... 213
8.6 Various ..... 216
CHAPTER 1
Introduction Developed by <NAME>, the Semi-Automatic Classification Plugin (SCP) is a free open source plugin for QGIS that allows for the semi-automatic classification (also known as supervised classification) of remote sensing images. It provides several tools for the download of free images, the preprocessing, the postprocessing, and the raster calculation (please see What can I do with the SCP? (page 216)).
The overall objective of SCP is to provide a set of intertwined tools for raster processing in order to make an automatic workflow and ease the land cover classification, which could be performed also by people whose main field is not remote sensing. The first version of the SCP was written by <NAME> in 2012 for the ACC Dar Project in order to create a tool for the classification of land cover in an affordable and automatic fashion (read this working paper). Following versions of SCP were developed as personal commitment to the remote sensing field and open source software. SCP version 5 (codename: Kourou) is developed in the frame of <NAME>’s PhD in Landscape and Environment at Sapienza University of Rome.
http://www.youtube.com/watch?v=K2mIa66e6h0
This user manual provides information about the Plugin Installation (page 3) of SCP and The Interface of SCP (page 21), with detailed information about all the functions. In addition, the Brief Introduction to Remote Sensing (page 117) illustrates the basic concepts and definitions which are required for using the SCP.
Basic Tutorials (page 143) are available for learning the main functions of SCP and Thematic Tutorials (page 173)
illustrate specific tools.
You are kindly invited to contribute to SCP (see How to contribute to SCP (page 217)) and join the Facebook group or the Google+ Community. Several thousand people have already joined and posted hundreds of questions and comments. Also, please read the Frequently Asked Questions (page 207).
For more information and tutorials visit the official site
From GIS to Remote Sensing How to cite:
<NAME> (2016). Semi-Automatic Classification Plugin Documentation. DOI: http://dx.doi.org/10.13140/RG.2.2.29474.02242/1
License:
Except where otherwise noted, content of this work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Semi-Automatic Classification Plugin is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 3 of the License. Semi-Automatic Classification Plugin is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with Semi-Automatic Classification Plugin. If not, see http://www.gnu.org/licenses/.
CHAPTER 2
Plugin Installation The Semi-Automatic Classification Plugin requires the installation of GDAL, OGR, NumPy, SciPy and Matplotlib
(already bundled with QGIS).
This chapter describes the installation of the Semi-Automatic Classification Plugin for the supported Operating Systems.
2.1 Installation in Windows 32 bit
2.1.1 QGIS download and installation
• Download the latest QGIS version 32 bit from here (the direct download of QGIS 2.8 from this link);
• Execute the QGIS installer with administrative rights, accepting the default configuration.
Now, QGIS 2 is installed.
2.1.2 Semi-Automatic Classification Plugin installation
• Run QGIS 2;
• From the main menu, select Plugins > Manage and Install Plugins;
• From the menu All, select the Semi-Automatic Classification Plugin and click the button Install plugin;
TIP: in case of issues, or if an offline installation is required, see How to install the plugin manually? (page 208) and How to install the plugin from the official SCP repository? (page 208).
• The SCP should be automatically activated; however, be sure that the Semi-Automatic Classification Plugin is checked in the menu Installed (a restart of QGIS could be necessary to complete the SCP installation);
2.1.3 Configuration of the plugin
Now, the Semi-Automatic Classification Plugin is installed and a dock and a toolbar should be added to QGIS. Also, a SCP menu is available in the Menu Bar of QGIS. It is possible to move the SCP Tools (page 22) and the dock according to your needs, as in the following image.
2.2 Installation in Windows 64 bit
2.2.1 QGIS download and installation
• Download the latest QGIS version 64 bit from here (the direct download of QGIS 2.8 from this link);
• Execute the QGIS installer with administrative rights, accepting the default configuration.
Now, QGIS 2 is installed.
2.2.2 Semi-Automatic Classification Plugin installation
• Run QGIS 2;
• From the main menu, select Plugins > Manage and Install Plugins;
• From the menu All, select the Semi-Automatic Classification Plugin and click the button Install plugin;
TIP: in case of issues, or if an offline installation is required, see How to install the plugin manually? (page 208) and How to install the plugin from the official SCP repository? (page 208).
• The SCP should be automatically activated; however, be sure that the Semi-Automatic Classification Plugin is checked in the menu Installed (a restart of QGIS could be necessary to complete the SCP installation);
2.2.3 Configuration of the plugin
Now, the Semi-Automatic Classification Plugin is installed and a dock and a toolbar should be added to QGIS. Also, a SCP menu is available in the Menu Bar of QGIS. It is possible to move the SCP Tools (page 22) and the dock according to your needs, as in the following image.
The configuration of available RAM is recommended in order to reduce the processing time. From the SCP menu (page 21) select Settings > Processing.
In the Settings (page 102), set the Available RAM (MB) to a value that should be half of the system RAM. For instance, if your system has 2GB of RAM, set the value to 1024MB.
2.3 Installation in Ubuntu Linux
2.3.1 QGIS download and installation
• Open a terminal and type:
sudo apt-get update
• Press Enter and type the user password;
• Type in a terminal:
sudo apt-get install qgis python-matplotlib python-scipy
• Press Enter and wait until the software is downloaded and installed.
Now, QGIS 2 is installed.
2.3.2 Semi-Automatic Classification Plugin installation
• Run QGIS 2;
• From the main menu, select Plugins > Manage and Install Plugins;
• From the menu All, select the Semi-Automatic Classification Plugin and click the button Install plugin;
TIP: in case of issues, or if an offline installation is required, see How to install the plugin manually? (page 208) and How to install the plugin from the official SCP repository? (page 208).
• The SCP should be automatically activated; however, be sure that the Semi-Automatic Classification Plugin is checked in the menu Installed (a restart of QGIS could be necessary to complete the SCP installation);
2.3.3 Configuration of the plugin
Now, the Semi-Automatic Classification Plugin is installed and a dock and a toolbar should be added to QGIS. Also, a SCP menu is available in the Menu Bar of QGIS. It is possible to move the SCP Tools (page 22) and the dock according to your needs, as in the following image.
The configuration of available RAM is recommended in order to reduce the processing time. From the SCP menu (page 21) select Settings > Processing.
In the Settings (page 102), set the Available RAM (MB) to a value that should be half of the system RAM. For instance, if your system has 2GB of RAM, set the value to 1024MB.
2.4 Installation in Debian Linux
2.4.1 QGIS download and installation
• Open a terminal and type:
sudo apt-get update
• Press Enter and type the user password;
• Type in a terminal:
sudo apt-get install qgis python-matplotlib python-scipy
• Press Enter and wait until the software is downloaded and installed.
Now, QGIS 2 is installed.
2.4.2 Semi-Automatic Classification Plugin installation
• Run QGIS 2;
• From the main menu, select Plugins > Manage and Install Plugins;
• From the menu All, select the Semi-Automatic Classification Plugin and click the button Install plugin;
TIP: in case of issues, or if an offline installation is required, see How to install the plugin manually? (page 208) and How to install the plugin from the official SCP repository? (page 208).
• The SCP should be automatically activated; however, be sure that the Semi-Automatic Classification Plugin is checked in the menu Installed (a restart of QGIS could be necessary to complete the SCP installation);
2.4.3 Configuration of the plugin
Now, the Semi-Automatic Classification Plugin is installed and a dock and a toolbar should be added to QGIS. Also, a SCP menu is available in the Menu Bar of QGIS. It is possible to move the SCP Tools (page 22) and the dock according to your needs, as in the following image.
The configuration of available RAM is recommended in order to reduce the processing time. From the SCP menu (page 21) select Settings > Processing.
In the Settings (page 102), set the Available RAM (MB) to a value that should be half of the system RAM. For instance, if your system has 2GB of RAM, set the value to 1024MB.
2.5 Installation in Mac OS
2.5.1 QGIS download and installation
• Download and install the latest version of QGIS and GDAL from here.
• In addition, download and install the python modules Numpy, Scipy, and Matplotlib from this link.
Now, QGIS 2 is installed.
2.5.2 Semi-Automatic Classification Plugin installation
• Run QGIS 2;
• From the main menu, select Plugins > Manage and Install Plugins;
• From the menu All, select the Semi-Automatic Classification Plugin and click the button Install plugin;
TIP: in case of issues, or if an offline installation is required, see How to install the plugin manually? (page 208) and How to install the plugin from the official SCP repository? (page 208).
• The SCP should be automatically activated; however, be sure that the Semi-Automatic Classification Plugin is checked in the menu Installed (a restart of QGIS could be necessary to complete the SCP installation);
2.5.3 Configuration of the plugin
Now, the Semi-Automatic Classification Plugin is installed and a dock and a toolbar should be added to QGIS. Also, a SCP menu is available in the Menu Bar of QGIS. It is possible to move the SCP Tools (page 22) and the dock according to your needs, as in the following image.
The configuration of available RAM is recommended in order to reduce the processing time. From the SCP menu (page 21) select Settings > Processing.
In the Settings (page 102), set the Available RAM (MB) to a value that should be half of the system RAM. For instance, if your system has 2GB of RAM, set the value to 1024MB.
CHAPTER 3
The Interface of SCP
3.1 SCP menu
Fig. 3.1: SCP menu
The SCP menu allows for the selection of the main functions of the Main Interface Window (page 35), the Spectral Signature Plot (page 107), and the Scatter Plot (page 112).
• Band set (page 96);
• Download images (page 37);
• Tools (page 51);
• Preprocessing (page 64);
• Postprocessing (page 78);
• Band calc (page 92);
• Spectral Signature Plot (page 107);
• Scatter Plot (page 112);
• Batch (page 98);
• Settings (page 102);
• User manual: open the online user manual in a web browser;
• Online help: open the Online help in a web browser; also, a Facebook group and a Google+ Community are available for sharing information and asking for help about SCP;
• Show plugin: show all the SCP toolbars and dock if previously hidden;
3.2 SCP Tools
Fig. 3.2: SCP Tools
The toolbar SCP Tools allows for the selection of the main functions of the Main Interface Window (page 35), the Spectral Signature Plot (page 107), and the Scatter Plot (page 112).
• Band set (page 96);
• Download images (page 37);
• Tools (page 51);
• Preprocessing (page 64);
• Postprocessing (page 78);
• Band calc (page 92);
• Spectral Signature Plot (page 107);
• Scatter Plot (page 112);
• Batch (page 98);
• Settings (page 102);
• User manual: open the online user manual in a web browser;
• Online help: open the Online help in a web browser; also, a Facebook group and a Google+ Community are available for sharing information and asking for help about SCP;
3.3 Working toolbar
Fig. 3.3: Working toolbar
The Working toolbar allows for displaying the Input image (page 28), creating temporary ROIs and classification previews.
The functions are described in detail in the following paragraphs, using these conventions:
= Input date
[I] = Input text
= List
= Input number
= Optional
= Configuration stored in the active project of QGIS
= Configuration stored in QGIS registry
= Slider
= Table

3.3.1 Image control
• : show the Main Interface Window (page 35);
• : zoom the map to the extent of Input image (page 28);
• RGB= : use the button to show/hide the Input image (page 28) in the map; from the list
select a Color Composite (page 123) that is applied to the Input image (page 28); new color composites can
be entered typing the band numbers separated by - or ; or , (e.g. RGB = 4-3-2 or RGB = 4;3;2 or RGB =
4,3,2);
• : display the input image stretching the minimum and maximum values according to cumulative count
of current map extent;
• : display the input image stretching the minimum and maximum values according to standard deviation
of current map extent;
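The color composite syntax described above (band numbers separated by -, ; or ,) can be illustrated with a small parser. This is a hypothetical helper written for this manual, not part of SCP's code:

```python
import re

def parse_composite(text):
    # Split a color composite string such as "4-3-2", "4;3;2" or "4,3,2"
    # into three band numbers; the separators -, ; and , are interchangeable.
    bands = [int(p) for p in re.split(r"[-;,]", text) if p.strip()]
    if len(bands) != 3:
        raise ValueError("a color composite needs exactly three band numbers")
    return bands

parse_composite("4-3-2")   # [4, 3, 2]
```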
3.3.2 Temporary ROI

A temporary ROI is a temporary polygon displayed in the map, which can be saved permanently in the Training input (page 28). A temporary ROI can be drawn manually or using a Region Growing Algorithm (page 126).
• : zoom the map to the extent of temporary ROI;
• ROI: use the button to show/hide the temporary ROI and the Training input in the map;
• : activate the pointer to create a temporary ROI by drawing a polygon in the map; left click on the map
to define the ROI vertices and right click to define the last vertex closing the polygon; press the keyboard
button CTRL to add a multipart polygon; press the keyboard buttons CTRL + Z for removing the last
multipart polygon;
• : activate the pointer to create a temporary ROI using the region growing algorithm; left click on the
map for creating the ROI; right click on the map for displaying the spectral signature of a pixel of the
Input image (page 28) in the Spectral Signature Plot (page 107); press the keyboard button CTRL to add a
multipart polygon (new parts are not created if overlapping to other parts); press the keyboard buttons CTRL
+ Z for removing the last multipart polygon;
• : create a temporary ROI using the region growing algorithm at the same seed pixel as the previous
one; it is useful after changing the region growing parameters;
Region growing parameters: the following parameters are required for the ROI creation using a region growing algorithm:
• Dist : set the interval which defines the maximum spectral distance between the seed pixel
and the surrounding pixels (in radiometry unit);
• Min : set the minimum area of a ROI (in pixel unit); this setting overrides the Range
radius until the minimum ROI size is reached; if Rapid ROI on band is checked, then the ROI
will have at least the size defined by Min ROI size; if Rapid ROI on band is unchecked, then
the ROI could have a size smaller than Min ROI size;
• Max : set the maximum width of a ROI (i.e. the side length of a square, centred at the seed
pixel, which inscribes the ROI) in pixel unit;
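The interplay of the Dist and Max parameters can be sketched with a toy single-band region growing routine. This is an illustrative sketch, not SCP's implementation: the Min ROI size override and the multi-band intersection are omitted, and all names are hypothetical.

```python
from collections import deque

def region_grow(image, seed, dist, max_width):
    # image: 2D list of single-band pixel values; seed: (row, col).
    # A pixel joins the region if it is 4-connected to it, its value is
    # within `dist` of the seed value (spectral distance), and it lies
    # inside the square of side `max_width` centred at the seed.
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    half = max_width // 2
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and abs(nr - sr) <= half and abs(nc - sc) <= half
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - image[sr][sc]) <= dist):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

For example, growing from pixel (0, 0) of a small grid collects only the spectrally similar neighbours and stops at the bright pixels.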
3.3.3 Classification preview

Classification preview allows for displaying temporary classifications (i.e. classification previews). Classification previews are useful for testing the algorithm in a small area of the Input image (page 28), before classifying the entire image which can be time consuming (see Classification output (page 34)).
Classification preview is performed according to the parameters defined in Classification algorithm (page 33).
In addition to the classification raster, an Algorithm raster (page 133) can be displayed, which is useful for assessing the distance of a pixel classified as class X from the corresponding spectral signature X. In Classification previews, black pixels are distant from the corresponding spectral signature (i.e. probably a new ROI, or spectral signature, should be collected in that area) and white pixels are closer to the corresponding spectral signature (i.e. probably the spectral signature identifies correctly those pixels).
After the creation of a new preview, old previews are placed in QGIS Layers inside a layer group named Class_temp_group (a custom name can be defined in Temporary group name (page 104)) and are deleted when the QGIS session is closed.
WARNING: Classification previews are automatically deleted from disk when the QGIS session is
closed; a QGIS message (that can be ignored) could ask for the path of missing layers when opening
a previously saved project.
• : zoom the map to the extent of the last Classification preview (page 24);
• Preview: use the button to show/hide the last Classification preview (page 24) in the map;
• : activate the pointer for the creation of a Classification preview (page 24); left click the map to start
the classification process and display the classification preview; right click to start the classification process
and show the Algorithm raster (page 133) of the preview;
• : create a new Classification preview (page 24) centred at the same pixel as the previous one;
• T : change dynamically the classification preview transparency, which is useful for comparing the
classification to other layers;
• S : size of the preview in pixel unit (i.e. the side length of a square, centred at the clicked pixel);
• : remove from QGIS the classification previews that are archived in the Class_temp_group;
3.4 SCP dock
• SCP input (page 28)
– Input image (page 28)
– Training input (page 28)
– SCP news (page 29)
• Classification dock (page 29)
– ROI Signature list (page 29)
– ROI creation (page 30)
– Macroclasses (page 32)
– Classification algorithm (page 33)
– Classification output (page 34)
The SCP dock allows for the definition of inputs, the creation of ROIs (Regions Of Interest) and spectral signatures,
and the classification of an input image.
The Input image (page 28), to be classified, can be a multi-band raster or a set of single bands defined in the Band set (page 96).
The Training input (page 28), created with SCP, stores the ROI polygons and spectral signatures used for the land cover classification of the Input image (page 28).
ROIs are polygons used for the definition of the spectral characteristics of land cover classes. SCP allows for the creation of temporary ROI polygons using a region growing algorithm or drawn manually. Using the region growing algorithm the image is segmented around a seed pixel including spectrally homogeneous pixels. Temporary ROI polygons can be saved in the Training input (page 28) along with the spectral signatures of the ROI. It is worth pointing out that classification is always based on spectral signatures.

In SCP, land cover classes (and ROIs) are defined with a system of Classes (Class ID) and Macroclasses (Macroclass ID) (see Classes and Macroclasses (page 127)) that are used for the classification process; each Macroclass ID is related to a Macroclass Information (e.g. macroclass name) and each Class ID is related to a Class Information (e.g. class name), but only Macroclass ID and Class ID are used for the classification process.
Training input is composed of a vector part (i.e. a shapefile) and a spectral signature part which are independent.
The attribute table of the vector contains four fields as in the following table.
Training input fields

Description             Field name  Field type
Macroclass ID           MC_ID       int
Macroclass Information  MC_info     string
Class ID                C_ID        int
Class Information       C_info      string

Spectral signatures of classes are calculated from the ROIs and saved in the Training input (page 28). In addition, spectral signatures can be imported from other sources (see Import signatures (page 53)).
The use of the Macroclass ID or Class ID for classifications is defined with the option Use MC ID or C ID in the Classification algorithm (page 33). It is worth highlighting that when using Macroclass ID all the spectral signatures are evaluated separately and each pixel is classified with the corresponding MC ID (i.e. there is no combination of signatures before the classification).
The classification can be performed for the entire image ( Classification output (page 34) ) or a part of it, creating a Classification preview (page 24).
The functions are described in detail in the following paragraphs, using these conventions:
= Input date
= Input text
= List
= Input number
= Optional
= Configuration stored in the active project of QGIS
= Configuration stored in QGIS registry
= Slider
= Table
Fig. 3.4: SCP input
3.4.1 SCP input

Input image

This section allows for the selection of the image to be classified. Raster files must be already loaded in QGIS.
Input image can be a multi-band raster or a set of single bands defined in the Band set (page 96). If a multi-band raster is selected, raster bands are listed in the Band set (page 96).
• : open one or more raster files and add them to the Band set (page 96);
• : open the Band set (page 96);
• Input image : select the input image from a list of multi-spectral images loaded in QGIS; if the
Band set (page 96) is defined, then this list will contain the item << band set >>;
• : refresh layer list;
Training input

The training input is a file .scp created in SCP (i.e. a zip file containing a shapefile and an xml file) used for storing ROIs and spectral signatures.
Warning: Signature list files saved with previous versions of SCP are not compatible with SCP 5;
however you can import a ROI shapefile using the tool Import shapefile (page 55).
ROIs and spectral signatures are displayed in the ROI Signature list (page 29). ROIs and spectral signatures can be imported from other sources (see Import signatures (page 53)) and exported (see Export signatures (page 57)).
ROIs are displayed in QGIS as a vector file (in order to prevent data loss, you should not edit this layer using QGIS functions).
• : open a training input file; ROIs and spectral signatures are loaded in ROI Signature list (page 29); the
vector part of the training input is loaded in QGIS;
• : create an empty training input file (.scp); the vector part of the training input is loaded in QGIS;
also a backup file is created (a file .scp.backup in the same directory as the file .scp) when the training
input file is saved;
• Training input : it displays the path to the training input file;
• : open the Download images (page 37);
• : open the Tools (page 51);
• : open the Preprocessing (page 64);
• : open the Postprocessing (page 78);
• : open the Band calc (page 92);
• : open the Settings (page 102);
• : open the online user manual in a web browser;
• : open the Online help in a web browser; also, a Facebook group and a Google+ Community are
available for sharing information and asking for help about SCP;
SCP news

This section displays news about the SCP and related services. News is downloaded on startup (internet connection required). It can be enabled or disabled in the settings Dock (page 104).
3.4.2 Classification dock

The Classification dock is designed to manage the spectral signatures and classify the Input image (page 28).
ROI Signature list
Fig. 3.5: ROI Signature list

The ROI Signature list displays the ROI polygons and spectral signatures contained in the Training input (page 28).
If an item is a ROI polygon, double click the item to zoom to that ROI in the map. Items in the table can be highlighted with the mouse left click.
Changes in the ROI Signature list are applied to the file Training input (page 28) only when the QGIS project is saved. ROIs can be edited, deleted and merged from this table.
WARNING: In order to avoid data loss, do not edit the vector Training input using the QGIS tools.
Use only the tools of SCP for managing the Training input.
• ROI Signature list:
– S: selection checkbox; only the spectral signatures checked in this list are used for the classification process;
– Type: type of the item:
* R = only ROI polygon;
* S = only spectral signature;
* B = both ROI and spectral signature;
– MC ID: ROI Macroclass ID [int]; it can be edited with a single click; MC Info is displayed in
Macroclasses (page 32); if the ID of a spectral signature is set to 0, then pixels belonging to this
signature are labelled as unclassified;
– C ID: ROI Class ID [int]; it can be edited with a single click;
– C Info: ROI Class Information [text]; it can be edited with a single click;
– Color: C ID color; double click to select a color for the class that is used in the classification;
if the ID of a spectral signature is set to 0, then pixels belonging to this signature are labelled as
unclassified;
• : delete highlighted ROIs and signatures;
• : merge highlighted spectral signatures or ROIs obtaining a new signature calculated as the average of
signature values for each band (covariance matrix is excluded);
• : calculate spectral signatures of highlighted ROIs;
• : show the ROI spectral signature in the Spectral Signature Plot (page 107); spectral signature is
calculated from the Input image (page 28);
• : open the Scatter Plot (page 112);
• : open the tab Export signatures (page 57);
• : open the tab Import signatures (page 53);
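The merge behaviour described above (the new signature is the band-by-band average of the merged signatures, covariance matrix excluded) can be sketched as follows. This is a hypothetical helper with made-up values, not SCP's code:

```python
def merge_signatures(signatures):
    # Each signature is a list of mean values, one per band; the merged
    # signature is the band-by-band average (covariance matrix excluded).
    n_bands = len(signatures[0])
    if any(len(s) != n_bands for s in signatures):
        raise ValueError("all signatures must cover the same bands")
    return [sum(s[b] for s in signatures) / len(signatures)
            for b in range(n_bands)]

merge_signatures([[10, 20, 30], [30, 40, 50]])   # [20.0, 30.0, 40.0]
```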
ROI creation

ROI creation is complementary to the Working toolbar (page 23) and allows for saving ROIs to the Training input (page 28) defining classes and macroclasses. A Band set (page 96) must be defined before the ROI creation, and ROI polygons must be inside the area of the Band set.
• MC ID : ROI Macroclass ID [int]; the corresponding MC Info is loaded if already defined in
Macroclasses (page 32);
• MC Info : ROI Macroclass information [text]; style and information for macroclasses are defined
in Macroclasses (page 32);
• C ID : ROI Class ID [int];
• C Info : ROI Class information [text];
• : delete the last saved ROI from the Training input (page 28);
• Calculate sig. : if checked, while saving a ROI, the spectral signature thereof is calculated (from
Input image (page 28) pixels under ROI polygon) and saved to Training input (page 28) (calculation time
depends on the band number of Input image (page 28));
• : save the temporary ROI to the Training input (page 28) using the defined classes and macroclasses;
ROI is displayed in the ROI Signature list (page 29);
• Display : if the ROI creation pointer is active (see Working toolbar (page 23)), the pixel value of selected
– NDVI (Normalized Difference Vegetation Index); NDVI requires the near-infrared and red bands;
– EVI (Enhanced Vegetation Index); EVI requires the blue, near-infrared and red bands converted
to reflectance; wavelengths must be defined in the Band set (page 96);
– Custom; use the custom expression defined in the following line Expression;
• Expression : set a custom expression; expression is based on the Band set; bands are defined as
bandset#b + band number (e.g. bandset#b1 for the first band of the Band set); for example NDVI for a
Landsat image would be ( bandset#b4 - bandset#b3 ) / ( bandset#b4 + bandset#b3 );
• Rapid ROI band : if checked, temporary ROI is created with region growing using only one
Input image (page 28) band (i.e. region growing is faster); the band is defined by the Band set number; if
unchecked, ROI is the result of the intersection between ROIs calculated on every band (i.e. region growing
is slower, but ROI is spectrally homogeneous in every band);
• Automatic refresh ROI: calculate automatically a new temporary ROI while Region growing parameters
in the Working toolbar (page 23) are being changed;
• Automatic plot: calculate automatically the temporary ROI spectral signature and display it in the
Spectral Signature Plot (page 107) (MC Info of this spectral signature is set tempo_ROI);
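The Expression syntax above (bands referenced as bandset#b plus the band number) can be illustrated with a minimal evaluator. This is a toy sketch, not SCP's parser, and the band values are made up:

```python
import re

def evaluate_expression(expression, band_values):
    # Replace each bandset#b<N> reference with the corresponding value,
    # then evaluate the resulting arithmetic expression.
    safe = re.sub(r"bandset#b(\d+)",
                  lambda m: repr(band_values[int(m.group(1))]),
                  expression)
    return eval(safe, {"__builtins__": {}})  # toy evaluation only

# NDVI for a Landsat band set where band 4 is NIR and band 3 is red
ndvi = evaluate_expression(
    "( bandset#b4 - bandset#b3 ) / ( bandset#b4 + bandset#b3 )",
    {3: 0.1, 4: 0.5})
```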
Macroclasses
Fig. 3.7: Macroclasses

Macroclasses allows for the definition of Macroclass names and colors (used to display the results of Classification preview (page 24) and Classification output (page 34)). According to Classification algorithm (page 33), classifications performed using C ID have the colors defined for classes in the ROI Signature list (page 29); classifications performed using MC ID have the colors defined in the Macroclasses (page 32).
MC IDs are automatically added to this table when a new ROI is saved to the ROI Signature list (page 29) (if the MC ID is not already in the list). Settings are stored in Training input (page 28).
• Macroclasses :
– MC ID: Macroclass ID [int]; it can be edited with a single click;
– MC Info: Macroclass Information [text]; it can be edited with a single click;
– Color: MC ID color; double click to select a color for the class that is used in the classification;
• : add a new row to the table;
• : delete the highlighted rows from the table;
Classification style

In addition, a previously saved classification style (QGIS .qml file) can be loaded and applied to the classification.
• Load qml : select a .qml file overriding the colors defined for C ID or MC ID;
• : reset style to default (i.e. use the colors defined for C ID or MC ID);
Classification algorithm
Fig. 3.8: Classification algorithm

The Classification algorithm includes several functions for the classification process, used also during the Classification preview (page 24).
• Use MC ID C ID : if MC ID is checked, the classification is performed using the Macroclass
ID (code MC ID of the signature); if C ID is checked, the classification is performed using the Class ID
(code C ID of the signature);
• : open the Algorithm band weight (page 57) for the definition of band weights;
Algorithm

Classification is performed using the selected algorithm.
• : available Classification Algorithms (page 128) are:
– Minimum Distance (page 129);
– Maximum Likelihood (page 129);
– Spectral Angle Mapping (page 130);
• Threshold : it allows for the definition of a classification threshold (applied to all the spectral signatures); in particular:
– for Minimum Distance, pixels are unclassified if distance is greater than threshold value;
– for Maximum Likelihood, pixels are unclassified if probability is less than threshold value (max
100);
– for Spectral Angle Mapping, pixels are unclassified if spectral angle distance is greater than
threshold value (max 90);
• : open the Signature threshold (page 59) for the definition of signature thresholds;
Land Cover Signature Classification

Land Cover Signature Classification (page 131) is a classification that can be used as an alternative to, or in combination with, the Algorithm (page 33) (see LCS threshold (page 60)). Pixels belonging to two or more different classes (or macroclasses) are classified as Class overlap with raster value = -1000.
• Use LCS Algorithm only overlap: if LCS is checked, the Land Cover Signature Classification
is used; if Algorithm is checked, the selected Algorithm (page 33) is used for unclassified pixels of the Land
Cover Signature Classification; if only overlap is checked, the selected Algorithm (page 33) is used only for
class overlapping pixels of the Land Cover Signature Classification; unclassified pixels of the Land Cover
Signature Classification are left unclassified;
• : open the LCS threshold (page 60);
Classification output

Classification output allows for the classification of the Input image (page 28) according to the parameters defined in Classification algorithm (page 33).
The classification raster is a .tif file (a QGIS style file .qml is saved along with the classification); other outputs can optionally be calculated. Outputs are loaded in QGIS after the calculation.
• Apply mask : if checked, a shapefile can be selected for masking the classification output (i.e. the
area outside the shapefile is not classified);
• : reset the mask shapefile;
• Create vector : if checked, in addition to the classification raster, a classification shapefile is saved
in the same directory and with the same name as the Classification output; conversion to vector can also be
performed at a later time (see Classification to vector (page 83));
• Classification report : if checked, a report about the land cover classification is calculated and
saved as a .csv file in the same directory and with the same name (with the suffix _report) as the
Classification output; the report can also be generated at a later time (see Classification report (page 81));
• Save algorithm files : if checked, the Algorithm raster (page 133) is saved, in addition
to the classification raster, in the same directory as the Classification output; a raster for each spectral
signature used as input (with the suffix _sig_MC ID_C ID) and a general algorithm raster (with the
suffix _alg_raster) are created;
Fig. 3.9: Classification output
• : choose the output destination and start the image classification;
3.5 Main Interface Window
• Download images (page 37)
– Landsat download (page 37)
– Sentinel-2 download (page 41)
– ASTER download (page 46)
– MODIS download (page 48)
• Tools (page 51)
– Multiple ROI Creation (page 52)
– Import signatures (page 53)
– Export signatures (page 57)
– Algorithm band weight (page 57)
– Signature threshold (page 59)
– LCS threshold (page 60)
– RGB list (page 62)
• Preprocessing (page 64)
– Landsat (page 64)
– Sentinel-2 (page 68)
– ASTER (page 69)
– MODIS (page 70)
– Clip multiple rasters (page 72)
– Split raster bands (page 74)
– Stack raster bands (page 75)
– PCA (page 76)
– Vector to raster (page 77)
• Postprocessing (page 78)
– Accuracy (page 78)
– Land cover change (page 80)
– Classification report (page 81)
– Cross classification (page 82)
– Classification to vector (page 83)
– Reclassification (page 86)
– Edit raster (page 86)
– Classification sieve (page 88)
– Classification erosion (page 90)
– Classification dilation (page 91)
• Band calc (page 92)
– Band list (page 92)
– Expression (page 93)
– Index calculation (page 94)
– Decision rules (page 94)
– Output raster (page 95)
• Band set (page 96)
– Band list (page 97)
– Band set definition (page 97)
– Band set tools (page 98)
• Batch (page 98)
– Batch (page 98)
– Run (page 102)
• Settings (page 102)
– Interface (page 102)
– Processing (page 104)
– Debug (page 106)
The Main Interface Window is composed of several tabs and subtabs. The functions are described in detail in the following paragraphs, using these conventions:
= Input date
= Input text
= List
= Input number
= Optional
= Configuration stored in the active project of QGIS
= Configuration stored in QGIS registry
= Slider
= Table

3.5.1 Download images

The tab Download images includes the tools for searching and downloading free remote sensing images. An internet connection is required.
Landsat download

This tab allows for searching and downloading the whole archive of Landsat Satellite (page 120) images (from Landsat 1 MSS to Landsat 8 OLI), acquired from the 1980s to the present. Search is performed through the CMR Search API developed by NASA.
Landsat images are freely available through the services EarthExplorer, Google Earth Engine, and the Amazon Web Services (AWS) (for Landsat 8). This tool attempts to download images first from Amazon Web Services and Google Earth Engine; only if images are not available there is the download performed through the service EarthExplorer, in order to prevent the server from becoming saturated.
Images are downloaded as compressed archives (this tool allows for the download of single bands for Landsat 8 images provided by the Amazon Web Services). Also, automatic conversion to reflectance of downloaded bands is available.
Login

USGS EROS credentials (https://ers.cr.usgs.gov) are required for downloads from EarthExplorer. Login using your USGS EROS credentials or register for free at https://ers.cr.usgs.gov/register .
• User : enter the user name;
• Password : enter the password;
• remember: remember user name and password in QGIS;
Fig. 3.10: Landsat download
Search area

Define the search area by entering the coordinates (longitude and latitude) of an Upper Left (UL) point and Lower Right (LR) point, or interactively drawing an area in the map.
The definition of a search area is required before searching the images.
• UL X (Lon) : set the UL longitude;
• UL Y (Lat) : set the UL latitude;
• LR X (Lon) : set the LR longitude;
• LR Y (Lat) : set the LR latitude;
• Show: show or hide the search area drawn in the map;
• : define a search area by drawing a rectangle in the map; left click to set the UL point and right click
to set the LR point; the area is displayed in the map;
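Since the UL point must lie north-west of the LR point, a quick sanity check for the coordinate pair can be sketched as follows (a hypothetical helper for illustration; SCP performs its own validation):

```python
def valid_search_area(ul_lon, ul_lat, lr_lon, lr_lat):
    # The Upper Left point must be north-west of the Lower Right point,
    # and all values must be valid WGS84 longitudes/latitudes.
    in_range = (-180 <= ul_lon <= 180 and -180 <= lr_lon <= 180
                and -90 <= ul_lat <= 90 and -90 <= lr_lat <= 90)
    return in_range and ul_lon < lr_lon and ul_lat > lr_lat

valid_search_area(9.0, 46.0, 11.0, 44.0)   # True
```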
Search

Define the search settings such as the date of acquisition and maximum cloud cover, or specify Landsat satellites.
• Satellites : set the Landsat satellites;
• Date from : set the starting date of acquisition;
• to : set the ending date of acquisition;
• Max cloud cover (%) : maximum cloud cover in the image;
• Results : maximum number of images returned by the search;
• Filter : set a filter such as the Image ID of Landsat images (e.g. LC81910312015006LGN00);
it is possible to enter multiple Image IDs separated by comma or semicolon (e.g.
LC81910312015006LGN00, LC81910312013224LGN00 ); filtered images must be inside
the search area;
• Find : find the images in the search area; results are displayed inside the table in Landsat images
(page 39); results are added to previous results;
Tip: Search results (and the number thereof) depend on the defined area extent and the range of
dates. In order to get more results, perform multiple searches defining smaller area extent and
narrow acquisition dates (from and to).
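The Filter field's separator rules (comma or semicolon, optional spaces) can be sketched with a small helper. This is a hypothetical illustration, not SCP's code:

```python
import re

def parse_filter(text):
    # Split a Filter string into individual Image IDs; comma and
    # semicolon are both accepted, surrounding whitespace is ignored.
    return [part for part in (s.strip() for s in re.split(r"[;,]", text))
            if part]

parse_filter("LC81910312015006LGN00, LC81910312013224LGN00")
# ['LC81910312015006LGN00', 'LC81910312013224LGN00']
```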
Landsat images
• Image list: found images are displayed in this table, which includes the following fields;
– ImageID: the Landsat Image ID;
– AcquisitionDate: date of acquisition of Landsat image;
– CloudCover: percentage of cloud cover in the image;
– Path: WRS path of the image;
– Row: WRS row of the image;
– min_lat: minimum latitude of the image;
– min_lon: minimum longitude of the image;
– max_lat: maximum latitude of the image;
– max_lon: maximum longitude of the image;
– USGScollection: USGS collection code of the image;
– Preview: URL of the image preview;
– collection: collection code of the image;
• : display preview of highlighted images in the map; preview is roughly georeferenced on the fly;
• : remove highlighted images from the list;
• : remove all images from the list;
Download options
Fig. 3.11: Download options

Landsat 8 bands

This tab allows for the selection of single bands (only for Landsat 8 images provided by the Amazon Web Services).
• Band X: select bands for download;
• : select or deselect all bands;
Download

Download the Landsat images listed in Landsat images (page 39). During the download it is recommended not to interact with QGIS.
Download is performed according to image availability from the services EarthExplorer , Google Earth Engine , or the Amazon Web Services (AWS) . If the image is not available for download it is possible to check the availability thereof on http://earthexplorer.usgs.gov/ .
• Only if preview in Layers: if checked, download only those images listed in Landsat images (page 39)
which are also listed in the QGIS layer panel;
• Preprocess images: if checked, bands are automatically converted after the download, according to the
settings defined in Landsat (page 64);
• Load bands in QGIS: if checked, bands are loaded in QGIS after the download;
• : export the download links to a text file;
• : start the download process of all the images listed in Landsat images (page 39);
Sentinel-2 download

Sentinel-2 is a European satellite launched in 2015, developed in the framework of Copernicus land monitoring services, which acquires 13 spectral bands (see Sentinel-2 Satellite (page 122)). This tab allows for searching and downloading the free Sentinel-2 images (Level-1C) from the Sentinels Scientific Data Hub (using the Data Hub API). Images are mainly downloaded from the Amazon S3 AWS if available.
Sentinel-2 satellite has a swath width of 290km. Sentinel-2 Level-1C images are delivered in granules (also called tiles) with a side of 100km in UTM/WGS84 projection. This tool allows for the selection and download of granules and bands.
Tip: In case of errors please see Error [50] ‘Internet error’. Unable to download Sentinel-2 images.
Why? (page 215) and Error [56] ‘SSL connection error’. Unable to download Sentinel-2 images.
Why? (page 215).
Login Sentinels

In order to access Sentinel data, a free registration is required at https://scihub.copernicus.eu/userguide/1SelfRegistration (other services may require different registrations). After the registration, enter the user name and password for accessing data.
• Service : enter the service URL (default is https://scihub.copernicus.eu/apihub); other mirror services that share the same infrastructure can be used (such as https://scihub.copernicus.eu/dhus , https://finhub.nsdc.fmi.fi , https://data.sentinel.zamg.ac.at);
• : reset the default service (https://scihub.copernicus.eu/s2);
• User : enter the user name;
Fig. 3.12: Sentinel-2 download
• Password : enter the password;
• remember: remember user name and password in QGIS;
Search area

Define the search area by entering the coordinates (longitude and latitude) of an Upper Left (UL) point and Lower Right (LR) point, or interactively drawing an area in the map. The definition of a search area is required before searching the images.
• UL X (Lon) : set the UL longitude;
• UL Y (Lat) : set the UL latitude;
• LR X (Lon) : set the LR longitude;
• LR Y (Lat) : set the LR latitude;
• Show: show or hide the search area drawn in the map;
• : define a search area by drawing a rectangle in the map; left click to set the UL point and right click
to set the LR point; the area is displayed in the map;
Search

Define search settings such as the date of acquisition, or search for specific Sentinel images using the Image ID or name.
• Date from : set the starting date of acquisition;
• to : set the ending date of acquisition;
• Max cloud cover (%) : maximum cloud cover in the image;
• Results : maximum number of images returned by the search;
• Filter : set a filter such as the Image Name of Sentinel images (e.g.
S2A_OPER_PRD_MSIL1C_PDMC_20160419T190217_R022_V20160419T101026);
• Find : find the images in the search area; results are displayed inside the table in Sentinel images
(page 43); results are added to previous results;
Tip: Search results (and the number thereof) depend on the defined area extent and the range of
dates. In order to get more results, perform multiple searches defining smaller area extent and
narrow acquisition dates (from and to).
Sentinel images
• Image list: found images are displayed in this table, which includes the following fields;
– ImageName: the Sentinel Image Name;
– Granule: the single granule name;
– AcquisitionDate: date of acquisition of Sentinel image;
– Zone: tile zone according to the US-MGRS naming convention;
– CloudCover: percentage of cloud cover in the image;
– min_lat: minimum latitude of the image;
– min_lon: minimum longitude of the image;
– max_lat: maximum latitude of the image;
– max_lon: maximum longitude of the image;
– Size: the size of the image (unused);
– Preview: URL of the image overview;
– GranulePreview: URL of the granule preview; if available, the preview is downloaded from
Amazon Web Services;
– ImageID: the Sentinel Image ID;
• : display preview of highlighted granules in the map;
• : display overview of highlighted images in the map; the overview is roughly georeferenced on the fly;
overviews may not be available when using mirror services;
• : remove highlighted images from the list;
• : remove all images from the list;
Tip: download this zip file containing the shapefile of Sentinel-2 granules for identifying the
zone; load this shapefile in QGIS, select the granules in your search area and open the attribute
table to see the zone name.
Download options
This tab allows for the selection of single bands.
• Band X: select bands for download;
• Ancillary data: if checked, the metadata files (a .xml file whose name contains MTD_SAFL1C and
a .xml file whose name contains MTD_L1C) and the cloud mask file (a .gml file whose name contains
MSK_CLOUDS) are downloaded;
• : select or deselect all bands;
Download
Download the Sentinel-2 images listed in Sentinel images (page 43). Only the bands selected in Download options (page 44)
are downloaded.
During the download it is recommended not to interact with QGIS.
• Only if preview in Layers: if checked, download only those images listed in Sentinel images (page 43)
which are also listed in the QGIS layer panel;
• Preprocess images: if checked, bands are automatically converted after the download, according to the
settings defined in Sentinel-2 (page 68);
• Load bands in QGIS: if checked, bands are loaded in QGIS after the download;
Fig. 3.13: Download options
• : export the download links to a text file;
• : start the download process of all the images listed in Sentinel images (page 43);
ASTER download
Fig. 3.14: ASTER download
This tab allows for searching and downloading the whole archive of free L1T images acquired by ASTER Satellite
(page 122) since 2000. The search is performed through the CMR Search API developed by NASA. The ASTER L1T data products are retrieved from the online Data Pool, courtesy of the NASA Land Processes Distributed Active Archive Center (LP DAAC), USGS/Earth Resources Observation and Science (EROS) Center, Sioux Falls, South Dakota, https://lpdaac.usgs.gov/data_access/data_pool.
Automatic conversion of downloaded bands to reflectance is also available.
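The search behind this tab uses NASA's CMR Search API. As an illustration, a granule query can be assembled as below (the parameter names follow the public CMR Search API; the `AST_L1T` short name and the concrete values are assumptions for this sketch):

```python
# Illustrative sketch of a CMR granule search like the one this tab performs.
from urllib.parse import urlencode

def cmr_query(short_name, bbox, date_from, date_to, page_size=20):
    """Build a CMR Search API URL for granules in a bounding box and date range."""
    params = {
        "short_name": short_name,
        # bounding_box is min_lon,min_lat,max_lon,max_lat
        "bounding_box": ",".join(str(v) for v in bbox),
        "temporal": f"{date_from}T00:00:00Z,{date_to}T23:59:59Z",
        "page_size": page_size,
    }
    return "https://cmr.earthdata.nasa.gov/search/granules.json?" + urlencode(params)

url = cmr_query("AST_L1T", (9.0, 45.0, 9.5, 45.5), "2016-01-01", "2016-12-31")
print(url)
```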
Login
EOSDIS Earthdata credentials (https://urs.earthdata.nasa.gov) are required for download. Login using your EOSDIS Earthdata credentials or register for free at https://urs.earthdata.nasa.gov/users/new .
Warning: Before downloading ASTER images, you must approve the LP DAAC Data Pool by clicking the
following link: https://urs.earthdata.nasa.gov/approve_app?client_id=ijpRZvb9qeKCK5ctsn75Tg
• User : enter the user name;
• Password : enter the password;
• remember: remember user name and password in QGIS;
Search area
Define the search area by entering the coordinates (longitude and latitude) of an Upper Left (UL) point and a Lower Right (LR) point, or by interactively drawing an area in the map.
A search area must be defined before searching for images.
• UL X (Lon) : set the UL longitude;
• UL Y (Lat) : set the UL latitude;
• LR X (Lon) : set the LR longitude;
• LR Y (Lat) : set the LR latitude;
• Show: show or hide the search area drawn in the map;
• : define a search area by drawing a rectangle in the map; left click to set the UL point and right click
to set the LR point; the area is displayed in the map;
Search
Define the search settings, such as the date of acquisition or maximum cloud cover, or specify the ASTER satellites.
• Satellites : set the ASTER satellites (unused);
• Date from : set the starting date of acquisition;
• to : set the ending date of acquisition;
• Max cloud cover (%) : maximum cloud cover in the image;
• Results : maximum number of images returned by the search;
• Filter : set a filter such as the Image ID of ASTER images; it is possible to enter multiple Image IDs
separated by comma or semicolon; filtered images must be inside the search area;
• Find : find the images in the search area; results are displayed inside the table in ASTER images
(page 47); results are added to previous results;
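The Filter field accepts multiple Image IDs separated by comma or semicolon. Splitting such a filter string can be sketched as follows (a hypothetical helper, not SCP code; the IDs shown are placeholders):

```python
# Sketch of splitting a Filter string on commas or semicolons,
# dropping empty entries and surrounding whitespace.
import re

def parse_filter(text):
    """Split a filter string into a list of Image IDs."""
    return [t.strip() for t in re.split(r"[,;]", text) if t.strip()]

ids = parse_filter("IMAGE_ID_A; IMAGE_ID_B,IMAGE_ID_C")
print(ids)  # ['IMAGE_ID_A', 'IMAGE_ID_B', 'IMAGE_ID_C']
```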
Tip: Search results (and the number thereof) depend on the defined area extent and the range of
dates. To get more results, perform multiple searches with a smaller area extent and a narrower
range of acquisition dates (from and to).
ASTER images
• Image list: found images are displayed in this table, which includes the following fields;
– ImageID: the ASTER Image ID;
– AcquisitionDate: date of acquisition of ASTER image;
– CloudCover: percentage of cloud cover in the image;
– ImageDisplayID: the ASTER Image ID;
– DayNightFlag: flag for acquisition during day or night;
– min_lat: minimum latitude of the image;
– min_lon: minimum longitude of the image;
– max_lat: maximum latitude of the image;
– max_lon: maximum longitude of the image;
– Service: download service of the image;
– Preview: URL of the image preview;
– collection: collection code of the image;
• : display preview of highlighted images in the map; preview is roughly georeferenced on the fly;
• : remove highlighted images from the list;
• : remove all images from the list;
Download
Download the ASTER images listed in ASTER images (page 47). During the download, it is recommended not to interact with QGIS.
• Only if preview in Layers: if checked, download only those images listed in ASTER images (page 47)
which are also listed in the QGIS layer panel;
• Preprocess images: if checked, bands are automatically converted after the download, according to the
settings defined in ASTER (page 69);
• Load bands in QGIS: if checked, bands are loaded in QGIS after the download;
• : export the download links to a text file;
• : start the download process of all the images listed in ASTER images (page 47);
MODIS download
This tab allows for searching and downloading the archive of free MODIS Products (page 123) acquired since 2000 (in particular MOD09GQ, MYD09GQ, MOD09GA, MYD09GA, MOD09Q1, MYD09Q1, MOD09A1, MYD09A1). The search is performed through the CMR Search API developed by NASA. MODIS products are retrieved from the online Data Pool, courtesy of the NASA Land Processes Distributed Active Archive Center (LP DAAC), USGS/Earth Resources Observation and Science (EROS) Center, Sioux Falls, South Dakota, https://lpdaac.usgs.gov/data_access/data_pool.
Automatic reprojection of downloaded bands is also available.
Login
EOSDIS Earthdata credentials (https://urs.earthdata.nasa.gov) are required for download. Login using your EOSDIS Earthdata credentials or register for free at https://urs.earthdata.nasa.gov/users/new .
Fig. 3.15: MODIS download
Warning: Before downloading MODIS images, you must approve the LP DAAC Data Pool by clicking the
following link: https://urs.earthdata.nasa.gov/approve_app?client_id=ijpRZvb9qeKCK5ctsn75Tg
• User : enter the user name;
• Password : enter the password;
• remember: remember user name and password in QGIS;
Search area
Define the search area by entering the coordinates (longitude and latitude) of an Upper Left (UL) point and a Lower Right (LR) point, or by interactively drawing an area in the map.
A search area must be defined before searching for images.
• UL X (Lon) : set the UL longitude;
• UL Y (Lat) : set the UL latitude;
• LR X (Lon) : set the LR longitude;
• LR Y (Lat) : set the LR latitude;
• Show: show or hide the search area drawn in the map;
• : define a search area by drawing a rectangle in the map; left click to set the UL point and right click
to set the LR point; the area is displayed in the map;
Search
Define the search settings, such as the date of acquisition or maximum cloud cover, or specify the MODIS product.
• Products : set the MODIS products;
• Date from : set the starting date of acquisition;
• to : set the ending date of acquisition;
• Max cloud cover (%) : maximum cloud cover in the image (unused);
• Results : maximum number of images returned by the search;
• Filter : set a filter such as the Image ID of MODIS images; it is possible to enter multiple Image IDs
separated by comma or semicolon; filtered images must be inside the search area;
• Find : find the images in the search area; results are displayed inside the table in MODIS images
(page 51); results are added to previous results;
Tip: Search results (and the number thereof) depend on the defined area extent and the range of
dates. To get more results, perform multiple searches with a smaller area extent and a narrower
range of acquisition dates (from and to).
MODIS images
• Image list: found images are displayed in this table, which includes the following fields;
– ImageID: the MODIS Image ID;
– AcquisitionDate: date of acquisition of MODIS image;
– CloudCover: percentage of cloud cover in the image;
– ImageDisplayID: the MODIS Image ID;
– DayNightFlag: flag for acquisition during day or night;
– min_lat: minimum latitude of the image;
– min_lon: minimum longitude of the image;
– max_lat: maximum latitude of the image;
– max_lon: maximum longitude of the image;
– Service: download service of the image;
– Preview: URL of the image preview;
– collection: collection code of the image;
• : display preview of highlighted images in the map; preview is roughly georeferenced on the fly;
• : remove highlighted images from the list;
• : remove all images from the list;
Download
Download the MODIS images listed in MODIS images (page 51). During the download, it is recommended not to interact with QGIS.
• Only if preview in Layers: if checked, download only those images listed in MODIS images (page 51)
which are also listed in the QGIS layer panel;
• Preprocess images: if checked, bands are automatically converted after the download, according to the
settings defined in MODIS (page 70);
• Load bands in QGIS: if checked, bands are loaded in QGIS after the download;
• : export the download links to a text file;
• : start the download process of all the images listed in MODIS images (page 51);
3.5.2 Tools
The tab Tools includes several tools for manipulating ROIs and spectral signatures.
Multiple ROI Creation
Fig. 3.16: Multiple ROI Creation
This tab allows for the automatic creation of ROIs, which is useful for the rapid classification of multi-temporal images or for accuracy assessment. Given a list of point coordinates and ROI options, this tool performs the region growing of ROIs. Created ROIs are automatically saved to the Training input (page 28).
Create random points
• Number of points : set a number of points that will be created when Create points is clicked;
• inside grid : if checked, the input image area is divided into cells whose size is defined in
the combobox (image unit, usually meters); the points defined in Number of points are created
randomly within each cell;
• min distance : if checked, random points have a minimum distance defined in the combobox
(image unit, usually meters); setting a minimum distance can result in fewer points than the number defined
in Number of points;
• Create points : create random points inside the input image area;
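One way to realize random point creation with a minimum distance is rejection sampling, sketched below (an illustrative strategy, not necessarily the one SCP uses; note how the distance constraint can yield fewer points than requested, as the docs warn):

```python
# Sketch of random point creation with an optional minimum distance,
# using simple rejection sampling inside a bounding box.
import math
import random

def random_points(n, bbox, min_dist=0.0, max_tries=1000):
    """Create up to n random (x, y) points inside bbox, at least min_dist
    apart; fewer than n points may be returned when min_dist is large."""
    min_x, min_y, max_x, max_y = bbox
    points = []
    for _ in range(max_tries):
        if len(points) == n:
            break
        p = (random.uniform(min_x, max_x), random.uniform(min_y, max_y))
        # Reject candidates that fall too close to an existing point.
        if all(math.dist(p, q) >= min_dist for q in points):
            points.append(p)
    return points

pts = random_points(10, (0, 0, 100, 100), min_dist=5.0)
print(len(pts))
```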
Point coordinates and ROI definition
• Point coordinates and ROI definition: table containing the following fields;
– X : point X coordinate (float);
– Y : point Y coordinate (float);
– MC ID: ROI Macroclass ID (int);
– MC Info: ROI Macroclass information (text);
– C ID: ROI Class ID (int);
– C Info: ROI Class information (text);
– Min : the minimum area of a ROI (in pixel unit);
– Max : the maximum width of a ROI (in pixel unit);
– Dist : the interval which defines the maximum spectral distance between the seed pixel and the
surrounding pixels (in radiometry unit);
– Rapid ROI band : if a band number is defined, ROI is created only using the selected band,
similarly to Rapid ROI band in ROI creation (page 30) ;
• : add a new row to the table; all the table fields must be filled for the ROI creation;
• : delete the highlighted rows from the table;
• : import a point list from a text file to the table; every line of the text file must contain tab-separated
values for X, Y, MC ID, MC Info, C ID, C Info, Min, Max, Dist, and optionally the Rapid
ROI band;
• : export the point list to text file;
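Parsing one line of the tab-separated point list described above can be sketched like this (a hypothetical helper, not SCP code; the sample values are placeholders):

```python
# Sketch of reading one tab-separated line of the point list:
# X, Y, MC ID, MC Info, C ID, C Info, Min, Max, Dist, optional Rapid ROI band.
def parse_point_line(line):
    """Parse one tab-separated line into a point-definition dict."""
    parts = line.rstrip("\n").split("\t")
    keys = ["X", "Y", "MC ID", "MC Info", "C ID", "C Info", "Min", "Max", "Dist"]
    record = dict(zip(keys, parts))
    if len(parts) > 9:                 # optional tenth field
        record["Rapid ROI band"] = parts[9]
    return record

row = parse_point_line("230250.0\t4674510.0\t1\tWater\t1\tLake\t60\t100\t0.1")
print(row["MC Info"], row["Dist"])  # Water 0.1
```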
Run
• Calculate sig.: if checked, the spectral signature is calculated while the ROI is saved to Training input
(page 28);
• : start the ROI creation process for all the points and save ROIs to the Training input (page 28);
Import signatures
The tab Import signatures allows for importing spectral signatures from various sources.
Import library file
This tool allows for importing spectral signatures from various sources: a previously saved Training input
(page 28) (.scp file), a USGS Spectral Library (.asc file), or a previously exported CSV file. In the case of a USGS Spectral Library, the library is automatically sampled according to the image band wavelengths defined in the Band set (page 96) and added to the ROI Signature list (page 29).
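Sampling a continuous spectral library at the band-set wavelengths can be sketched with simple linear interpolation (an assumption about the resampling method, for illustration only):

```python
# Sketch of sampling a spectral library at band-set wavelengths,
# linearly interpolating between library samples (illustrative).
def sample_library(lib_wl, lib_refl, band_wl):
    """Interpolate library reflectance at each band wavelength."""
    out = []
    for w in band_wl:
        # Find the library interval [w0, w1] containing w.
        for (w0, r0), (w1, r1) in zip(zip(lib_wl, lib_refl),
                                      zip(lib_wl[1:], lib_refl[1:])):
            if w0 <= w <= w1:
                out.append(r0 + (r1 - r0) * (w - w0) / (w1 - w0))
                break
    return out

refl = sample_library([0.4, 0.5, 0.6], [0.1, 0.3, 0.2], [0.45, 0.55])
print([round(r, 3) for r in refl])  # [0.2, 0.25]
```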
Fig. 3.17: Import library file
• Select a file : open a file to be imported in the Training input (page 28);
Import shapefile
Fig. 3.18: Import shapefile
This tool allows for importing a shapefile, selecting the fields corresponding to the Training input (page 28).
• Select a shapefile : open a shapefile;
• MC ID field : select the shapefile field corresponding to MC ID;
• MC Info field : select the shapefile field corresponding to MC Info;
• C ID field : select the shapefile field corresponding to C ID;
• C Info field : select the shapefile field corresponding to C Info;
• Calculate sig.: if checked, the spectral signature is calculated while the ROI is saved to Training input
(page 28);
• Import shapefile : import all the shapefile polygons as ROIs in the Training input (page 28);
Download USGS Spectral Library
Fig. 3.19: Download USGS Spectral Library
The tab Download USGS Spectral Library allows for the download of the USGS spectral library (Clark, R.N.,
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2007, USGS digital spectral library splib06a:
U.S. Geological Survey, Digital Data Series 231).
The libraries are grouped in chapters including Minerals, Mixtures, Coatings, Volatiles, Man-Made, Plants, Vegetation Communities, Mixtures with Vegetation, and Microorganisms. An internet connection is required.
• Select a chapter : select one of the library chapters; after the selection, chapter libraries are shown in
Select a library;
• Select a library : select one of the libraries; the library description is displayed in the frame Library
description;
• Import spectral library : download the library and add the sampled spectral signature to the ROI
Signature list (page 29) using the parameters defined in ROI creation (page 30); the library is automatically
sampled according to the image band wavelengths defined in the Band set (page 96), and added to the ROI
Signature list (page 29);
Tip: Spectral libraries downloaded from the USGS Spectral Library can be used with the
Minimum Distance or Spectral Angle Mapping algorithms, but not Maximum Likelihood, because
that algorithm needs the covariance matrix, which is not included in the spectral libraries.
Export signatures
Fig. 3.20: Export signatures
This tool allows for exporting the signatures highlighted in the ROI Signature list (page 29).
• Export as SCP file : create a new .scp file and export highlighted ROIs and spectral signatures as an SCP
file (*.scp);
• Export as shapefile : export highlighted ROIs (spectral signature data excluded) as a new shapefile
(*.shp);
• Export as CSV file : open a directory and export highlighted spectral signatures as individual CSV files
(*.csv) separated by semicolons ( ; );
Algorithm band weight
This tab allows for the definition of band weights, which are useful for improving the spectral separability of materials at certain wavelengths (bands). During the classification process, band values and spectral signature values are multiplied by the corresponding band weights, thus modifying the spectral distances.
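The effect of band weights on a minimum-distance computation can be sketched as below (an illustrative helper, not SCP code): each band value and signature value is multiplied by its weight before the Euclidean distance is taken, so a higher weight makes differences in that band count more.

```python
# Sketch of a per-band weighted Euclidean distance.
import math

def weighted_distance(pixel, signature, weights):
    """Euclidean distance after multiplying each band by its weight."""
    return math.sqrt(sum(
        (w * p - w * s) ** 2
        for p, s, w in zip(pixel, signature, weights)))

d0 = weighted_distance([0.2, 0.4], [0.3, 0.1], [1.0, 1.0])
d1 = weighted_distance([0.2, 0.4], [0.3, 0.1], [1.0, 2.0])  # band 2 counts double
print(d0 < d1)  # True
```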
Band weight
• Band weight: table containing the following fields;
Fig. 3.21: Algorithm band weight
– Band number : number of the band in the Band set;
– Band name : name of the band;
– Weight : weight of the band; this value can be edited;
Automatic weight
• : reset all band weights to 1;
• Set weight : set the defined value as weight for all the highlighted bands in the table;
Signature threshold
Fig. 3.22: Signature threshold
This tab allows for the definition of a classification threshold for each spectral signature. All the signatures contained in the Training input (page 28) are listed. This is useful for improving the classification results, especially when spectral signatures are similar. Signature thresholds are saved in the Training input (page 28).
If the threshold is 0, no threshold is applied. Depending on the selected Classification algorithm (page 33), the threshold value is evaluated differently:
• for Minimum Distance, pixels are unclassified if distance is greater than threshold value;
• for Maximum Likelihood, pixels are unclassified if probability is less than threshold value (max 100);
• for Spectral Angle Mapping, pixels are unclassified if spectral angle distance is greater than threshold value
(max 90).
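The three threshold rules above can be sketched as a single decision function (illustrative only, not SCP code):

```python
# Sketch of the per-algorithm threshold rules: whether a pixel is left
# unclassified depends on the algorithm and its measure.
def unclassified(algorithm, measure, threshold):
    """Return True if the pixel should be left unclassified."""
    if threshold == 0:                     # no threshold applied
        return False
    if algorithm in ("Minimum Distance", "Spectral Angle Mapping"):
        return measure > threshold         # distance or angle too large
    if algorithm == "Maximum Likelihood":
        return measure < threshold         # probability too low
    raise ValueError(f"unknown algorithm: {algorithm}")

print(unclassified("Minimum Distance", 0.5, 0.3))   # True
print(unclassified("Maximum Likelihood", 80, 60))   # False
```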
Signature threshold
• Signature threshold: table containing the following fields;
– MC ID: signature Macroclass ID;
– MC Info: signature Macroclass Information;
– C ID: signature Class ID;
– C Info: signature Class Information;
– MD Threshold: Minimum Distance threshold; this value can be edited;
– ML Threshold: Maximum Likelihood threshold; this value can be edited;
– SAM Threshold: Spectral Angle Mapping threshold; this value can be edited;
• : reset all signatures thresholds to 0 (i.e. no threshold used);
Automatic thresholds
• Set threshold : set the defined value as threshold for all the highlighted signatures in the table;
• Set threshold = 𝜎 * : for all the highlighted signatures, set an automatic threshold calculated as
the distance (or angle) between mean signature and (mean signature + (𝜎 * v)), where 𝜎 is the standard
deviation and v is the defined value; currently works for Minimum Distance and Spectral Angle Mapping;
LCS threshold
This tab allows for setting the signature thresholds used by Land Cover Signature Classification (page 131). All the signatures contained in the Training input (page 28) are listed; signature thresholds are also saved in the Training input (page 28).
Overlapping signatures (belonging to different classes or macroclasses) are highlighted in orange in the table LC Signature threshold; the overlapping check considers MC ID or C ID according to the setting Use MC ID or C ID in Classification algorithm (page 33). Overlapping signatures sharing the same ID are not highlighted.
LC Signature threshold
• LC Signature threshold: table containing the following fields;
– MC ID: signature Macroclass ID;
– MC Info: signature Macroclass Information;
– C ID: signature Class ID;
– C Info: signature Class Information;
Fig. 3.23: LCS threshold
– Color [overlap MC_ID-C_ID]: signature color; also, the combination MC ID-C ID is displayed
in case of overlap with other signatures (see Land Cover Signature Classification (page 131));
– Min B X: minimum value of band X; this value can be edited;
– Max B X: maximum value of band X; this value can be edited;
• : show the ROI spectral signature in the Spectral Signature Plot (page 107); spectral signature is
calculated from the Input image (page 28);
Automatic thresholds
Set thresholds automatically for the highlighted signatures in the table LC Signature threshold; if no signature is highlighted, the threshold is applied to all signatures.
• Min Max : set the threshold based on the minimum and maximum of each band;
• 𝜎* : set an automatic threshold calculated as (band value + (𝜎 * v)), where 𝜎 is the standard
deviation of each band and v is the defined value;
• From ROI : set the threshold using the temporary ROI pixel values, according to the following checkboxes:
– +: if checked, signature threshold is extended to include pixel signature;
– –: if checked, signature threshold is reduced to exclude pixel signature;
• From pixel : set the threshold by clicking on a pixel, according to the following checkboxes:
– +: if checked, signature threshold is extended to include pixel signature;
– –: if checked, signature threshold is reduced to exclude pixel signature;
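The core of a Land Cover Signature check can be sketched as a per-band range test (a simplified illustration, not SCP code): a pixel matches a signature only if every band value falls within that signature's [Min B X, Max B X] interval.

```python
# Sketch of a Land Cover Signature range check: every band value must
# lie inside the signature's [min, max] interval for that band.
def matches(pixel, mins, maxs):
    """True if each band value lies inside its signature interval."""
    return all(lo <= v <= hi for v, lo, hi in zip(pixel, mins, maxs))

print(matches([0.1, 0.3], [0.05, 0.2], [0.15, 0.4]))  # True
print(matches([0.1, 0.5], [0.05, 0.2], [0.15, 0.4]))  # False (band 2 too high)
```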
RGB list
This tab allows for managing the RGB Color Composite (page 123) used in the list RGB= of the Image control
(page 23).
RGB list
• RGB list: table containing the following fields;
– RGB: RGB combination; this field can be manually edited;
• : move highlighted RGB combination upward;
• : move highlighted RGB combination downward;
• : automatically sort RGB combinations by name;
• : add a row to the table;
Fig. 3.24: RGB list
• : remove highlighted rows from the table;
• : clear all RGB combinations from RGB list;
• : export the RGB list to a file (e.g. .csv);
• : import a previously saved RGB list from a file (e.g. .csv);
Automatic RGB
• Band combinations : add the combinations (i.e. permutations) of all bands to the RGB list (page 62)
(e.g. 1-2-3, 1-2-4, . . . , 3-2-1);
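The "Band combinations" behavior can be sketched with `itertools.permutations`, producing every ordered triple of bands in the RGB list format (illustrative, not SCP code):

```python
# Sketch of generating all ordered 3-band combinations for the RGB list.
from itertools import permutations

def rgb_combinations(n_bands):
    """All ordered 3-band combinations, formatted as in the RGB list."""
    return ["-".join(map(str, p)) for p in permutations(range(1, n_bands + 1), 3)]

combos = rgb_combinations(4)
print(combos[:3], len(combos))  # ['1-2-3', '1-2-4', '1-3-2'] 24
```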
3.5.3 Preprocessing
The tab Preprocessing provides several tools for data manipulation that are useful before the actual classification process.
Landsat
This tab allows for the conversion of Landsat 1, 2, and 3 MSS and Landsat 4, 5, 7, and 8 images from DN
(i.e. Digital Numbers) to the physical measure of Top Of Atmosphere reflectance (TOA), or the application of a simple atmospheric correction using the DOS1 method (Dark Object Subtraction 1), which is an image-based technique (for more information about the Landsat conversion to TOA and DOS1 correction, see Image conversion to reflectance (page 136)). Pan-sharpening is also available; for more information read Pan-sharpening (page 125).
Once the input is selected, available bands are listed in the metadata table.
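For Landsat 8, the standard USGS DN-to-TOA-reflectance formula uses the REFLECTANCE_MULT and REFLECTANCE_ADD rescaling factors from the metadata together with the sun elevation; a minimal sketch (illustrative values, not SCP code):

```python
# Sketch of the Landsat 8 DN-to-TOA-reflectance conversion:
# reflectance = (REFLECTANCE_MULT * DN + REFLECTANCE_ADD) / sin(sun elevation)
import math

def toa_reflectance(dn, mult, add, sun_elevation_deg):
    """Convert a digital number to Top Of Atmosphere reflectance."""
    return (mult * dn + add) / math.sin(math.radians(sun_elevation_deg))

r = toa_reflectance(10000, 2.0e-05, -0.1, 60.0)
print(round(r, 4))  # 0.1155
```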
Landsat conversion to TOA reflectance and brightness temperature
• Directory containing Landsat bands : open a directory containing Landsat bands; the names of Landsat
bands must end with the corresponding number; if the metadata file is included in this directory then the
Metadata (page 66) are automatically filled;
• Select MTL file : if the metadata file is not included in the Directory containing Landsat bands,
select the path of the metadata file in order to fill the Metadata (page 66) automatically;
• Brightness temperature in Celsius: if checked, convert brightness temperature to Celsius (if a Landsat
thermal band is listed in Metadata (page 66)); if unchecked temperature is in Kelvin;
• Apply DOS1 atmospheric correction: if checked, the DOS1 Correction (page 137) is applied to all the
bands (thermal bands excluded);
• Use NoData value (image has black border) : if checked, pixels having NoData value are not
counted during conversion and the DOS1 calculation of DNmin; it is useful when image has a black border
(usually pixel value = 0);
Fig. 3.25: Landsat
• Perform pan-sharpening: if checked, a Brovey Transform is applied for the Pan-sharpening (page 125)
of Landsat bands;
• Create Band set and use Band set tools: if checked, the Band set is created after the conversion; also,
the Band set is processed according to the tools checked in the Band set (page 96);
Metadata
All the bands found in the Directory containing Landsat bands are listed in the table Metadata. If the Landsat metadata file (a .txt or .met file with the suffix MTL) is provided, then the Metadata are automatically filled. For information about Metadata fields read this page and this one .
• Satellite : satellite name (e.g. Landsat8);
• Date : date acquired (e.g. 2013-04-15);
• Sun elevation : Sun elevation in degrees;
• Earth sun distance : Earth Sun distance in astronomical units (automatically calculated if Date is filled);
• : remove highlighted bands from the table Metadata;
• Metadata: table containing the following fields;
– Band: band name;
– RADIANCE_MULT: multiplicative rescaling factor;
– RADIANCE_ADD: additive rescaling factor;
– REFLECTANCE_MULT: multiplicative rescaling factor;
– REFLECTANCE_ADD: additive rescaling factor;
– RADIANCE_MAXIMUM: radiance maximum;
– REFLECTANCE_MAXIMUM: reflectance maximum;
– K1_CONSTANT: thermal conversion constant;
– K2_CONSTANT: thermal conversion constant;
– LMAX: spectral radiance that is scaled to QCALMAX;
– LMIN: spectral radiance that is scaled to QCALMIN;
– QCALMAX: maximum quantized calibrated pixel value;
– QCALMIN: minimum quantized calibrated pixel value;
Run
• : select an output directory and start the conversion process; only bands listed in the table Metadata
are converted; converted bands are saved in the output directory with the prefix RT_ and automatically
loaded in QGIS;
Fig. 3.26: Sentinel-2
Sentinel-2
This tab allows for the conversion of Sentinel-2 images to the physical measure of Top Of Atmosphere reflectance
(TOA), or the application of a simple atmospheric correction using the DOS1 method (Dark Object Subtraction 1), which is an image-based technique (for more information about conversion to TOA and DOS1 correction, see Image conversion to reflectance (page 136)).
Once the input is selected, available bands are listed in the metadata table.
Sentinel-2 conversion
• Directory containing Sentinel-2 bands : open a directory containing Sentinel-2 bands; the names of
Sentinel-2 bands must end with the corresponding number; if the metadata file is included in this directory
then the Metadata (page 68) are automatically filled;
• Select metadata file : select the metadata file (a .xml file whose name contains MTD_MSIL1C);
• Apply DOS1 atmospheric correction: if checked, the DOS1 Correction (page 137) is applied to all the
bands;
• Use NoData value (image has black border) : if checked, pixels having NoData value are not
counted during conversion and the DOS1 calculation of DNmin; it is useful when image has a black border
(usually pixel value = 0);
• Create Band set and use Band set tools: if checked, the Band set is created after the conversion; also,
the Band set is processed according to the tools checked in the Band set (page 96);
Metadata
All the bands found in the Directory containing Sentinel-2 bands are listed in the table Metadata. If the Sentinel-2 metadata file (a .xml file whose name contains MTD_MSIL1C) is provided, then the Metadata are automatically filled.
For information about Metadata fields read this informative page .
• Satellite : satellite name (e.g. Sentinel-2A);
• : remove highlighted bands from the table Metadata;
• Metadata: table containing the following fields;
– Band: band name;
– Quantification value: value for conversion to TOA reflectance;
– Solar irradiance: solar irradiance of band;
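For Sentinel-2 Level-1C, conversion to TOA reflectance is simply the digital number divided by the quantification value from the metadata (typically 10000); a minimal sketch:

```python
# Sketch of the Sentinel-2 L1C DN-to-TOA-reflectance conversion:
# reflectance = DN / quantification value (from the metadata).
def s2_reflectance(dn, quantification_value=10000):
    """Convert a Sentinel-2 L1C digital number to TOA reflectance."""
    return dn / quantification_value

print(s2_reflectance(1500))  # 0.15
```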
Run
• : select an output directory and start the conversion process; only bands listed in the table Metadata
are converted; converted bands are saved in the output directory with the prefix RT_ and automatically
loaded in QGIS;
ASTER
Fig. 3.27: ASTER
This tab allows for the conversion of ASTER L1T images to the physical measure of Top Of Atmosphere reflectance (TOA), or the application of a simple atmospheric correction using the DOS1 method (Dark Object Subtraction 1), which is an image-based technique (for more information about conversion to TOA and DOS1 correction, see Image conversion to reflectance (page 136)).
Once the input is selected, available bands are listed in the metadata table.
ASTER conversion
• Select file ASTER L1T : select an ASTER image (file .hdf);
• Apply DOS1 atmospheric correction: if checked, the DOS1 Correction (page 137) is applied to all the
bands;
• Use NoData value (image has black border) : if checked, pixels having NoData value are not
counted during conversion and the DOS1 calculation of DNmin; it is useful when image has a black border
(usually pixel value = 0);
• Create Band set and use Band set tools: if checked, the Band set is created after the conversion; also,
the Band set is processed according to the tools checked in the Band set (page 96);
Metadata
All the bands found in the Select file ASTER L1T are listed in the table Metadata. For information about Metadata fields visit the ASTER page .
• Date : date acquired (e.g. 20130415);
• Sun elevation : Sun elevation in degrees;
• Earth sun distance : Earth Sun distance in astronomical units (automatically calculated if Date is filled);
• UTM zone : UTM zone code of the image;
• UPPERLEFTM : coordinates of the upper left corner of the image;
• : remove highlighted bands from the table Metadata;
• Metadata: table containing the following fields;
– Band: band name;
– UnitConversionCoeff : value for radiance conversion;
– PixelSize: pixel size of the band;
Run
• : select an output directory and start the conversion process; only bands listed in the table Metadata
are converted; converted bands are saved in the output directory with the prefix RT_ and automatically
loaded in QGIS;
MODIS
This tab allows for the conversion of MODIS images to .tif format, and the reprojection to WGS 84.
Once the input is selected, available bands are listed in the metadata table.
MODIS conversion
• Select file MODIS : select a MODIS image (file .hdf);
• Reproject to WGS 84: if checked, reproject bands to WGS 84, required for use in SCP;
• Use NoData value (image has black border) : if checked, pixels having NoData value are not
counted during conversion and the DOS1 calculation of DNmin; it is useful when image has a black border
(usually pixel value = 0);
• Create Band set and use Band set tools: if checked, the Band set is created after the conversion; also,
the Band set is processed according to the tools checked in the Band set (page 96);
Fig. 3.28: MODIS
Metadata
All the bands found in the Select file MODIS are listed in the table Metadata. For information about Metadata fields visit the MODIS page .
• ID : ID of the image;
• : remove highlighted bands from the table Metadata;
• Metadata: table containing the following fields;
– Band: band name;
– UnitConversionCoeff : value for conversion;
Run
• : select an output directory and start the conversion process; only bands listed in the table Metadata
are converted; converted bands are saved in the output directory with the prefix RT_ and automatically
loaded in QGIS;
Clip multiple rasters
This tab allows for cutting several image bands at once, using a rectangle defined with point coordinates or a boundary defined with a shapefile.
Raster list
• : refresh layer list;
• : select all the rasters;
Clip coordinates
Set the Upper Left (UL) and Lower Right (LR) point coordinates of the rectangle used for clipping; it is possible to enter the coordinates manually. Alternatively use a shapefile.
• UL X : set the UL X coordinate;
• UL Y : set the UL Y coordinate;
• LR X : set the LR X coordinate;
• LR Y : set the LR Y coordinate;
• Show: show or hide the clip area drawn in the map;
• : define a clip area by drawing a rectangle in the map; left click to set the UL point and right click to
set the LR point; the area is displayed in the map;
• Use shapefile for clipping : if checked, use the selected shapefile (already loaded in QGIS) for
clipping; UL and LR coordinates are ignored;
Fig. 3.29: Clip multiple rasters
• Use temporary ROI for clipping: if checked, use a temporary ROI (see ROI creation (page 30)) for
clipping; UL and LR coordinates are ignored;
• : refresh layer list;
• NoData value : if checked, set the value for NoData pixels (e.g. pixels outside the clipped area);
• Output name prefix : set the prefix for output file names (default is clip);
Run
• : choose an output destination and clip selected rasters; only rasters selected in the Raster list (page 72)
are clipped and automatically loaded in QGIS;
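The clip rectangle defined by the UL and LR coordinates corresponds to a pixel window in each raster. A hypothetical sketch of that conversion, assuming a GDAL-style geotransform (origin X, pixel width, 0, origin Y, 0, negative pixel height); the helper name is illustrative:

```python
def clip_window(geotransform, ul_x, ul_y, lr_x, lr_y):
    """Convert UL/LR map coordinates to a (col_off, row_off, cols, rows) pixel window."""
    origin_x, pixel_w, _, origin_y, _, pixel_h = geotransform  # pixel_h is negative
    col_off = int((ul_x - origin_x) / pixel_w)
    row_off = int((ul_y - origin_y) / pixel_h)
    cols = int(round((lr_x - ul_x) / pixel_w))
    rows = int(round((ul_y - lr_y) / -pixel_h))
    return col_off, row_off, cols, rows

# 30 m pixels, raster origin at (300000, 4600000)
window = clip_window((300000, 30, 0, 4600000, 0, -30), 300300, 4599700, 300600, 4599400)
```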
Split raster bands
Fig. 3.30: Split raster bands

Split a multiband raster to single bands.
Raster input
• Select a multiband raster : select a multiband raster already loaded in QGIS;
• : refresh layer list;
• Output name prefix : set the prefix for output file names (default is split);
Run
• : choose the output destination and split selected raster; output bands are automatically loaded in
QGIS;
Stack raster bands
Fig. 3.31: Stack raster bands

Stack raster bands into a single file.
Raster list
• : refresh layer list;
• : select all the rasters;
Run
• : choose the output destination and stack selected rasters; output is automatically loaded in QGIS;
PCA
This tab allows for the PCA (Principal Component Analysis (page 124)) of bands loaded in the Band set.
Principal Component Analysis of Band set
• Number of components : if checked, set the number of calculated components; if unchecked, all
the components are calculated;
• Use NoData value : if checked, set the value of NoData pixels, ignored during the calculation;
Run
• : select an output directory and start the calculation process; principal components are calculated and
saved as raster files; also, the details about the PCA are displayed in the tab Output and saved in a .txt file
in the output directory;
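The principal components of a set of bands can be sketched with NumPy: stack the bands as columns, compute the covariance matrix, and project onto eigenvectors ordered by decreasing eigenvalue. An illustration under those assumptions, not SCP's actual implementation:

```python
import numpy as np

def pca_bands(bands, n_components=None):
    """bands: list of 2D arrays of equal shape. Returns principal-component rasters."""
    shape = bands[0].shape
    data = np.stack([b.ravel() for b in bands], axis=1).astype(float)  # pixels x bands
    data -= data.mean(axis=0)                     # center each band
    cov = np.cov(data, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)          # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]              # sort by decreasing variance
    eigvec = eigvec[:, order][:, :n_components]
    projected = data @ eigvec
    return [projected[:, i].reshape(shape) for i in range(eigvec.shape[1])]

rng = np.random.default_rng(0)
b1 = rng.random((10, 10))
b2 = 2 * b1 + 0.01 * rng.random((10, 10))        # strongly correlated band
pcs = pca_bands([b1, b2], n_components=1)
```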
Vector to raster
Fig. 3.33: Vector to raster

This tab allows for the conversion of a vector to raster format.
• Select the vector : select a vector already loaded in QGIS;
• : refresh layer list;
• Use the value field of the vector : if checked, the selected field is used as attribute for the conversion; pixels of the output raster have the same values as the vector attribute;
• Use constant value : if checked, the polygons are converted to raster using the selected constant
value;
• Select the type of conversion : select the type of conversion between Center of pixels and All pixels touched:
– Center of pixels: during the conversion, vector is compared to the reference raster; output raster
pixels are attributed to a polygon if pixel center is within that polygon;
– All pixels touched: during the conversion, vector is compared to the reference raster; output raster
pixels are attributed to a polygon if pixel touches that polygon;
• Select the reference raster : select a reference raster; pixels of the output raster have the same size and
alignment as the reference raster;
• : refresh layer list;
Run
• : choose the output destination and start the conversion to raster;
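The difference between the two conversion rules can be illustrated in one dimension with an axis-aligned polygon extent; the helper below is hypothetical and only models the x axis:

```python
import numpy as np

def rasterize_row(x0, pixel, n, poly_xmin, poly_xmax, all_touched=False):
    """Rasterize a polygon's x-extent onto one row of n pixels starting at x0."""
    left = x0 + np.arange(n) * pixel          # pixel left edges
    centers = left + pixel / 2.0
    if all_touched:
        # All pixels touched: set if the cell [left, left+pixel] overlaps the polygon
        return (left < poly_xmax) & (left + pixel > poly_xmin)
    # Center of pixels: set only if the pixel center falls inside the polygon
    return (centers >= poly_xmin) & (centers <= poly_xmax)

# Polygon spanning x = 8..28 over five 10 m pixels (centers at 5, 15, 25, 35, 45)
center = rasterize_row(0, 10, 5, 8, 28)
touched = rasterize_row(0, 10, 5, 8, 28, all_touched=True)
```

The "all touched" rule marks one extra pixel here, because the first cell is grazed by the polygon even though its center lies outside it.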
3.5.4 Postprocessing
The tab Postprocessing provides several functions that can be applied to the classification output.
Accuracy
This tab allows for the validation of a classification (read Accuracy Assessment (page 135)). Classification is compared to a reference raster or reference shapefile (which is automatically converted to raster). If a shapefile is selected as reference, it is possible to choose a field describing class values.
Several statistics are calculated, such as overall accuracy, user's accuracy, producer's accuracy, and Kappa hat. The output is an error raster (a .tif file showing the errors in the map), where pixel values represent the categories of comparison (i.e. combinations identified by the ErrorMatrixCode in the error matrix) between the classification and reference. Also, a text file containing the error matrix (i.e. a .csv file separated by tab) is created with the same name defined for the .tif file.
Input
• Select the classification to assess : select a classification raster (already loaded in QGIS);
• : refresh layer list;
• Select the reference shapefile or raster : select a raster or a shapefile (already loaded in QGIS), used as
reference layer (ground truth) for the accuracy assessment;
• : refresh layer list;
• Shapefile field : if a shapefile is selected as reference, select a shapefile field containing numeric class
values;
Fig. 3.34: Accuracy
Run
• : choose the output destination and start the calculation; the error matrix is displayed in the tab Output
and the error raster is loaded in QGIS;
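The error matrix, overall accuracy, and Kappa hat reported by this tool can be reproduced from the classification and reference arrays; a sketch with illustrative names:

```python
import numpy as np

def accuracy_stats(classification, reference):
    """Error matrix (rows = reference, cols = classification), overall accuracy, kappa hat."""
    classes = np.union1d(np.unique(classification), np.unique(reference))
    index = {c: i for i, c in enumerate(classes)}
    matrix = np.zeros((len(classes), len(classes)), dtype=int)
    for r, c in zip(reference.ravel(), classification.ravel()):
        matrix[index[r], index[c]] += 1
    n = matrix.sum()
    po = np.trace(matrix) / n                            # observed agreement
    pe = (matrix.sum(axis=0) * matrix.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return matrix, po, kappa

cls = np.array([1, 1, 2, 2, 2, 1])
ref = np.array([1, 1, 2, 2, 1, 1])
m, oa, kappa = accuracy_stats(cls, ref)
```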
Land cover change
Fig. 3.35: Land cover change

The tab Land cover change allows for the comparison between two classifications in order to assess land cover changes. The output is a land cover change raster (i.e. a .tif file showing the changes in the map, where each pixel represents a category of comparison (i.e. a combination) between the two classifications, identified by the ChangeCode in the land cover change statistics) and a text file containing the land cover change statistics (i.e. a .csv file separated by tab, with the same name defined for the .tif file).
Input
• Select the reference classification : select a reference classification raster (already loaded in QGIS);
• : refresh layer list;
• Select the new classification : select a new classification raster (already loaded in QGIS), to be compared with the reference classification;
• : refresh layer list;
• Report unchanged pixels: if checked, report also unchanged pixels (having the same value in both
classifications);
Run
• : choose the output destination and start the calculation; the land cover change statistics are displayed
in the tab Output (and saved in a text file) and the land cover change raster is loaded in QGIS;
Classification report
Fig. 3.36: Classification report

This tab allows for the calculation of class statistics such as number of pixels, percentage and area (area unit is defined from the image itself).
Input
• Select the classification : select a classification raster (already loaded in QGIS);
• : refresh layer list;
• Use NoData value : if checked, NoData value will be excluded from the report;
Run
• : choose the output destination and start the calculation; the report is saved in a text file and displayed
in the tab Output;
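The per-class statistics can be sketched with NumPy, assuming a square pixel of known size; the function name is illustrative:

```python
import numpy as np

def classification_report(raster, pixel_size, nodata=None):
    """Per-class pixel count, percentage, and area (pixel_size in metres)."""
    values = raster.ravel()
    if nodata is not None:
        values = values[values != nodata]     # exclude NoData from the report
    classes, counts = np.unique(values, return_counts=True)
    total = counts.sum()
    areas = counts * pixel_size * pixel_size  # square metres per class
    return [(int(c), int(n), 100.0 * n / total, float(a))
            for c, n, a in zip(classes, counts, areas)]

raster = np.array([[1, 1, 2],
                   [2, 2, 0]])
report = classification_report(raster, pixel_size=30, nodata=0)
```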
Cross classification
Fig. 3.37: Cross classification

This tab allows for the calculation of a cross classification raster and matrix. Classification is compared to a reference raster or reference shapefile (which is automatically converted to raster). This is useful for calculating the area for every combination between reference classes and classification values. If a shapefile is selected as reference, it is possible to choose a field describing class values.
The output is a cross raster that is a .tif file where pixel values represent the categories of comparison
(i.e. combinations identified by the CrossMatrixCode) between the classification and reference. Also, a text
file containing the cross matrix (i.e. a .csv file separated by tab) is created with the same name defined for the
.tif file.
Input
• Select the classification : select a classification raster (already loaded in QGIS);
• : refresh layer list;
• Use NoData value : if checked, NoData value will be excluded from the calculation;
• Select the reference shapefile or raster : select a raster or a shapefile (already loaded in QGIS), used as
reference layer;
• : refresh layer list;
• Shapefile field : if a shapefile is selected as reference, select a shapefile field containing numeric class
values;
Run
• : choose the output destination and start the calculation; the cross matrix is displayed in the tab Output
and the cross raster is loaded in QGIS;
Classification to vector
This tab allows for the conversion of a classification raster to vector shapefile.
• Select the classification : select a classification raster (already loaded in QGIS);
• : refresh layer list;
Symbology
• Use code from Signature list : if checked, color and class information are defined from ROI Signature list (page 29):
– MC ID: use the ID of macroclasses;
– C ID: use the ID of classes;
Run
• : choose the output destination and start the conversion;
Fig. 3.38: Classification to vector
Fig. 3.39: Reclassification
Reclassification
This tab allows for the reclassification (i.e. assigning a new class code to raster pixels). In particular, it eases the conversion from C ID to MC ID values.
• Select the classification : select a classification raster (already loaded in QGIS);
• : refresh layer list;
Values
• calculate C ID to MC ID values: if checked, the reclassification table is filled according to the ROI
Signature list (page 29) when Calculate unique values is clicked;
• Calculate unique values : calculate unique values in the classification and fill the reclassification table;
• Values: table containing the following fields;
– Old value: set the expression defining old values to be reclassified; Old value can be a value or an expression defined using the variable name raster (custom names can be defined in Variable name for expressions (page 104)), using Python operators (e.g. raster > 3 selects all pixels having value > 3; raster > 5 | raster < 2 selects all pixels having value > 5 or < 2; raster >= 2 & raster <= 5 selects all pixel values between 2 and 5);
– New value: set the new value for the old values defined in Old value;
• : add a row to the table;
• : remove highlighted rows from the table;
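An Old value expression such as raster > 5 | raster < 2 behaves like a NumPy boolean expression over the raster array. A sketch of the reclassification step, with the rule semantics simplified for illustration (earlier rows win, so already-reclassified pixels are not overwritten):

```python
import numpy as np

def reclassify(raster, rules):
    """rules: list of (condition_function, new_value) pairs applied in order."""
    out = raster.copy()
    assigned = np.zeros(raster.shape, dtype=bool)
    for condition, new_value in rules:
        mask = condition(raster) & ~assigned   # only pixels not yet reclassified
        out[mask] = new_value
        assigned |= mask
    return out

raster = np.array([0, 1, 3, 6, 8])
rules = [
    (lambda r: (r > 5) | (r < 2), 10),    # like Old value: raster > 5 | raster < 2
    (lambda r: (r >= 2) & (r <= 5), 20),  # like Old value: raster >= 2 & raster <= 5
]
result = reclassify(raster, rules)
```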
Symbology
• Use code from Signature list : if checked, color and class information are defined from ROI Signature list (page 29):
– MC ID: use the ID of macroclasses;
– C ID: use the ID of classes;
Run
• : choose the output destination and start the calculation; reclassified raster is loaded in QGIS;
Edit raster
This tab allows for the direct editing of pixel values in a raster. Only pixels beneath ROI polygons or vector polygons are edited.
Attention: the input raster is directly edited; it is recommended to create a backup copy of the input
raster before using this tool in order to prevent data loss.
Fig. 3.40: Edit raster
This tool can rapidly edit large rasters, especially when editing polygons are small, because pixel values are edited directly. In addition, the SCP Edit Toolbar (page 116) is available for easing raster editing using multiple values.
• Select the input raster : select a raster (already loaded in QGIS);
• : refresh layer list;
Edit raster values
• Edit values using ROI polygons: if checked, raster is edited using temporary ROI polygons in the map;
• Edit values using a vector : if checked, raster is edited using all the polygons of selected vector;
• : refresh layer list;
Edit options
• Use the value field of the vector : if checked, raster is edited using the selected vector (in Edit
values using a vector) and the polygon values of selected vector field;
• Use constant value : if checked, raster is edited using the selected constant value;
• Use expression : if checked, raster is edited according to the entered expression; the expression must contain one or more where; the example expression where(raster == 1, 2, raster) is already entered, which sets 2 where raster equals 1, and leaves the values unchanged where raster is not equal to 1;
Run
• : undo the last raster edit (available only when using ROI polygons);
• : edit the raster;
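The default expression can be read as a NumPy np.where applied only to pixels beneath the polygons; a sketch with a hypothetical polygon mask:

```python
import numpy as np

raster = np.array([[1, 1, 3],
                   [1, 2, 1]])
# Boolean mask of pixels beneath the temporary ROI or vector polygons (hypothetical)
polygon_mask = np.array([[True, True, False],
                         [False, False, False]])
# Default expression from the tab: where(raster == 1, 2, raster),
# restricted so that pixels outside the polygons keep their original value
edited = np.where(polygon_mask, np.where(raster == 1, 2, raster), raster)
```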
Classification sieve
This tab allows for the replacement of isolated pixel values with the value of the largest neighbour patch (based on GDAL Sieve). It is useful for removing small patches from a classification.
• Select the classification : select a raster (already loaded in QGIS);
• : refresh layer list;
• Size threshold : size of the patch to be replaced (in pixel unit); all patches smaller than the selected number of pixels will be replaced by the value of the largest neighbour patch;
• Pixel connection : select the type of pixel connection:
– 4: in a 3x3 window, diagonal pixels are not considered connected;
– 8: in a 3x3 window, diagonal pixels are considered connected;
Fig. 3.41: Classification sieve
Run
• : choose the output destination and start the calculation;
Classification erosion
Fig. 3.42: Classification erosion

This tab allows for removing the border of a class patch (erosion), defining the class values to be eroded and the number of pixels from the border. It is useful for classification refinement.
• Select the classification : select a raster (already loaded in QGIS);
• : refresh layer list;
• Class values : set the class values to be eroded; class values must be separated by , and - can be used
to define a range of values (e.g. 1, 3-5, 8 will select classes 1, 3, 4, 5, 8); if the text is red then the
expression contains errors;
• Size in pixels : number of pixels to be eroded from the border;
• Pixel connection : select the type of pixel connection:
– 4: in a 3x3 window, diagonal pixels are not considered connected;
– 8: in a 3x3 window, diagonal pixels are considered connected;
Run
• : choose the output destination and start the calculation;
Classification dilation
Fig. 3.43: Classification dilation

This tab allows for dilating the border of a class patch, defining the class values to be dilated and the number of pixels from the border. It is useful for classification refinement.
• Select the classification : select a raster (already loaded in QGIS);
• : refresh layer list;
• Class values : set the class values to be dilated; class values must be separated by , and - can be used
to define a range of values (e.g. 1, 3-5, 8 will select classes 1, 3, 4, 5, 8); if the text is red then the
expression contains errors;
• Size in pixels : number of pixels to be dilated from the border;
• Pixel connection : select the type of pixel connection:
– 4: in a 3x3 window, diagonal pixels are not considered connected;
– 8: in a 3x3 window, diagonal pixels are considered connected;
Run
• : choose the output destination and start the calculation;
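A 4-connected dilation of a class mask can be sketched with array shifts (an 8-connected version would also grow the diagonals). An illustration, not SCP's implementation:

```python
import numpy as np

def dilate_4(mask, size=1):
    """Dilate a boolean class mask by `size` pixels using 4-connectivity."""
    out = mask.copy()
    for _ in range(size):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # grow downward
        grown[:-1, :] |= out[1:, :]   # grow upward
        grown[:, 1:] |= out[:, :-1]   # grow rightward
        grown[:, :-1] |= out[:, 1:]   # grow leftward
        out = grown
    return out

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
dilated = dilate_4(mask)   # plus-shaped neighbourhood around the centre pixel
```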
3.5.5 Band calc
Fig. 3.44: Band calc

The Band calc allows for raster calculation on bands (i.e. calculation of pixel values) using NumPy functions. Raster bands must be already loaded in QGIS. Input rasters must be in the same projection.
In addition, it is possible to calculate a raster using decision rules.
Band list
• Band list: table containing a list of single band rasters (already loaded in QGIS);
– Variable: variable name defined automatically for every band (e.g. raster1, raster2);
– Band name: band name (i.e. the layer name in QGIS);
• : refresh image list;
Expression
Enter a mathematical expression for raster bands. In particular, NumPy functions can be used with the prefix np. (e.g. np.log10(raster1)). For a list of NumPy functions see the NumPy page.
The expression can work with both Variable and Band name (between double quotes). Also, bands in the Band set (page 96) can be referenced directly; for example bandset#b1 refers to band 1 of the Band set. Double click on any item in the Band list (page 92) to add its name to the expression. In addition, the following variables related to the Band set (page 96) are available:
• “#BLUE#”: the band with the center wavelength closest to 0.475 𝜇𝑚;
• “#RED#”: the band with the center wavelength closest to 0.65 𝜇𝑚;
• “#NIR#”: the band with the center wavelength closest to 0.85 𝜇𝑚;
Variables for output name are available:
• #BANDSET#: the name of the first band in the Band set (page 96);
• #DATE#: the current date and time (e.g. 20161110_113846527764);
If text in the Expression is green, then the syntax is correct; if text is red, then the syntax is incorrect and it is not possible to execute the calculation.
It is possible to enter multiple expressions separated by newlines such as the following example:
"raster1" + "raster2"
"raster3" - "raster4"
The above example calculates two new rasters in the output directory with the suffix _1 (e.g. calc_raster_1) for the first expression and _2 (e.g. calc_raster_2) for the second expression. Also, it is possible to define the output name using the symbol @ followed by the name, such as the following example:
"raster1" + "raster2" @ calc_1
"raster3" - "raster4" @ calc_2 The following buttons are available:
• +: plus;
• -: minus;
• *: product;
• / : ratio;
• ^: power;
• √: square root;
• (: open parenthesis;
• ): close parenthesis;
• >: greater than;
• <: less than;
• ln: natural logarithm;
• 𝜋: pi;
• ==: equal;
• !=: not equal;
• sin: sine;
• asin: inverse sine;
• cos: cosine;
• acos: inverse cosine;
• tan: tangent;
• atan: inverse tangent;
• where: conditional expression according to the syntax where( condition , value if true,
value if false) (e.g. where("raster1" == 1, 2, "raster1"));
• exp: natural exponential;
• nodata: NoData value of raster (e.g. nodata("raster1")); it can be used as value in the expression
(e.g. where("raster1" == nodata("raster1"), 0, "raster1"));
Index calculation
Index calculation allows for entering a spectral index expression (see Spectral Indices (page 126)).
• Index calculation : list of spectral indices:
– NDVI: if selected, the NDVI calculation is entered in the Expression ( (( "#NIR#" -
"#RED#") / ( "#NIR#" + "#RED#") @ NDVI) );
– EVI: if selected, the EVI calculation is entered in the Expression ( 2.5 * ( "#NIR#" -
"#RED#" ) / ( "#NIR#" + 6 * "#RED#" - 7.5 * "#BLUE#" + 1) @ EVI );
• : open a text file (.txt) containing custom expressions to be listed in Index calculation; the text file must
contain an expression for each line; each line must be in the form expression_name; expression
(separated by ;) where the expression_name is the expression name that is displayed in the Index
calculation; if you open an empty text file, the default values are restored; the following is an example of text content:
NDVI; ( "#NIR#" - "#RED#" ) / ( "#NIR#" + "#RED#" ) @NDVI
EVI; 2.5 * ( "#NIR#" - "#RED#" ) / ( "#NIR#" + 6 * "#RED#" - 7.5 * "#BLUE#" +
˓→1) @EVI
SR; ( "#NIR#" / "#RED#" ) @SR Decision rules Decision rules allows for the calculation of an output raster based on rules. Rules are conditional statements based on other rasters; if the Rule is true, the corresponding Value is assigned to the output pixel.
Rules are verified from the first to the last row in the table; if the first Rule is false, the next Rule is verified for that pixel, until the last rule. If multiple rules are true for a certain pixel, the value of the first Rule is assigned to that pixel. The NoData value is assigned to those pixels where no Rule is true.
• Decision rules: table containing the following fields;
– Value: the value assigned to pixels if the Rule is true;
– Rule: the rule to be verified (e.g. "raster1" > 0); multiple conditional statements can be
entered separated by ; (e.g. "raster1" > 0; "raster2" < 1 which means to set the
Value where raster1 > 0 and raster2 < 1);
• : move highlighted rule up;
• : move highlighted rule down;
• : add a new row to the table;
• : delete the highlighted rows from the table;
• : clear the table;
• : export the rules to a text file that can be imported later;
• : import rules from a text file;
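The first-match semantics of the rules table corresponds to np.select, which evaluates conditions in order and assigns the default (NoData) where no condition is true. A sketch with hypothetical rasters:

```python
import numpy as np

def apply_rules(rules, nodata=-999):
    """rules: list of (boolean_array, value); the first true rule wins, NoData elsewhere."""
    conditions = [mask for mask, _ in rules]
    values = [value for _, value in rules]
    return np.select(conditions, values, default=nodata)

raster1 = np.array([0.5, 2.0, -1.0])
raster2 = np.array([0.0, 0.5, 2.0])
rules = [
    ((raster1 > 0) & (raster2 < 1), 1),  # like Rule: "raster1" > 0; "raster2" < 1
    (raster2 > 1, 2),                    # like Rule: "raster2" > 1
]
output = apply_rules(rules)
```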
Output raster
The output raster is a .tif file with the same spatial resolution and projection as the input rasters; if input rasters have different spatial resolutions, then the highest resolution (i.e. minimum pixel size) is used for the output raster.
• Use NoData value : if checked, set the value of NoData pixels in output raster;
• Extent: if the following options are unchecked, the output raster extent will include the extents of all input rasters;
– Intersection: if checked, the extent of output raster equals the intersection of input raster
extents (i.e. minimum extent);
– Same as : if checked, the extent of output raster equals the extent of “Map extent” (the
extent of the map currently displayed) or a selected layer;
• Align: if checked, and Same as is checked selecting a raster, the calculation is performed using
the same extent and pixel alignment of selected raster;
• : if Expression is active and text is green, choose the output destination and start the calculation
based on Expression; if Decision rules is active and text is green, choose the output destination
and start the calculation based on Decision rules;
3.5.6 Band set
Fig. 3.45: Band set

This tab allows for the definition of a set of single band rasters (Band set) used as Input image. The Center wavelength of bands should be defined in order to use several functions of SCP.
If a Band set of single band rasters is defined, then the item << band set >> will be listed in the Working toolbar (page 23) as input image.
The Band set definition is saved with the QGIS project.
Band list
List of single band rasters loaded in QGIS.
• : open one or more raster file (single band) which are added to the Band set and loaded in QGIS;
• : refresh list of raster bands loaded in QGIS;
• : select all raster bands;
• : add selected rasters to the Band set.
Band set definition
Definition of bands composing the input image.
If the Center wavelength of bands is not defined, the band number is used and some SCP tools will be disabled. It is possible to define a multiplicative rescaling factor and additive rescaling factor for each band (for instance using the values in Landsat metadata), which are used on the fly (i.e. pixel value = original pixel value *
multiplicative rescaling factor + additive rescaling factor) during the processing.
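The on-the-fly rescaling is a linear transform per band. For example, Landsat 8 reflectance metadata typically gives a multiplicative factor of 2.0e-05 and an additive factor of -0.1:

```python
def rescale(dn, multiplicative_factor, additive_factor):
    """pixel value = original pixel value * multiplicative factor + additive factor"""
    return dn * multiplicative_factor + additive_factor

# A DN of 10000 with typical Landsat 8 reflectance factors yields about 0.1
reflectance = rescale(10000, 2.0e-05, -0.1)
```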
• Band set definition: table containing the following fields;
– Band name : name of the band; name cannot be edited;
– Center wavelength : center of the wavelength of the band;
– Multiplicative Factor : multiplicative rescaling factor;
– Additive Factor : additive rescaling factor;
• : move highlighted bands upward;
• : move highlighted bands downward;
• : automatically sort bands by name, giving priority to the ending numbers of the name;
• : remove highlighted bands from the Band set;
• : clear all bands from Band set;
• : export the Band set to a file;
• : import a previously saved Band set from file;
• Quick wavelength settings : rapid definition of band center wavelength for the following satellite sensors:
– ASTER;
– GeoEye-1;
– Landsat 8 OLI;
– Landsat 7 ETM+;
– Landsat 5 TM;
– Landsat 4 TM;
– Landsat 1, 2, and 3 MSS;
– MODIS;
– Pleiades;
– QuickBird;
– RapidEye;
– Sentinel-2;
– SPOT 4;
– SPOT 5;
– SPOT 6;
– WorldView-2 and WorldView-3;
• Wavelength unit : select the wavelength unit among:
– Band number: no unit, only band number;
– 𝜇𝑚: micrometres;
– nm: nanometres;
Band set tools
It is possible to perform several processes directly on Band set.
• Create virtual raster of band set: if checked, create a virtual raster of bands;
• Create raster of band set (stack bands): if checked, stack all the bands and create a unique .tif raster;
• Build band overviews: if checked, build raster overviews (i.e. pyramids) for improving display performance; overview files are created in the same directory as bands;
• Band calc expression: if checked, calculate the Expression (page 93) entered in Band calc (page 92);
it is recommended the use of Band set variables in the expression (e.g. bandset#b1 );
• : choose the output destination and start the process;
3.5.7 Batch
This tab allows for the automatic execution (batch) of several SCP functions using a scripting interface.
Batch
Enter a batch expression; each function must be on a new line. Functions have the following structure:

function name; function options

Each function has options, identified by a name, with the following structure:
Fig. 3.46: Batch
option name: option argument

Options must be separated by the character ;. Each function option represents an option in the corresponding interface of SCP; option arguments of type text must be enclosed between ' characters; in case of checkboxes, the value 1 represents checked, while the value 0 represents unchecked. A new line beginning with # can be used for commenting.
According to the function, some of the options are mandatory while other options can be omitted from the expression. Option names that contain path require the full path to a file. Some options require multiple arguments as lists; list items must be separated by ,.
If the expression contains errors, the text is red.
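The function;options structure above can be parsed with a simplified sketch (it does not handle ; characters inside quoted arguments, so it is an illustration rather than a full parser):

```python
def parse_batch_line(line):
    """Split "function;name : arg;name : arg" into (function, {name: arg})."""
    parts = [p.strip() for p in line.split(';')]
    function = parts[0]
    options = {}
    for part in parts[1:]:
        name, _, argument = part.partition(':')
        options[name.strip()] = argument.strip().strip("'")  # drop quotes on text args
    return function, options

function, options = parse_batch_line(
    "classification_sieve;input_raster_path : '';size_threshold : 2;pixel_connection : 4")
```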
• : clear the expression;
• : export the batch expression to a file;
• : import a previously saved batch expression from file;
Functions: the following functions are available with the corresponding options;
• Accuracy (page 78): calculate accuracy (accuracy;classification_file_path
: '';reference_file_path : '';shapefile_field_name : '';
output_raster_path : '');
• ASTER (page 69): ASTER conversion (aster_conversion;input_raster_path
: '';celsius_temperature : 0;apply_dos1 : 0;use_nodata : 1;
nodata_value : 0;create_bandset : 1;output_dir : '');
• Band calc (page 92): band calculation (band_calc;expression : '';
output_raster_path : '';extent_same_as_raster_name : '';
extent_intersection : 1;set_nodata : 0;nodata_value : 0);
• Classification output (page 34): perform classification (classification;
use_macroclass : 0;algorithm_name : 'Minimum Distance';use_lcs
: 0;use_lcs_algorithm : 0;use_lcs_only_overlap : 0;apply_mask :
0;mask_file_path : '';vector_output : 0;classification_report :
0;save_algorithm_files : 0;output_classification_path : '');
• Classification dilation (page 91): dilation of a classification (classification_dilation;
input_raster_path : '';class_values : '';size_in_pixels : 1;
pixel_connection : 4;output_raster_path : '');
• Classification erosion (page 90): erosion of a classification (classification_erosion;
input_raster_path : '';class_values : '';size_in_pixels : 1;
pixel_connection : 4;output_raster_path : '');
• Classification report (page 81): report of a classification (classification_report;
input_raster_path : '';use_nodata : 0;nodata_value : 0;
output_report_path : '');
• Classification sieve (page 88): classification sieve (classification_sieve;
input_raster_path : '';size_threshold : 2;pixel_connection : 4;
output_raster_path : '');
• Classification to vector (page 83): convert classification to vector (classification_to_vector;input_raster_path : '';
use_signature_list_code : 1;code_field : 'C_ID';
output_vector_path : '');
• Clip multiple rasters (page 72): clip multiple rasters (clip_multiple_rasters;
input_raster_path : '';output_dir : '';use_shapefile : 0;
shapefile_path : '';ul_x : '';ul_y : '';lr_x : '';lr_y : '';
nodata_value : 0;output_name_prefix : 'clip');
• Cross classification (page 82): cross classification (cross_classification;
classification_file_path : '';use_nodata : 0;nodata_value
: 0;reference_file_path : '';shapefile_field_name : '';
output_raster_path : '');
• Edit raster (page 86): edit raster values using a shapefile (edit_raster_using_shapefile;
input_raster_path : '';input_vector_path : '';vector_field_name
: '';constant_value : 0;expression : 'where(raster == 1, 2,
raster)');
• Land cover change (page 80): calculate land cover change (land_cover_change;
reference_raster_path : '';new_raster_path : '';
output_raster_path : '');
• Landsat (page 64): Landsat conversion (landsat_conversion;input_dir : '';
mtl_file_path : '';celsius_temperature : 0;apply_dos1 : 0;
use_nodata : 1;nodata_value : 0;pansharpening : 0;create_bandset
: 1;output_dir : '');
• MODIS (page 70): MODIS conversion (modis_conversion;input_raster_path
: '';reproject_wgs84 : 1;use_nodata : 1;nodata_value : -999;
create_bandset : 1;output_dir : '');
• PCA (page 76): Principal Component Analysis (pca;use_number_of_components :
0;number_of_components : 2;use_nodata : 1;nodata_value : 0;
output_dir : '');
• Reclassification (page 86): raster reclassification (reclassification;
input_raster_path : '';value_list : 'oldVal-newVal;
oldVal-newVal';use_signature_list_code : 1;code_field : 'MC_ID';
output_raster_path : '');
• Sentinel-2 (page 68): Sentinel-2 conversion (sentinel_conversion;input_dir :
'';mtd_safl1c_file_path : '';apply_dos1 : 0;use_nodata : 1;
nodata_value : 0;create_bandset : 1;output_dir : '');
• Split raster bands (page 74): split raster to single bands (split_raster_bands;
input_raster_path : '';output_dir : '';output_name_prefix :
'split');
• Stack raster bands (page 75): stack rasters into a single file (stack_raster_bands;
input_raster_path : '';output_raster_path : '');
• Vector to raster (page 77): convert vector to raster (vector_to_raster;vector_file_path
: '';use_value_field : 1;vector_field_name : '';constant_value
: 1;reference_raster_path : '';type_of_conversion : 'Center of
pixels';output_raster_path : '');
In addition, the following functions are available:
• Add raster to QGIS: add a raster to QGIS (add_raster;input_raster_path : '';
input_raster_name : '');
• Create Band set: create a Band set (create_bandset;raster_path_list : '';
center_wavelength : '';wavelength_unit : 1;multiplicative_factor
: '';additive_factor : '');
• Open training input: open a training input file (open_training_input;
training_file_path : '');
• Set working directory: set a working directory (argument is the path to a directory) (!working_dir!;'');
If a working directory is defined, !working_dir! can be entered in other functions where a path is required (e.g. add_raster;input_raster_path : '!working_dir!/raster1.tif';input_raster_name : 'raster1.tif'). An example of batch expression is:
Semi-Automatic Classification Plugin Documentation, Release 5.3.6.1
!working_dir!;'/home/user/Desktop/temp/'
add_raster;input_raster_path : '!working_dir!/raster1.tif';input_raster_name : 'raster1.tif'
band_calc;expression : 'where("raster1.tif" > 1, 1, 0)';output_raster_path : '!working_dir!/calc1.tif';set_nodata : 1;nodata_value : 0
band_calc;expression : '"raster1.tif" * "calc1.tif"';output_raster_path : '!working_dir!/calc2.tif';extent_intersection : 0
Run
• : if text in the batch expression is green, start the batch processes;
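SCP evaluates band calculation expressions with NumPy semantics. As a rough illustration only (not SCP's actual implementation), the following sketch reproduces what the two band_calc expressions in the example above compute, using a made-up array in place of raster1.tif; SCP itself handles the raster input and output.

```python
import numpy as np

# Hypothetical stand-in for the pixel values of raster1.tif
raster1 = np.array([[0, 1, 2],
                    [1, 3, 1]])

# band_calc expression: 'where("raster1.tif" > 1, 1, 0)'  ->  calc1.tif
calc1 = np.where(raster1 > 1, 1, 0)

# band_calc expression: '"raster1.tif" * "calc1.tif"'  ->  calc2.tif
calc2 = raster1 * calc1

print(calc1)
print(calc2)
```

The first expression builds a mask of pixels greater than 1; the second multiplies the original raster by that mask, zeroing all other pixels.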
3.5.8 Settings
The tab Settings allows for the customization of SCP.
Interface Customization of the interface.
Field names of training input
Set the names of fields in the Training input (page 28). Changing field names should usually be avoided.
• MC ID field : name of the Macroclass ID field (default is MC_ID);
• MC Info field : name of the Macroclass Information field (default is MC_info);
• C ID field : name of the Class ID field (default is C_ID);
• C Info field : name of the Class Information field (default is C_info);
• : reset field names to default;
ROI style
Change ROI color and transparency for a better visualization of temporary ROIs on the map.
• ROI color : button for changing ROI color;
• Transparency : change ROI transparency;
• : reset ROI color and transparency to default;
Fig. 3.47: Interface
Variable name for expressions
Set the variable name used in expressions of the Reclassification (page 86) and Edit raster (page 86).
• Variable name : set variable name (default is raster);
• : reset variable name to default;
Temporary group name
Set the temporary group name in QGIS Layers used for temporary layers.
• Group name : set group name (default is Class_temp_group);
• : reset group name to default;
Dock
• Download news on startup: if checked, news about the SCP and related services are downloaded on
startup and displayed in Dock;
Processing
Classification process
• Play sound when finished : if checked, play a sound when the classification process is completed;
• Use virtual rasters for temp files : if checked, create virtual rasters for certain temporary files,
instead of creating real rasters; it is useful for reducing disk space usage during calculations;
• Raster compression : if checked, a lossless compression (DEFLATE or PACKBITS) is applied to
raster outputs in order to save disk space; it is recommended to check this option, however compressed files
are sometimes larger than files without compression;
RAM
• Available RAM (MB) : set the available RAM (in MB) that is used during the processes in order
to improve the SCP performance; this value should be half of the system RAM (e.g. 1024MB if system has
2GB of RAM); in case of errors, set a value lower than 512MB;
Temporary directory
• : select a new temporary directory where temporary files are saved during processing; the path to
the current temporary directory is displayed; default is a system temporary directory;
• : reset to default temporary directory;
Fig. 3.48: Processing
Debug
Debugging utilities for the creation of a Log file (i.e. recording of SCP activities for reporting issues) and testing SCP dependencies.
If you found a plugin error, please read How can I report an error? (page 213) .
Log file
• Records events in a log file : if checked, start recording events in a Log file;
• : export the Log file (i.e. a .txt file);
• : clear the content of Log file;
Test
• Test dependencies : test SCP dependencies (GDAL, GDAL subprocess, NumPy, SciPy, Matplotlib,
Internet connection); results are displayed in a window;
3.6 Spectral Signature Plot
The window Spectral Signature Plot includes several functions for displaying spectral signature values as a function of wavelength (defined in the Band set (page 96)). Signatures can be added to the Spectral Signature Plot through the SCP dock (page 25).
The window Spectral Signature Plot includes also some functions useful for the definition of value ranges used by the Land Cover Signature Classification (page 131) (see LCS threshold (page 60)).
Overlapping signatures (belonging to different classes or macroclasses) are highlighted in orange in the table Plot Signature list (page 108); the overlapping check is performed considering MC ID or C ID according to the setting Use MC ID C ID in Classification algorithm (page 33). Overlapping signatures sharing the same ID are not highlighted.
Fig. 3.50: Spectral Signature Plot
The functions are described in detail in the following paragraphs, using these conventions:
= Input date
= Input text
= List
= Input number
= Optional
= Configuration stored in the active project of QGIS
= Configuration stored in QGIS registry
= Slider
= Table
3.6.1 Plot Signature list
• Signature list:
– S: checkbox field; if checked, the spectral signature is displayed in the plot;
– MC ID: signature Macroclass ID;
– MC Info: signature Macroclass Information;
– C ID: signature Class ID;
– C Info: signature Class Information;
– Color [overlap MC_ID-C_ID]: signature color; also, the combination MC ID-C ID is displayed
in case of overlap with other signatures (see Land Cover Signature Classification (page 131));
– Min B X: minimum value of band X; this value can be edited;
– Max B X: maximum value of band X; this value can be edited;
• : remove highlighted signatures from this list;
• : add highlighted spectral signatures to ROI Signature list (page 29);
• : calculate the spectral distances of spectral signatures displayed in the plot; distances are reported in
the tab Spectral distances (page 111);
Automatic thresholds
Set thresholds automatically for highlighted signatures in the table Plot Signature list (page 108); if no signature is highlighted, then the threshold is applied to all the signatures.
• Min Max : set the threshold based on the minimum and maximum of each band;
• 𝜎* : set an automatic threshold calculated as (band value + (𝜎 * v)), where 𝜎 is the standard
deviation of each band and v is the defined value;
• : undo the last automatic thresholds;
• From ROI : set the threshold using the temporary ROI pixel values, according to the following checkboxes:
– +: if checked, signature threshold is extended to include pixel signature;
– –: if checked, signature threshold is reduced to exclude pixel signature;
• From pixel : set the threshold by clicking on a pixel, according to the following checkboxes:
– +: if checked, signature threshold is extended to include pixel signature;
– –: if checked, signature threshold is reduced to exclude pixel signature;
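The two automatic threshold options can be summarized with a small NumPy sketch (the ROI values below are made up, and this only illustrates the arithmetic described above, not SCP's internal code): Min Max takes the per-band minimum and maximum of the ROI pixels, while σ* widens the range around the signature mean by a multiple of the per-band standard deviation.

```python
import numpy as np

# Hypothetical ROI pixel values: rows = pixels, columns = bands
roi_pixels = np.array([[0.10, 0.25, 0.40],
                       [0.12, 0.27, 0.38],
                       [0.11, 0.24, 0.42]])

mean = roi_pixels.mean(axis=0)    # signature value of each band
sigma = roi_pixels.std(axis=0)    # standard deviation of each band

# "Min Max": threshold from the minimum and maximum of each band
min_threshold = roi_pixels.min(axis=0)
max_threshold = roi_pixels.max(axis=0)

# "sigma * v": range of (band value +/- (sigma * v)), here with v = 2
v = 2
lower = mean - sigma * v
upper = mean + sigma * v
```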
Plot
Left click and hold inside the plot to move the view of the plot. Use the mouse wheel to zoom in and out the view of the plot. Right click and hold inside the plot to zoom in a specific area of the plot. The legend inside the plot can be moved using the mouse.
Plot commands:
• : automatically fit the plot to data;
• : save the plot image to file (available formats are .jpg, .png, and .pdf);
• : activate the cursor for interactively changing the value range of highlighted signatures in the plot;
click the plot to set the minimum or maximum value of a band (also for several signatures simultaneously);
cursor is deactivated when moving outside the plot area;
• Plot value range: if checked, plot the value range for each signature (semi-transparent area);
• Band lines: if checked, display a vertical line for each band (center wavelength);
• Grid: if checked, display a grid;
• Max characters : set the maximum length of text in the legend;
• x y: display x y coordinates of mouse cursor inside the plot;
Fig. 3.51: Spectral Signature: Example of spectral signature plot
Signature details
Display the details about spectral signatures (i.e. Wavelength, Values, and Standard deviation). In case of signatures calculated from ROIs, the ROI size (number of pixels) is also displayed.
Fig. 3.52: Spectral Signature: Signature details
Fig. 3.53: Spectral Signature: Example of signature details
Spectral distances
Fig. 3.54: Spectral Signature: Spectral distances
Display spectral distances of signatures (see Plot Signature list (page 108)), which are useful for assessing ROI separability (see Spectral Distance (page 133)).
The following spectral distances are calculated:
• Jeffries-Matusita Distance (page 133): range [0 = identical, 2 = different]; useful in particular for
Maximum Likelihood (page 129) classifications;
• Spectral Angle (page 134): range [0 = identical, 90 = different]; useful in particular for Spectral Angle
Mapping (page 130) classifications;
• Euclidean Distance (page 134): useful in particular for Minimum Distance (page 129) classifications;
• Bray-Curtis Similarity (page 135): range [0 = different, 100 = identical]; useful in general;
Values are displayed in red if signatures are particularly similar.
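Three of the distances listed above can be sketched directly from two signature mean vectors (the values below are made up for illustration; the Jeffries-Matusita Distance is omitted because it additionally requires the band covariance matrix of each signature):

```python
import numpy as np

# Hypothetical mean spectral signatures of two ROIs (one value per band)
x = np.array([0.05, 0.08, 0.06, 0.35, 0.25])
y = np.array([0.07, 0.10, 0.09, 0.30, 0.22])

# Euclidean Distance: straight-line distance in band space
euclidean = np.sqrt(np.sum((x - y) ** 2))

# Spectral Angle (degrees): 0 = identical direction, 90 = orthogonal
cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
angle = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Bray-Curtis Similarity (%): 100 = identical, 0 = completely different
bray_curtis = 100 * (1 - np.sum(np.abs(x - y)) / np.sum(x + y))
```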
Fig. 3.55: Spectral Signature: Example of spectral distances
3.7 Scatter Plot
The window Scatter plot displays pixel values for two raster bands as points in the 2D space. Scatter plots are useful for assessing ROI separability between two bands.
The functions are described in detail in the following paragraphs, using these conventions:
= Input date
= Input text
= List
= Input number
= Optional
= Configuration stored in the active project of QGIS
= Configuration stored in QGIS registry
= Slider
= Table
3.7.1 Scatter list
• Scatter list:
– S: checkbox field; if checked, the spectral signature is displayed in the plot;
– MC ID: signature Macroclass ID;
– MC Info: signature Macroclass Information;
– C ID: signature Class ID;
– C Info: signature Class Information;
Fig. 3.56: Scatter Plot
– Color: color field; double click to select a color for the plot;
• Band X : X band of the plot;
• Band Y : Y band of the plot;
• Precision : use custom precision for calculation (precision should be selected according to pixel values):
– 4 = 10^-4
– 3 = 10^-3
– 2 = 10^-2
– 1 = 10^-1
– 0 = 1
– -1 = 10^1
– -2 = 10^2
– -3 = 10^3
• Calculate : calculate the scatter plot for the ROIs checked in the list;
• : remove highlighted signatures from this list;
• : add a temporary scatter plot to the list (as MC Info = tempScatter) and start the plot calculation of the last temporary ROI (see Working toolbar (page 23));
• : add a temporary scatter plot to the list (as MC Info = tempScatter) and start the plot calculation of pixels in current display extent;
• : add a temporary scatter plot to the list (as MC Info = tempScatter) and start the plot calculation of the entire image;
WARNING: Using a precision value that is too high can result in slow calculation or failure.
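The Precision option effectively bins pixel values to a step of 10^-p before counting them in the scatter plot; a tiny sketch of that idea (the helper name and values are made up, and SCP's internal binning may differ in detail):

```python
import numpy as np

def apply_precision(values, p):
    """Round values to a step of 10**(-p), as in the Precision option
    (e.g. p = 2 gives bins of 0.01)."""
    step = 10.0 ** (-p)
    return np.round(values / step) * step

# Hypothetical reflectance values of one band
values = np.array([0.12345, 0.12367, 0.45601])

coarse = apply_precision(values, 2)   # 0.01 bins: first two values merge
fine = apply_precision(values, 4)     # 0.0001 bins: values stay distinct
```

This also makes the warning above concrete: a higher precision (larger p) produces far more distinct bins, hence a slower calculation.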
Scatter raster
This tool allows for the drawing of selection polygons inside the scatter plot; these selection polygons are used for creating a Scatter raster that is a temporary raster classified according to the intersection of scatter plots and drawn polygons.
Pixels of the Input image (page 28) are classified, according to scatter plot bands, if pixel values are in the range of intersection between scatter plots and selection polygons (polygons should not overlap). The value assigned to the Scatter raster pixels is the sequential number of selection polygon; also the raster color is derived from the selection polygon.
After the creation of a new Scatter raster, old rasters are placed in QGIS Layers inside a layer group named Class_temp_group (custom name can be defined in Temporary group name (page 104)) and are deleted when the QGIS session is closed.
• : activate the cursor for interactively drawing a polygon in the plot; left click on the plot to define the
vertices and right click to define the last vertex closing the polygon;
• color: select the color of polygon (which is used also in the Scatter raster);
• : remove all the selection polygons from the plot;
• : calculate the Scatter raster and display it in the map;
• : calculate the spectral signature of the Scatter raster (considering all the classified pixels) using the
Input image (page 28), and save the signature to the ROI Signature list (page 29);
• Extent : extent of the Scatter raster; available options are:
– Same as display: extent is the same as map display;
– Same as image: extent is the same as the whole image;
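The classification step described above, assigning a value to pixels whose (band X, band Y) pair falls inside a drawn polygon, can be sketched with Matplotlib's point-in-polygon test (the arrays and polygon below are made up; this is only an illustration of the idea, not SCP's implementation):

```python
import numpy as np
from matplotlib.path import Path

# Hypothetical band values (two rasters with the same shape)
band_x = np.array([[0.1, 0.3], [0.6, 0.8]])
band_y = np.array([[0.2, 0.4], [0.5, 0.9]])

# Selection polygon drawn in the scatter plot (band_x, band_y space)
polygon = Path([(0.0, 0.0), (0.5, 0.0), (0.5, 0.6), (0.0, 0.6)])

# Pixels whose (x, y) value pair falls inside the polygon get value 1
points = np.column_stack([band_x.ravel(), band_y.ravel()])
inside = polygon.contains_points(points).reshape(band_x.shape)
scatter_raster = np.where(inside, 1, 0)
```

With several non-overlapping polygons, each would assign its own sequential value, producing the classified Scatter raster.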
Plot
Left click and hold inside the plot to move the view of the plot. Use the mouse wheel to zoom in and out the view of the plot. Right click and hold inside the plot to zoom in a specific area of the plot.
• Colormap : select a colormap that is applied to highlighted scatter plots in the list when is
clicked; if no scatter plot is highlighted then the colormap is applied to all the scatter plots;
• : automatically fit the plot to data;
• : save the plot image to file (available formats are .jpg, .png, and .pdf);
• x y: display x y coordinates of mouse cursor inside the plot;
Fig. 3.57: Example Scatter Plot
3.8 SCP Edit Toolbar
Fig. 3.58: SCP Tools
The SCP Edit Toolbar allows for the direct editing of pixel values in the input raster defined in Edit raster (page 86) using ROI polygons. Only pixels beneath ROI polygons are edited.
• : open the tool SCP Edit Toolbar for selecting the input raster;
• : edit the raster using the selected constant value;
• : edit the raster using the selected constant value;
• : edit the raster using the selected constant value;
• : undo the last raster edit (available only when using ROI polygons);
CHAPTER 4
Brief Introduction to Remote Sensing
• Basic Definitions (page 118)
– GIS definition (page 118)
– Remote Sensing definition (page 118)
– Sensors (page 120)
– Radiance and Reflectance (page 120)
– Spectral Signature (page 120)
– Landsat Satellite (page 120)
– Sentinel-2 Satellite (page 122)
– ASTER Satellite (page 122)
– MODIS Products (page 123)
– Color Composite (page 123)
– Principal Component Analysis (page 124)
– Pan-sharpening (page 125)
– Spectral Indices (page 126)
• Supervised Classification Definitions (page 126)
– Land Cover (page 126)
– Supervised Classification (page 126)
– Training Areas (page 126)
– Classes and Macroclasses (page 127)
– Classification Algorithms (page 128)
– Spectral Distance (page 133)
– Classification Result (page 135)
– Accuracy Assessment (page 135)
• Image conversion to reflectance (page 136)
– Radiance at the Sensor’s Aperture (page 136)
– Top Of Atmosphere (TOA) Reflectance (page 136)
– Surface Reflectance (page 137)
– DOS1 Correction (page 137)
• Conversion to Temperature (page 139)
– Conversion to At-Satellite Brightness Temperature (page 140)
– Estimation of Land Surface Temperature (page 140)
• References (page 141)
4.1 Basic Definitions
This chapter provides basic definitions about GIS and remote sensing. For other useful resources see Free and valuable resources about remote sensing and GIS (page 217).
4.1.1 GIS definition
There are several definitions of GIS (Geographic Information Systems), which is not simply a program. In general, GIS are systems that allow for the use of geographic information (data have spatial coordinates). In particular, GIS allow for the view, query, calculation and analysis of spatial data, which are mainly distinguished in raster or vector data structures. Vector is made of objects that can be points, lines or polygons, and each object can have one or more attribute values; a raster is a grid (or image) where each cell has an attribute value (Fisher and Unwin, 2005). Several GIS applications use raster images that are derived from remote sensing.
4.1.2 Remote Sensing definition
A general definition of Remote Sensing is “the science and technology by which the characteristics of objects of interest can be identified, measured or analyzed without direct contact” (JARS, 1993).
Usually, remote sensing is the measurement of the energy that is emanated from the Earth’s surface. If the source of the measured energy is the sun, then it is called passive remote sensing, and the result of this measurement can be a digital image (Richards and Jia, 2006). If the measured energy is not emitted by the Sun but from the sensor platform, then it is defined as active remote sensing, such as radar sensors which work in the microwave range (Richards and Jia, 2006).
The electromagnetic spectrum is “the system that classifies, according to wavelength, all energy (from short cosmic to long radio) that moves, harmonically, at the constant velocity of light” (NASA, 2013). Passive sensors measure energy from the optical regions of the electromagnetic spectrum: visible, near infrared (NIR), shortwave IR, and thermal IR (see Figure Electromagnetic-Spectrum (page 119)).
The interaction between solar energy and materials depends on the wavelength; solar energy goes from the Sun to the Earth and then to the sensor. Along this path, solar energy is (NASA, 2013):
• Transmitted - The energy passes through with a change in velocity as determined by the index of refraction
for the two media in question.
• Absorbed - The energy is given up to the object through electron or molecular reactions.
• Reflected - The energy is returned unchanged with the angle of incidence equal to the angle of reflection. Reflectance is the ratio of reflected energy to that incident on a body. The wavelength reflected (not absorbed) determines the color of an object.
Fig. 4.1: Electromagnetic-Spectrum
by <NAME> (SVG version of File:Electromagnetic-Spectrum.png) [CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
http://commons.wikimedia.org/wiki/File%3AElectromagnetic-Spectrum.svg
• Scattered - The direction of energy propagation is randomly changed. Rayleigh and Mie scatter are the two
most important types of scatter in the atmosphere.
• Emitted - Actually, the energy is first absorbed, then re-emitted, usually at longer wavelengths. The object
heats up.
4.1.3 Sensors
Sensors can be on board airplanes or satellites, measuring the electromagnetic radiation at specific ranges (usually called bands). As a result, the measures are quantized and converted into a digital image, where each picture element (i.e. pixel) has a discrete value in units of Digital Number (DN) (NASA, 2013). The resulting images have different characteristics (resolutions) depending on the sensor. There are several kinds of resolutions:
• Spatial resolution, usually measured in pixel size, “is the resolving power of an instrument needed for the
discrimination of features and is based on detector size, focal length, and sensor altitude” (NASA, 2013);
spatial resolution is also referred to as geometric resolution or IFOV;
• Spectral resolution is the number and location in the electromagnetic spectrum (defined by two wavelengths) of the spectral bands (NASA, 2013); in multispectral sensors, each band corresponds to an image;
• Radiometric resolution, usually measured in bits (binary digits), is the range of available brightness values,
which in the image correspond to the maximum range of DNs; for example an image with 8 bit resolution
has 256 levels of brightness (Richards and Jia, 2006);
• For satellite sensors, there is also the temporal resolution, which is the time required for revisiting the same area of the Earth (NASA, 2013).
4.1.4 Radiance and Reflectance
Sensors measure the radiance, which corresponds to the brightness in a given direction toward the sensor; it is also useful to define the reflectance as the ratio of reflected versus total power energy.
4.1.5 Spectral Signature
The spectral signature is the reflectance as a function of wavelength (see Figure Spectral Reflectance Curves of Four Different Targets (page 121)); each material has a unique signature, therefore it can be used for material classification (NASA, 2013).
4.1.6 Landsat Satellite
Landsat is a set of multispectral satellites developed by NASA (National Aeronautics and Space Administration of USA) since the early 1970s.
Landsat images are widely used for environmental research. The resolutions of the Landsat 4 and Landsat 5 sensors are reported in the following table (from http://landsat.usgs.gov/band_designations_landsat_satellites.php); also, Landsat temporal resolution is 16 days (NASA, 2013).
Landsat 4 and Landsat 5 Bands
Landsat 4, Landsat 5 Bands Wavelength [micrometers] Resolution [meters]
Band 1 - Blue 0.45 - 0.52 30
Band 2 - Green 0.52 - 0.60 30
Band 3 - Red 0.63 - 0.69 30
Band 4 - Near Infrared (NIR) 0.76 - 0.90 30
Band 5 - SWIR 1.55 - 1.75 30
Band 6 - Thermal Infrared 10.40 - 12.50 120 (resampled to 30)
Band 7 - SWIR 2.08 - 2.35 30
Fig. 4.2: Spectral Reflectance Curves of Four Different Targets
(from NASA, 2013)
The resolutions of the Landsat 7 sensor are reported in the following table (from http://landsat.usgs.gov/band_designations_landsat_satellites.php); also, Landsat temporal resolution is 16 days (NASA, 2013).
Landsat 7 Bands
Landsat 7 Bands Wavelength [micrometers] Resolution [meters]
Band 1 - Blue 0.45 - 0.52 30
Band 2 - Green 0.52 - 0.60 30
Band 3 - Red 0.63 - 0.69 30
Band 4 - Near Infrared (NIR) 0.77 - 0.90 30
Band 5 - SWIR 1.57 - 1.75 30
Band 6 - Thermal Infrared 10.40 - 12.50 60 (resampled to 30)
Band 7 - SWIR 2.09 - 2.35 30
Band 8 - Panchromatic 0.52 - 0.90 15
The resolutions of the Landsat 8 sensor are reported in the following table (from http://landsat.usgs.gov/band_designations_landsat_satellites.php); also, Landsat temporal resolution is 16 days (NASA, 2013).
Landsat 8 Bands
Landsat 8 Bands Wavelength [micrometers] Resolution [meters]
Band 1 - Coastal aerosol 0.43 - 0.45 30
Band 2 - Blue 0.45 - 0.51 30
Band 3 - Green 0.53 - 0.59 30
Band 4 - Red 0.64 - 0.67 30
Band 5 - Near Infrared (NIR) 0.85 - 0.88 30
Band 6 - SWIR 1 1.57 - 1.65 30
Band 7 - SWIR 2 2.11 - 2.29 30
Band 8 - Panchromatic 0.50 - 0.68 15
Band 9 - Cirrus 1.36 - 1.38 30
Band 10 - Thermal Infrared (TIRS) 1 10.60 - 11.19 100 (resampled to 30)
Band 11 - Thermal Infrared (TIRS) 2 11.50 - 12.51 100 (resampled to 30)
A vast archive of images is freely available from the U.S. Geological Survey. For more information about how to freely download Landsat images read this.
Images are identified with the paths and rows of the WRS (Worldwide Reference System for Landsat).
4.1.7 Sentinel-2 Satellite
Sentinel-2 is a multispectral satellite developed by the European Space Agency (ESA) in the frame of Copernicus land monitoring services. Sentinel-2 acquires 13 spectral bands with a spatial resolution of 10m, 20m or 60m depending on the band, as illustrated in the following table (ESA, 2015).
Sentinel-2 Bands
Sentinel-2 Bands Central Wavelength [micrometers] Resolution [meters]
Band 1 - Coastal aerosol 0.443 60
Band 2 - Blue 0.490 10
Band 3 - Green 0.560 10
Band 4 - Red 0.665 10
Band 5 - Vegetation Red Edge 0.705 20
Band 6 - Vegetation Red Edge 0.740 20
Band 7 - Vegetation Red Edge 0.783 20
Band 8 - NIR 0.842 10
Band 8A - Vegetation Red Edge 0.865 20
Band 9 - Water vapour 0.945 60
Band 10 - SWIR - Cirrus 1.375 60
Band 11 - SWIR 1.610 20
Band 12 - SWIR 2.190 20
Sentinel-2 images are freely available from the ESA website https://scihub.esa.int/dhus/.
4.1.8 ASTER Satellite
The ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) satellite was launched in 1999 as a collaboration between the Japanese Ministry of International Trade and Industry (MITI) and NASA. ASTER has 14 bands whose spatial resolution varies with wavelength: 15m in the visible and near-infrared, 30m in the shortwave infrared, and 90m in the thermal infrared (USGS, 2015). ASTER bands are illustrated in the following table (due to a sensor failure, SWIR data acquired since April 1, 2008 is not available). An additional band 3B (backward-looking near-infrared) provides stereo coverage.
ASTER Bands
ASTER Bands Wavelength [micrometers] Resolution [meters]
Band 1 - Green 0.52 - 0.60 15
Band 2 - Red 0.63 - 0.69 15
Band 3N - Near Infrared (NIR) 0.78 - 0.86 15
Band 4 - SWIR 1 1.60 - 1.70 30
Band 5 - SWIR 2 2.145 - 2.185 30
Band 6 - SWIR 3 2.185 - 2.225 30
Band 7 - SWIR 4 2.235 - 2.285 30
Band 8 - SWIR 5 2.295 - 2.365 30
Band 9 - SWIR 6 2.360 - 2.430 30
Band 10 - TIR 1 8.125 - 8.475 90
Band 11 - TIR 2 8.475 - 8.825 90
Band 12 - TIR 3 8.925 - 9.275 90
Band 13 - TIR 4 10.25 - 10.95 90
Band 14 - TIR 5 10.95 - 11.65 90
4.1.9 MODIS Products
The MODIS (Moderate Resolution Imaging Spectroradiometer) is an instrument operating on the Terra and Aqua satellites, launched by NASA in 1999 and 2002 respectively. Its temporal resolution allows for viewing the entire Earth surface every one to two days, with a swath width of 2,330km. Its sensors measure 36 spectral bands at three spatial resolutions: 250m, 500m, and 1,000m (see https://lpdaac.usgs.gov/dataset_discovery/modis).
Several products are available, such as surface reflectance and vegetation indices. In this manual we are considering the surface reflectance bands available at 250m and 500m spatial resolution (Vermote, Roger, & Ray, 2015).
MODIS Bands
MODIS Bands Wavelength [micrometers] Resolution [meters]
Band 1 - Red 0.62 - 0.67 250 - 500
Band 2 - Near Infrared (NIR) 0.841 - 0.876 250 - 500
Band 3 - Blue 0.459 - 0.479 500
Band 4 - Green 0.545 - 0.565 500
Band 5 - SWIR 1 1.230 - 1.250 500
Band 6 - SWIR 2 1.628 - 1.652 500
Band 7 - SWIR 3 2.105 - 2.155 500
The following products (Version 6, see https://lpdaac.usgs.gov/dataset_discovery/modis/modis_products_table) are available for download (Vermote, Roger, & Ray, 2015):
• MOD09GQ: daily reflectance at 250m spatial resolution from Terra MODIS;
• MYD09GQ: daily reflectance at 250m spatial resolution from Aqua MODIS;
• MOD09GA: daily reflectance at 500m spatial resolution from Terra MODIS;
• MYD09GA: daily reflectance at 500m spatial resolution from Aqua MODIS;
• MOD09Q1: reflectance at 250m spatial resolution, which is a composite of MOD09GQ (each pixel contains
the best possible observation during an 8-day period);
• MYD09Q1: reflectance at 250m spatial resolution, which is a composite of MYD09GQ (each pixel contains
the best possible observation during an 8-day period);
• MOD09A1: reflectance at 500m spatial resolution, which is a composite of MOD09GA (each pixel contains the best possible observation during an 8-day period);
• MYD09A1: reflectance at 500m spatial resolution, which is a composite of MYD09GA (each pixel contains the best possible observation during an 8-day period);
4.1.10 Color Composite
Often, a combination of three individual monochrome images is created, in which each is assigned a given color; this is called a color composite and is useful for photo interpretation (NASA, 2013). Color composites are usually expressed as:
“R G B = Br Bg Bb”
where:
• R stands for Red;
• G stands for Green;
• B stands for Blue;
• Br is the band number associated to the Red color;
• Bg is the band number associated to the Green color;
• Bb is the band number associated to the Blue color.
The following Figure Color composite of a Landsat 8 image (page 124) shows a color composite “R G B = 4 3 2”
of a Landsat 8 image (for Landsat 7 the same color composite is R G B = 3 2 1; for Sentinel-2 is R G B = 4 3 2) and a color composite “R G B = 5 4 3” (for Landsat 7 the same color composite is R G B = 4 3 2; for Sentinel-2 is R G B = 8 4 3). The composite “R G B = 5 4 3” is useful for the interpretation of the image because vegetation pixels appear red (healthy vegetation reflects a large part of the incident light in the near-infrared wavelength, resulting in higher reflectance values for band 5, thus higher values for the associated color red).
Fig. 4.3: Color composite of a Landsat 8 image
Data available from the U.S. Geological Survey
4.1.11 Principal Component Analysis
Principal Component Analysis (PCA) is a method for reducing the dimensions of measured variables (bands) to the principal components (JARS, 1993).
The principal component transformation provides a new set of bands (principal components) having the following characteristics: principal components are uncorrelated, and each component has variance less than the previous component. Therefore, this is an efficient method for extracting information and data compression (Ready and Wintz, 1973).
Given an image with N spectral bands, the principal components are obtained by matrix calculation (Ready and Wintz, 1973; Richards and Jia, 2006):
Y = D^t X
where:
• Y = vector of principal components
• D = matrix of eigenvectors of the covariance matrix C_x in X space
• t denotes vector transpose
And X is calculated as:
X = P − M
where:
• P = vector of spectral values associated with each pixel
• M = vector of the mean associated with each band
Thus, the mean of X associated with each band is 0. D is formed by the eigenvectors (of the covariance matrix C_x) ordered as the eigenvalues from maximum to minimum, in order to have the maximum variance in the first component. This way, the principal components are uncorrelated and each component has variance less than the previous component (Ready and Wintz, 1973).
Usually the first two components contain more than 90% of the variance. For example, the first principal components can be displayed in a Color Composite (page 123) for highlighting Land Cover (page 126) classes, or used as input for Supervised Classification (page 126).
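The transformation above can be sketched in NumPy on a synthetic image (random values standing in for pixels; this only illustrates the matrix calculation, not SCP's PCA tool): the data is centered with the band means M, D is built from the eigenvectors of the covariance matrix ordered by decreasing eigenvalue, and Y = D^t X is applied to every pixel.

```python
import numpy as np

# Hypothetical image: 100 pixels x 3 bands (rows = pixels, cols = bands)
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
P[:, 1] += 0.9 * P[:, 0]           # make band 2 correlated with band 1

M = P.mean(axis=0)                  # vector of band means
X = P - M                           # centered data (mean of each band is 0)
C = np.cov(X, rowvar=False)         # covariance matrix C_x in X space

# Eigenvectors ordered by decreasing eigenvalue
eigenvalues, eigenvectors = np.linalg.eigh(C)
order = np.argsort(eigenvalues)[::-1]
D = eigenvectors[:, order]

# Y = D^t X applied to each pixel (each row of X)
Y = X @ D
```

As stated above, the resulting components are uncorrelated and their variances decrease from the first component onward.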
4.1.12 Pan-sharpening Pan-sharpening is the combination of the spectral information of multispectral bands (MS), which have lower spatial resolution (for Landsat bands, spatial resolution is 30m), with the spatial resolution of a panchromatic band (PAN), which for Landsat 7 and 8 it is 15m. The result is a multispectral image with the spatial resolution of the panchromatic band (e.g. 15m). In SCP, a Brovey Transform is applied, where the pan-sharpened values of each multispectral band are calculated as (Johnson, Tateishi and Hoan, 2012):
𝑀 𝑆𝑝𝑎𝑛 = 𝑀 𝑆 * 𝑃 𝐴𝑁/𝐼 where 𝐼 is Intensity, which is a function of multispectral bands.
The following weights for I are defined, basing on several tests performed using the SCP. For Landsat 8, Intensity is calculated as:
𝐼 = (0.42 * 𝐵𝑙𝑢𝑒 + 0.98 * 𝐺𝑟𝑒𝑒𝑛 + 0.6 * 𝑅𝑒𝑑)/2
For Landsat 7, Intensity is calculated as:
𝐼 = (0.42 * 𝐵𝑙𝑢𝑒 + 0.98 * 𝐺𝑟𝑒𝑒𝑛 + 0.6 * 𝑅𝑒𝑑 + 𝑁𝐼𝑅)/3
Fig. 4.4: Example of pan-sharpening of a Landsat 8 image. Left, original multispectral bands (30m); right, pan-sharpened bands (15m)
Data available from the U.S. Geological Survey
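The Brovey Transform for a single Landsat 8 pixel can be sketched as follows; the band values are illustrative, not from a real scene:

```python
# One multispectral pixel (illustrative reflectance-like values)
# and the co-located panchromatic value.
blue, green, red = 0.08, 0.10, 0.12
pan = 0.15

# Intensity for Landsat 8, using the weights from the text.
intensity = (0.42 * blue + 0.98 * green + 0.6 * red) / 2

# Brovey transform: MS_pan = MS * PAN / I for each multispectral band.
sharpened = {name: value * pan / intensity
             for name, value in {"blue": blue, "green": green, "red": red}.items()}
```

In practice the same ratio PAN/I is applied pixel by pixel over the whole raster, after resampling the multispectral bands to the panchromatic grid.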
4.1.13 Spectral Indices
Spectral indices are operations between spectral bands that are useful for extracting information such as vegetation cover (JARS, 1993). One of the most popular spectral indices is the Normalized Difference Vegetation Index (NDVI), defined as (JARS, 1993):
𝑁𝐷𝑉𝐼 = (𝑁𝐼𝑅 − 𝑅𝑒𝑑)/(𝑁𝐼𝑅 + 𝑅𝑒𝑑)
NDVI values range from -1 to 1. Dense and healthy vegetation show higher values, while non-vegetated areas show low NDVI values.
Another index is the Enhanced Vegetation Index (EVI), which attempts to account for atmospheric effects such as path radiance by using the difference between the blue and the red bands (Didan et al., 2015). EVI is defined as:
𝐸𝑉𝐼 = 𝐺 * (𝑁𝐼𝑅 − 𝑅𝑒𝑑)/(𝑁𝐼𝑅 + 𝐶1 * 𝑅𝑒𝑑 − 𝐶2 * 𝐵𝑙𝑢𝑒 + 𝐿)
where: 𝐺 is a scaling factor, 𝐶1 and 𝐶2 are coefficients for the atmospheric effects, and 𝐿 is a factor accounting for the differential NIR and Red radiant transfer through the canopy. Typical coefficient values are: 𝐺 = 2.5, 𝐿 = 1, 𝐶1 = 6, 𝐶2 = 7.5 (Didan et al., 2015).
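Both indices are direct per-pixel arithmetic; a minimal sketch for a single pixel, with illustrative reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0):
    """Enhanced Vegetation Index with the typical coefficients."""
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)

# Illustrative reflectance values for a vegetated pixel:
# high NIR and low Red give a high NDVI.
print(ndvi(0.5, 0.1))
print(evi(0.5, 0.1, 0.05))
```

Applied band-wise over rasters, the same expressions produce the NDVI and EVI images used for display or further classification.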
4.2 Supervised Classification Definitions
This chapter provides basic definitions about supervised classifications.
4.2.1 Land Cover
Land cover is the material on the ground, such as soil, vegetation, water, asphalt, etc. (Fisher and Unwin, 2005). Depending on the sensor resolution, the number and kind of land cover classes that can be identified in the image can vary significantly.
4.2.2 Supervised Classification
A semi-automatic classification (also known as supervised classification) is an image processing technique that allows for the identification of materials in an image according to their spectral signatures. There are several kinds of classification algorithms, but the general purpose is to produce a thematic map of the land cover.
Image processing and GIS spatial analyses require specific software such as the Semi-Automatic Classification Plugin for QGIS.
4.2.3 Training Areas
Usually, supervised classifications require the user to select one or more Regions of Interest (ROIs, also known as Training Areas) for each land cover class identified in the image. ROIs are polygons drawn over homogeneous areas of the image that overlay pixels belonging to the same land cover class.
Region Growing Algorithm
The Region Growing Algorithm allows for selecting pixels similar to a seed pixel, considering the spectral similarity (i.e. spectral distance) of adjacent pixels. In SCP the Region Growing Algorithm is available for the training area creation. The parameter distance is related to the similarity of pixel values (the lower the value, the more similar the selected pixels are) to the seed pixel (i.e. selected by clicking on a pixel). An additional parameter is the maximum width, which is the side length of a square, centred at the seed pixel, which inscribes the training area (if all the pixels had the same value, the training area would be this square). The minimum size is used as a constraint (for every single band), selecting at least the pixels that are more similar to the seed pixel until the number of selected pixels equals the minimum size.
Fig. 4.5: A multispectral image processed to produce a land cover classification
(Landsat image provided by USGS)
In figure Region growing example (page 127) the central pixel is used as seed (image a) for the region growing of one band (image b) with the parameter spectral distance = 0.1; similar pixels are selected to create the training area (image c and image d).
Fig. 4.6: Region growing example
4.2.4 Classes and Macroclasses
Land cover classes are identified with an arbitrary ID code (i.e. Identifier). SCP allows for the definition of Macroclass ID (i.e. MC ID) and Class ID (i.e. C ID), which are the identification codes of land cover classes.
A Macroclass is a group of ROIs having different Class ID, which is useful when one needs to classify materials that have different spectral signatures in the same land cover class. For instance, one can identify grass (e.g. ID
class = 1 and Macroclass ID = 1) and trees (e.g. ID class = 2 and Macroclass ID = 1) as vegetation class (e.g. Macroclass ID = 1). Multiple Class IDs can be assigned to the same Macroclass ID,
but the same Class ID cannot be assigned to multiple Macroclass IDs, as shown in the following table.
Example of Macroclasses

Macroclass name  Macroclass ID  Class name  Class ID
Vegetation       1              Grass       1
Vegetation       1              Trees       2
Built-up         2              Buildings   3
Built-up         2              Roads       4

Therefore, Classes are subsets of a Macroclass as illustrated in Figure Macroclass example (page 128).
Fig. 4.7: Macroclass example
If the use of Macroclass is not required for the study purpose, then the same Macroclass ID can be defined for all the ROIs (e.g. Macroclass ID = 1) and Macroclass values are ignored in the classification process.
4.2.5 Classification Algorithms
The spectral signatures (spectral characteristics) of reference land cover classes are calculated considering the values of pixels under each ROI having the same Class ID (or Macroclass ID). Therefore, the classification algorithm classifies the whole image by comparing the spectral characteristics of each pixel to the spectral characteristics of reference land cover classes. SCP implements the following classification algorithms.
Minimum Distance
Minimum Distance algorithm calculates the Euclidean distance 𝑑(𝑥, 𝑦) between spectral signatures of image pixels and training spectral signatures, according to the following equation:
𝑑(𝑥, 𝑦) = sqrt( Σ (𝑥𝑖 − 𝑦𝑖)^2 )    with the sum over 𝑖 = 1, ..., 𝑛
where:
• 𝑥 = spectral signature vector of an image pixel;
• 𝑦 = spectral signature vector of a training area;
• 𝑛 = number of image bands.
Therefore, the distance is calculated for every pixel in the image, assigning the class of the spectral signature that is closest, according to the following discriminant function (adapted from Richards and Jia, 2006):
𝑥 ∈ 𝐶𝑘 ⇐⇒ 𝑑(𝑥, 𝑦𝑘 ) < 𝑑(𝑥, 𝑦𝑗 )∀𝑘 ̸= 𝑗 where:
• 𝐶𝑘 = land cover class 𝑘;
• 𝑦𝑘 = spectral signature of class 𝑘;
• 𝑦𝑗 = spectral signature of class 𝑗.
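The minimum-distance decision rule above can be sketched for a single pixel; the class signatures and band values here are illustrative, not real training data:

```python
import math

def euclidean(x, y):
    """Euclidean distance between two spectral signature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def classify_min_distance(pixel, signatures, threshold=None):
    """Return the class with the closest signature, or None above threshold."""
    cls, dist = min(((k, euclidean(pixel, y)) for k, y in signatures.items()),
                    key=lambda kv: kv[1])
    if threshold is not None and dist >= threshold:
        return None  # pixel left unclassified
    return cls

# Hypothetical mean signatures (3 bands) for two classes.
sigs = {"vegetation": [0.05, 0.1, 0.4], "soil": [0.2, 0.25, 0.3]}
print(classify_min_distance([0.06, 0.12, 0.38], sigs))
```

The optional threshold corresponds to the 𝑇𝑖 condition described next in the text: a pixel whose smallest distance still exceeds the threshold stays unclassified.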
It is possible to define a threshold 𝑇𝑖 in order to exclude pixels below this value from the classification:
𝑥 ∈ 𝐶𝑘 ⇐⇒ 𝑑(𝑥, 𝑦𝑘 ) < 𝑑(𝑥, 𝑦𝑗 )∀𝑘 ̸= 𝑗
𝑎𝑛𝑑
𝑑(𝑥, 𝑦𝑘 ) < 𝑇𝑖
Maximum Likelihood
Maximum Likelihood algorithm calculates the probability distributions for the classes, related to Bayes’ theorem,
estimating if a pixel belongs to a land cover class. In particular, the probability distributions for the classes are assumed to be of the form of multivariate normal models (Richards & Jia, 2006). In order to use this algorithm, a sufficient number of pixels is required for each training area, allowing for the calculation of the covariance matrix.
The discriminant function, described by Richards and Jia (2006), is calculated for every pixel as:
𝑔𝑘 (𝑥) = ln 𝑝(𝐶𝑘 ) − ln |Σ𝑘 | − (𝑥 − 𝑦𝑘 )^𝑡 Σ𝑘^(−1) (𝑥 − 𝑦𝑘 )
where:
• 𝐶𝑘 = land cover class 𝑘;
• 𝑥 = spectral signature vector of an image pixel;
• 𝑝(𝐶𝑘 ) = probability that the correct class is 𝐶𝑘 ;
• |Σ𝑘 | = determinant of the covariance matrix of the data in class 𝐶𝑘 ;
• Σ𝑘^(−1) = inverse of the covariance matrix;
• 𝑦𝑘 = spectral signature vector of class 𝑘.
Therefore:
𝑥 ∈ 𝐶𝑘 ⇐⇒ 𝑔𝑘 (𝑥) > 𝑔𝑗 (𝑥)∀𝑘 ̸= 𝑗
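The discriminant can be sketched in a few lines if the covariance matrices are assumed diagonal (a simplification: the full algorithm inverts the complete covariance matrix). Class means, variances, and priors below are illustrative:

```python
import math

def discriminant(pixel, mean, variances, prior):
    """g_k(x) for a diagonal covariance matrix (simplifying assumption)."""
    log_det = sum(math.log(v) for v in variances)           # ln|Sigma_k|
    quad = sum((x - m) ** 2 / v
               for x, m, v in zip(pixel, mean, variances))  # (x-y)^t Sigma^-1 (x-y)
    return math.log(prior) - log_det - quad

# Hypothetical classes: (mean signature, per-band variances, prior p(Ck)).
classes = {
    "water":      ([0.02, 0.03, 0.01], [1e-4, 1e-4, 1e-4], 0.5),
    "vegetation": ([0.05, 0.10, 0.40], [4e-4, 4e-4, 4e-4], 0.5),
}

pixel = [0.04, 0.09, 0.35]
best = max(classes, key=lambda k: discriminant(pixel, *classes[k]))
```

The pixel is assigned to the class whose discriminant is highest, matching the condition 𝑔𝑘(𝑥) > 𝑔𝑗(𝑥) above.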
Fig. 4.8: Maximum Likelihood example
In addition, it is possible to define a threshold to the discriminant function in order to exclude pixels below this value from the classification. Considering a threshold 𝑇𝑖 the classification condition becomes:
𝑥 ∈ 𝐶𝑘 ⇐⇒ 𝑔𝑘 (𝑥) > 𝑔𝑗 (𝑥)∀𝑘 ̸= 𝑗
𝑎𝑛𝑑
𝑔𝑘 (𝑥) > 𝑇𝑖
Maximum Likelihood is one of the most common supervised classification algorithms; however, the classification process can be slower than Minimum Distance (page 129).
Spectral Angle Mapping
The Spectral Angle Mapping calculates the spectral angle between spectral signatures of image pixels and training spectral signatures. The spectral angle 𝜃 is defined as (Kruse et al., 1993):
𝜃(𝑥, 𝑦) = cos^(−1)( Σ 𝑥𝑖 𝑦𝑖 / ( (Σ 𝑥𝑖^2)^(1/2) * (Σ 𝑦𝑖^2)^(1/2) ) )    with sums over 𝑖 = 1, ..., 𝑛
Where:
• 𝑥 = spectral signature vector of an image pixel;
• 𝑦 = spectral signature vector of a training area;
• 𝑛 = number of image bands.
Therefore a pixel belongs to the class having the lowest angle, that is:
𝑥 ∈ 𝐶𝑘 ⇐⇒ 𝜃(𝑥, 𝑦𝑘 ) < 𝜃(𝑥, 𝑦𝑗 )∀𝑘 ̸= 𝑗 where:
• 𝐶𝑘 = land cover class 𝑘;
• 𝑦𝑘 = spectral signature of class 𝑘;
• 𝑦𝑗 = spectral signature of class 𝑗.
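A minimal sketch of the angle computation and the lowest-angle decision rule, with illustrative signatures:

```python
import math

def spectral_angle(x, y):
    """Spectral angle theta(x, y) in radians (Kruse et al., 1993)."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    # Clamp for floating-point safety before acos.
    return math.acos(max(-1.0, min(1.0, dot / norm)))

# Hypothetical mean signatures (3 bands) for two classes.
sigs = {"vegetation": [0.05, 0.1, 0.4], "soil": [0.2, 0.25, 0.3]}

# This pixel has the same spectral shape as vegetation, only brighter:
# the angle to vegetation is ~0, so the lowest-angle rule picks it.
pixel = [0.1, 0.2, 0.8]
best = min(sigs, key=lambda k: spectral_angle(pixel, sigs[k]))
```

Because the angle depends on the shape of the signature and not its magnitude, this example also shows why Spectral Angle Mapping is relatively insensitive to illumination differences.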
Fig. 4.9: Spectral Angle Mapping example
In order to exclude from the classification pixels whose angle exceeds a certain value, it is possible to define a threshold 𝑇𝑖 :
𝑥 ∈ 𝐶𝑘 ⇐⇒ 𝜃(𝑥, 𝑦𝑘 ) < 𝜃(𝑥, 𝑦𝑗 )∀𝑘 ̸= 𝑗
𝑎𝑛𝑑
𝜃(𝑥, 𝑦𝑘 ) < 𝑇𝑖
Spectral Angle Mapping is widely used, especially with hyperspectral data.
Parallelepiped Classification
Parallelepiped classification is an algorithm that considers a range of values for each band, forming a multidimensional parallelepiped that defines a land cover class. A pixel is classified if its values are inside a parallelepiped. One of the major drawbacks is that pixels whose signatures lie in the overlapping area of two or more parallelepipeds cannot be classified (Richards and Jia, 2006).
Land Cover Signature Classification
Land Cover Signature Classification is available in SCP (see Land Cover Signature Classification (page 34)). This classification allows for the definition of spectral thresholds for each training input signature (a minimum value and a maximum value for each band). The thresholds of each training input signature define a spectral region belonging to a certain land cover class.
Spectral signatures of image pixels are compared to the training spectral signatures; a pixel belongs to class X if the pixel spectral signature is completely contained in the spectral region defined by class X. In case of pixels falling inside overlapping regions or outside any spectral region, it is possible to use additional classification algorithms (i.e. Minimum Distance (page 129), Maximum Likelihood (page 129), Spectral Angle Mapping (page 130)) considering the spectral characteristics of the original input signature.
In the following image, a scheme illustrates the Land Cover Signature Classification for a simple case of two spectral bands 𝑥 and 𝑦. User defined spectral regions define three classes (𝑔𝑎 , 𝑔𝑏 , and 𝑔𝑐 ). Point 𝑝1 belongs to class 𝑔𝑎 and point 𝑝2 belongs to class 𝑔𝑏 . However, point 𝑝3 is inside the spectral regions of both classes 𝑔𝑏 and 𝑔𝑐 (overlapping regions); in this case, point 𝑝3 will be unclassified or classified according to an additional classification algorithm. Point 𝑝4 is outside any spectral region, therefore it will be unclassified or classified according to an additional classification algorithm. Given that point 𝑝4 belongs to class 𝑔𝑐 , the spectral region thereof could be extended to include point 𝑝4 .
Fig. 4.10: Land cover signature classification
This is similar to Parallelepiped Classification (page 131), with the exception that spectral regions are defined by the user and can be assigned independently for the upper and lower bounds. One can imagine spectral regions as the set of all the spectral signatures of pixels belonging to one class.
In figure Plot of spectral ranges (page 133) the spectral ranges of three classes (𝑔𝑎 , 𝑔𝑏 , and 𝑔𝑐 ) are displayed; the colored lines inside the ranges (i.e. semi-transparent area) represent the spectral signatures of pixels that defined the upper and lower bounds of the respective ranges. Pixel 𝑝1 (dotted line) belongs to class 𝑔𝑏 because the spectral signature thereof is completely inside the range of class 𝑔𝑏 (in the upper limit); pixel 𝑝2 (dashed line) is unclassified because the spectral signature does not fall completely inside any range; pixel 𝑝3 (dotted line) belongs to class 𝑔𝑎 .
It is worth noticing that these spectral thresholds can be applied to any spectral signature, regardless of its spectral characteristics; this function can be very useful for separating similar spectral signatures that differ only in one band, defining thresholds that include or exclude specific signatures. In fact, classes are correctly separated if their spectral ranges are not overlapping in at least one band. Of course, even if spectral regions are overlapping, chances are that no pixel will fall inside the overlapping region and be misclassified, because the upper (or lower) bound of a range does not imply the existence, in the image, of any spectral signature having the maximum (or minimum) range values for all the bands (for instance, pixel 𝑝1 of figure Plot of spectral ranges (page 133) could not exist).
One of the main benefits of the Land Cover Signature Classification is that it is possible to select pixels and include their signatures in a spectral range; therefore, the classification should be the direct representation of the class expected for every spectral signature. This is very suitable for the classification of a single land cover class (defined by specific spectral thresholds), leaving unclassified the rest of the image that is of no interest for the purpose of the classification.
Fig. 4.11: Plot of spectral ranges
Algorithm raster
An algorithm raster represents the “distance” (according to the definition of the classification algorithm) of an image pixel to a specific spectral signature.
In general, an algorithm raster is produced for every spectral signature used as training input. The value of every pixel is the result of the algorithm calculation for a specific spectral signature. Therefore, a pixel belongs to class X if the value of the algorithm raster corresponding to class X is the lowest in case of Minimum Distance
(page 129) or Spectral Angle Mapping (page 130) (or highest in case of Maximum Likelihood (page 129)).
Given a classification, a combination of algorithm rasters can be produced, in order to create a raster with the lowest “distances” (i.e. pixels have the value of the algorithm raster corresponding to the class they belong to in the classification). Therefore, this raster can be useful to identify pixels that require the collection of more similar spectral signatures (see Classification preview (page 24)).
4.2.6 Spectral Distance
It is useful to evaluate the spectral distance (or separability) between training signatures or pixels, in order to assess whether different classes are too similar and could cause classification errors. The SCP implements the following algorithms for assessing the similarity of spectral signatures.
Jeffries-Matusita Distance
Jeffries-Matusita Distance calculates the separability of a pair of probability distributions. This can be particularly meaningful for evaluating the results of Maximum Likelihood (page 129) classifications.
The Jeffries-Matusita Distance 𝐽𝑥𝑦 is calculated as (<NAME>, 2006):
𝐽𝑥𝑦 = 2 * (1 − 𝑒^(−𝐵))
where 𝐵 is:
𝐵 = (1/8) * (𝑥 − 𝑦)^𝑡 * [(Σ𝑥 + Σ𝑦)/2]^(−1) * (𝑥 − 𝑦) + (1/2) * ln( |(Σ𝑥 + Σ𝑦)/2| / ( |Σ𝑥|^(1/2) * |Σ𝑦|^(1/2) ) )
where:
• 𝑥 = first spectral signature vector;
• 𝑦 = second spectral signature vector;
• Σ𝑥 = covariance matrix of sample 𝑥;
• Σ𝑦 = covariance matrix of sample 𝑦;
The Jeffries-Matusita Distance is asymptotic to 2 when signatures are completely different, and tends to 0 when signatures are identical.
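A minimal sketch of the calculation, assuming diagonal covariance matrices so the determinants and inverse reduce to per-band products (the full algorithm uses the complete matrices); signatures and variances are illustrative:

```python
import math

def jm_distance(x, y, var_x, var_y):
    """Jeffries-Matusita distance, diagonal-covariance simplification."""
    b = 0.0
    for xi, yi, vx, vy in zip(x, y, var_x, var_y):
        vm = (vx + vy) / 2  # per-band element of (Sigma_x + Sigma_y)/2
        b += (xi - yi) ** 2 / (8 * vm) + 0.5 * math.log(vm / math.sqrt(vx * vy))
    return 2 * (1 - math.exp(-b))

# Identical signatures -> 0; very distant signatures -> asymptotically 2.
print(jm_distance([1.0, 2.0], [1.0, 2.0], [0.1, 0.1], [0.1, 0.1]))
print(jm_distance([0.0, 0.0], [10.0, 10.0], [0.01, 0.01], [0.01, 0.01]))
```

The saturation at 2 is visible immediately: once the exponent 𝐵 is large, further separation barely changes 𝐽𝑥𝑦.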
Spectral Angle
The Spectral Angle is the most appropriate for assessing the Spectral Angle Mapping (page 130) algorithm. The spectral angle 𝜃 is defined as (Kruse et al., 1993):
𝜃(𝑥, 𝑦) = cos^(−1)( Σ 𝑥𝑖 𝑦𝑖 / ( (Σ 𝑥𝑖^2)^(1/2) * (Σ 𝑦𝑖^2)^(1/2) ) )    with sums over 𝑖 = 1, ..., 𝑛
Where:
• 𝑥 = spectral signature vector of an image pixel;
• 𝑦 = spectral signature vector of a training area;
• 𝑛 = number of image bands.
The spectral angle ranges from 0° when signatures are identical to 90° when signatures are completely different.
Euclidean Distance
The Euclidean Distance is particularly useful for evaluating the results of Minimum Distance (page 129) classifications. In fact, the distance is defined as:
𝑑(𝑥, 𝑦) = sqrt( Σ (𝑥𝑖 − 𝑦𝑖)^2 )    with the sum over 𝑖 = 1, ..., 𝑛
where:
• 𝑥 = first spectral signature vector;
• 𝑦 = second spectral signature vector;
• 𝑛 = number of image bands.
The Euclidean Distance is 0 when signatures are identical and tends to increase according to the spectral distance of signatures.
Bray-Curtis Similarity
The Bray-Curtis Similarity is a statistic used for assessing the relationship between two samples. It is useful in general for assessing the similarity of spectral signatures, and the Bray-Curtis Similarity 𝑆(𝑥, 𝑦) is calculated as:
𝑆(𝑥, 𝑦) = 100 − ( Σ |𝑥𝑖 − 𝑦𝑖| / Σ (𝑥𝑖 + 𝑦𝑖) ) * 100    with sums over 𝑖 = 1, ..., 𝑛
where:
• 𝑥 = first spectral signature vector;
• 𝑦 = second spectral signature vector;
• 𝑛 = number of image bands.
The Bray-Curtis Similarity is calculated as a percentage and ranges from 0 when signatures are completely different to 100 when spectral signatures are identical.
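As a minimal sketch of the percentage formula above (signature values are illustrative):

```python
def bray_curtis_similarity(x, y):
    """Bray-Curtis similarity: 100 = identical, 0 = completely different."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return 100 - (num / den) * 100

print(bray_curtis_similarity([0.2, 0.4], [0.2, 0.4]))  # identical signatures
print(bray_curtis_similarity([1.0, 1.0], [0.0, 0.0]))  # disjoint signatures
```

Note that the formula assumes non-negative signature values (as with reflectance or DN), otherwise the denominator could vanish.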
4.2.7 Classification Result
The result of the classification process is a raster (see an example of Landsat classification in Figure Landsat classification (page 135)), where pixel values correspond to class IDs and each color represents a land cover class.
Fig. 4.12: Landsat classification
Data available from the U.S. Geological Survey A certain amount of errors can occur in the land cover classification (i.e. pixels assigned to a wrong land cover class), due to spectral similarity of classes, or wrong class definition during the ROI collection.
4.2.8 Accuracy Assessment
After the classification process, it is useful to assess the accuracy of the land cover classification, in order to identify and measure map errors. Usually, accuracy assessment is performed through the calculation of an error matrix, which is a table that compares map information with reference data (i.e. ground truth data) for a number of sample areas (Congalton and Green, 2009).
The following table is a scheme of an error matrix, where k is the number of classes identified in the land cover classification, and n is the total number of collected sample units. The items in the major diagonal (𝑎𝑖𝑖) are the number of samples correctly identified, while the other items are classification errors.
Scheme of Error Matrix

           Ground truth 1  Ground truth 2  ...  Ground truth k  Total
Class 1    𝑎11             𝑎12             ...  𝑎1𝑘             𝑎1+
Class 2    𝑎21             𝑎22             ...  𝑎2𝑘             𝑎2+
...        ...             ...             ...  ...             ...
Class k    𝑎𝑘1             𝑎𝑘2             ...  𝑎𝑘𝑘             𝑎𝑘+
Total      𝑎+1             𝑎+2             ...  𝑎+𝑘             𝑛

Therefore, it is possible to calculate the overall accuracy as the ratio between the number of samples that are correctly classified (the sum of the major diagonal) and the total number of sample units n (Congalton and Green, 2009).
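The overall accuracy calculation can be sketched with a small, made-up error matrix for k = 3 classes:

```python
# Hypothetical error matrix: rows = classified, columns = ground truth.
matrix = [
    [50,  3,  2],
    [ 4, 40,  1],
    [ 1,  2, 47],
]

n = sum(sum(row) for row in matrix)                      # total sample units
correct = sum(matrix[i][i] for i in range(len(matrix)))  # major diagonal
overall_accuracy = correct / n
print(overall_accuracy)
```

Producer's and user's accuracies per class follow the same pattern, dividing each diagonal item by its column total or row total respectively.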
For further information, the following documentation is freely available: Landsat 7 Science Data User’s Handbook, Remote Sensing Note, or Wikipedia.
4.3 Image conversion to reflectance
This chapter provides information about the conversion to reflectance implemented in SCP.
4.3.1 Radiance at the Sensor’s Aperture
Radiance is the “flux of energy (primarily irradiant or incident energy) per solid angle leaving a unit surface area in a given direction”; “Radiance is what is measured at the sensor and is somewhat dependent on reflectance” (NASA, 2011, p. 47).
Images such as Landsat or Sentinel-2 are composed of several bands and a metadata file which contains information required for the conversion to reflectance.
Landsat images are provided in radiance, scaled prior to output. For Landsat images, Spectral Radiance at the sensor’s aperture (𝐿𝜆 , measured in [watts/(meter squared * ster * 𝜇𝑚)]) is given by (https://landsat.usgs.gov/Landsat8_Using_Product.php):
𝐿𝜆 = 𝑀𝐿 * 𝑄𝑐𝑎𝑙 + 𝐴𝐿 where:
• 𝑀𝐿 = Band-specific multiplicative rescaling factor from Landsat metadata (RADIANCE_MULT_BAND_x, where x is the band number)
• 𝐴𝐿 = Band-specific additive rescaling factor from Landsat metadata (RADIANCE_ADD_BAND_x, where x is the band number)
• 𝑄𝑐𝑎𝑙 = Quantized and calibrated standard product pixel values (DN)
Sentinel-2 images (Level-1C) are already provided in Top Of Atmosphere (TOA) Reflectance (page 136), scaled prior to output (ESA, 2015).
4.3.2 Top Of Atmosphere (TOA) Reflectance
Images in radiance can be converted to Top Of Atmosphere (TOA) Reflectance (combined surface and atmospheric reflectance) in order to reduce between-scene variability through a normalization for solar irradiance. This TOA reflectance (𝜌𝑝 ), which is the unitless ratio of reflected versus total power energy (NASA, 2011), is calculated by:
𝜌𝑝 = (𝜋 * 𝐿𝜆 * 𝑑2 )/(𝐸𝑆𝑈 𝑁𝜆 * 𝑐𝑜𝑠𝜃𝑠 )
where:
• 𝐿𝜆 = Spectral radiance at the sensor’s aperture (at-satellite radiance)
• 𝑑 = Earth-Sun distance in astronomical units (provided with the Landsat 8 metadata file; an Excel file is available from http://landsathandbook.gsfc.nasa.gov/excel_docs/d.xls)
• 𝐸𝑆𝑈 𝑁𝜆 = Mean solar exo-atmospheric irradiances
• 𝜃𝑠 = Solar zenith angle in degrees, which is equal to 𝜃𝑠 = 90° − 𝜃𝑒 where 𝜃𝑒 is the Sun elevation
It is worth pointing out that Landsat 8 images are provided with band-specific rescaling factors that allow for the direct conversion from DN to TOA reflectance.
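The DN-to-radiance and radiance-to-TOA steps can be sketched together; all metadata values below are illustrative, not from a real scene:

```python
import math

# Illustrative Landsat-style metadata values (not from a real scene).
ML, AL = 0.01, -0.1        # RADIANCE_MULT_BAND_x, RADIANCE_ADD_BAND_x
dn = 120                   # quantized pixel value Qcal
esun = 1000.0              # mean solar exo-atmospheric irradiance
d = 1.0                    # Earth-Sun distance in astronomical units
sun_elevation = 60.0       # theta_e in degrees, from metadata

# DN -> at-sensor radiance: L_lambda = ML * Qcal + AL.
radiance = ML * dn + AL

# Solar zenith angle: theta_s = 90 - theta_e.
theta_s = math.radians(90.0 - sun_elevation)

# Radiance -> TOA reflectance: rho_p = pi * L * d^2 / (ESUN * cos(theta_s)).
toa = (math.pi * radiance * d ** 2) / (esun * math.cos(theta_s))
```

For Landsat 8 the same result is obtained more directly from the REFLECTANCE_MULT/ADD rescaling factors, as the text notes.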
Sentinel-2 images are already provided in scaled TOA reflectance, which can be converted to TOA reflectance with a simple calculation using the Quantification Value provided in the metadata (see https://sentinel.esa.int/documents/247904/349490/S2_MSI_Product_Specification.pdf).
4.3.3 Surface Reflectance
The effects of the atmosphere (i.e. a disturbance on the reflectance that varies with the wavelength) should be considered in order to measure the reflectance at the ground.
As described by Moran et al. (1992), the land surface reflectance (𝜌) is:
𝜌 = [𝜋 * (𝐿𝜆 − 𝐿𝑝 ) * 𝑑2 ]/[𝑇𝑣 * ((𝐸𝑆𝑈 𝑁𝜆 * 𝑐𝑜𝑠𝜃𝑠 * 𝑇𝑧 ) + 𝐸𝑑𝑜𝑤𝑛 )]
where:
• 𝐿𝑝 is the path radiance
• 𝑇𝑣 is the atmospheric transmittance in the viewing direction
• 𝑇𝑧 is the atmospheric transmittance in the illumination direction
• 𝐸𝑑𝑜𝑤𝑛 is the downwelling diffuse irradiance
Therefore, we need several atmospheric measurements in order to calculate 𝜌 (physically-based corrections).
Alternatively, it is possible to use image-based techniques for the calculation of these parameters, without in-situ measurements during image acquisition. It is worth mentioning that Landsat Surface Reflectance High Level Data Products for Landsat 8 are available (for more information read http://landsat.usgs.gov/CDR_LSR.php).
4.3.4 DOS1 Correction
The Dark Object Subtraction (DOS) is a family of image-based atmospheric corrections. Chavez (1996) explains that “the basic assumption is that within the image some pixels are in complete shadow and their radiances received at the satellite are due to atmospheric scattering (path radiance). This assumption is combined with the fact that very few targets on the Earth’s surface are absolute black, so an assumed one-percent minimum reflectance is better than zero percent”. It is worth pointing out that the accuracy of image-based techniques is generally lower than physically-based corrections, but they are very useful when no atmospheric measurements are available as they can improve the estimation of land surface reflectance. The path radiance is given by (Sobrino, et al., 2004):
𝐿𝑝 = 𝐿𝑚𝑖𝑛 − 𝐿𝐷𝑂1%
where:
• 𝐿𝑚𝑖𝑛 = “radiance that corresponds to a digital count value for which the sum of all the pixels with digital counts lower or equal to this value is equal to the 0.01% of all the pixels from the image considered” (Sobrino, et al., 2004, p. 437), therefore the radiance obtained with that digital count value (𝐷𝑁𝑚𝑖𝑛 )
• 𝐿𝐷𝑂1% = radiance of Dark Object, assumed to have a reflectance value of 0.01
In particular, for Landsat images:
𝐿𝑚𝑖𝑛 = 𝑀𝐿 * 𝐷𝑁𝑚𝑖𝑛 + 𝐴𝐿
Sentinel-2 images are converted to radiance prior to DOS1 calculation.
The radiance of Dark Object is given by (Sobrino, et al., 2004):
𝐿𝐷𝑂1% = 0.01 * [(𝐸𝑆𝑈 𝑁𝜆 * 𝑐𝑜𝑠𝜃𝑠 * 𝑇𝑧 ) + 𝐸𝑑𝑜𝑤𝑛 ] * 𝑇𝑣 /(𝜋 * 𝑑2 )
Therefore the path radiance is:
𝐿𝑝 = 𝑀𝐿 * 𝐷𝑁𝑚𝑖𝑛 + 𝐴𝐿 − 0.01 * [(𝐸𝑆𝑈 𝑁𝜆 * 𝑐𝑜𝑠𝜃𝑠 * 𝑇𝑧 ) + 𝐸𝑑𝑜𝑤𝑛 ] * 𝑇𝑣 /(𝜋 * 𝑑2 )
There are several DOS techniques (e.g. DOS1, DOS2, DOS3, DOS4), based on different assumptions about 𝑇𝑣 , 𝑇𝑧 , and 𝐸𝑑𝑜𝑤𝑛 . The simplest technique is the DOS1, where the following assumptions are made (Moran et al., 1992):
• 𝑇𝑣 = 1
• 𝑇𝑧 = 1
• 𝐸𝑑𝑜𝑤𝑛 = 0
Therefore the path radiance is:
𝐿𝑝 = 𝑀𝐿 * 𝐷𝑁𝑚𝑖𝑛 + 𝐴𝐿 − 0.01 * 𝐸𝑆𝑈 𝑁𝜆 * 𝑐𝑜𝑠𝜃𝑠 /(𝜋 * 𝑑2 )
And the resulting land surface reflectance is given by:
𝜌 = [𝜋 * (𝐿𝜆 − 𝐿𝑝 ) * 𝑑2 ]/(𝐸𝑆𝑈 𝑁𝜆 * 𝑐𝑜𝑠𝜃𝑠 )
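The DOS1 chain above (path radiance from the darkest digital count, then corrected reflectance) can be sketched as follows; every numeric value is illustrative, not from a real scene:

```python
import math

# Illustrative Landsat-style values (not from a real scene).
ML, AL = 0.012, -60.0       # radiance rescaling factors
dn_min = 5250               # DN_min: darkest digital count (0.01% rule)
esun = 1000.0               # mean solar exo-atmospheric irradiance
d = 1.0                     # Earth-Sun distance in astronomical units
theta_s = math.radians(30.0)

# DOS1 path radiance: Lp = ML*DNmin + AL - 0.01*ESUN*cos(theta_s)/(pi*d^2).
l_min = ML * dn_min + AL
l_do1 = 0.01 * esun * math.cos(theta_s) / (math.pi * d ** 2)
l_p = l_min - l_do1

def surface_reflectance(radiance):
    """DOS1-corrected land surface reflectance for a pixel radiance."""
    return (math.pi * (radiance - l_p) * d ** 2) / (esun * math.cos(theta_s))
```

Subtracting 𝐿𝑝 before the reflectance conversion is the whole correction: bright pixels change little, while dark pixels are pulled toward the assumed one-percent minimum reflectance.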
ESUN [W/(m2 * 𝜇𝑚)] values for Landsat sensors are provided in the following table.

ESUN values for Landsat bands

Band  Landsat 1  Landsat 2  Landsat 3  Landsat 4  Landsat 5  Landsat 7
      MSS*       MSS*       MSS*       TM*        TM*        ETM+**
4     1823       1829       1839       1028       1031       1044
5     1559       1539       1555       219.8      220        225.7
6     1276       1268       1291
7     880.1      886.6      887.9      83.49      83.44      82.06

* from Chander, Markham, & Helder (2009)
** from http://landsathandbook.gsfc.nasa.gov/data_prod/prog_sect11_3.html

For Landsat 8, 𝐸𝑆𝑈𝑁 can be calculated as (from http://grass.osgeo.org/grass65/manuals/i.landsat.toar.html):
𝐸𝑆𝑈𝑁 = (𝜋 * 𝑑^2) * 𝑅𝐴𝐷𝐼𝐴𝑁𝐶𝐸_𝑀𝐴𝑋𝐼𝑀𝑈𝑀 / 𝑅𝐸𝐹𝐿𝐸𝐶𝑇𝐴𝑁𝐶𝐸_𝑀𝐴𝑋𝐼𝑀𝑈𝑀
where RADIANCE_MAXIMUM and REFLECTANCE_MAXIMUM are provided by the image metadata.
ESUN [W/(m2 * 𝜇𝑚)] values for the Sentinel-2 sensor (provided in the image metadata) are illustrated in the following table.

ESUN values for Sentinel-2 bands

Band  Sentinel-2
1     1913.57
2     1941.63
3     1822.61
4     1512.79
5     1425.56
6     1288.32
7     1163.19
8     1036.39
8A    955.19
9     813.04
10    367.15
11    245.59
12    85.25

ESUN [W/(m2 * 𝜇𝑚)] values for the ASTER sensor are illustrated in the following table (from Finn, et al., 2012).
ESUN values for ASTER bands

Band  ASTER
1     1848
2     1549
3     1114
4     225.4
5     86.63
6     81.85
7     74.85
8     66.49
9     59.85

An example comparison of TOA reflectance, DOS1 corrected reflectance, and the Landsat Surface Reflectance High Level Data Products (ground truth) is provided in Figure Spectral signatures of a built-up pixel (page 139).
Fig. 4.13: Spectral signatures of a built-up pixel
Comparison of TOA reflectance, DOS1 corrected reflectance and Landsat Surface Reflectance High Level Data Products
4.4 Conversion to Temperature
This chapter provides the basic information about the conversion to At-Satellite Brightness Temperature implemented in SCP and the estimation of Land Surface Temperature.
4.4.1 Conversion to At-Satellite Brightness Temperature
For thermal bands, the conversion of DN to At-Satellite Brightness Temperature is given by (from https://landsat.usgs.gov/Landsat8_Using_Product.php):
𝑇𝐵 = 𝐾2 /𝑙𝑛[(𝐾1 /𝐿𝜆 ) + 1]
where:
• 𝐾1 = Band-specific thermal conversion constant (in watts/meter squared * ster * 𝜇𝑚)
• 𝐾2 = Band-specific thermal conversion constant (in kelvin)
and 𝐿𝜆 is the Spectral Radiance at the sensor’s aperture, measured in watts/(meter squared * ster * 𝜇𝑚).
The 𝐾1 and 𝐾2 constants for Landsat sensors are provided in the following table.
Thermal Conversion Constants for Landsat

Constant  Landsat 4*  Landsat 5*  Landsat 7**
𝐾1        671.62      607.76      666.09
𝐾2        1284.30     1260.56     1282.71

* from Chander & Markham (2003)
** from NASA (2011)
For Landsat 8, the 𝐾1 and 𝐾2 values are provided in the image metadata file.
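As a minimal sketch of the conversion, using the Landsat 5 TM constants from the table above and an illustrative radiance value:

```python
import math

# Landsat 5 TM thermal conversion constants (see table above).
K1, K2 = 607.76, 1260.56

def brightness_temperature(radiance):
    """At-satellite brightness temperature in kelvin: TB = K2 / ln(K1/L + 1)."""
    return K2 / math.log(K1 / radiance + 1)

# Illustrative at-sensor spectral radiance for the thermal band.
print(brightness_temperature(10.0))
```

The function is monotonically increasing in radiance, as expected: a hotter surface emits more thermal radiance.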
𝐾1 and 𝐾2 are calculated as (Jimenez-Munoz & Sobrino, 2010):
𝐾1 = 𝑐1 / 𝜆^5
𝐾2 = 𝑐2 / 𝜆
where (Mohr, Newell, & Taylor, 2015):
• 𝑐1 = first radiation constant = 1.191 * 10^(−16) W m2 sr^(−1)
• 𝑐2 = second radiation constant = 1.4388 * 10^(−2) m K
Therefore, for ASTER bands, 𝐾1 and 𝐾2 are provided in the following table.
Thermal Conversion Constants for ASTER

Constant  Band 10      Band 11      Band 12      Band 13      Band 14
𝐾1        3.024 * 10^3  2.460 * 10^3  1.909 * 10^3  8.900 * 10^2  6.464 * 10^2
𝐾2        1.733 * 10^3  1.663 * 10^3  1.581 * 10^3  1.357 * 10^3  1.273 * 10^3

4.4.2 Estimation of Land Surface Temperature
Several studies have described the estimation of Land Surface Temperature. Land Surface Temperature can be calculated from At-Satellite Brightness Temperature 𝑇𝐵 as (Weng, et al. 2004):
𝑇 = 𝑇𝐵 /[1 + (𝜆 * 𝑇𝐵 /𝑐2 ) * 𝑙𝑛(𝑒)]
where:
• 𝜆 = wavelength of emitted radiance
• 𝑐2 = ℎ * 𝑐/𝑠 = 1.4388 * 10−2 m K
• ℎ = Planck’s constant = 6.626 * 10−34 J s
• 𝑠 = Boltzmann constant = 1.38 * 10−23 J/K
• 𝑐 = velocity of light = 2.998 * 10^8 m/s
The values of 𝜆 for the thermal bands of Landsat and ASTER satellites can be calculated from the tables in Landsat Satellite (page 120) and ASTER Satellite (page 122).
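The formula can be sketched as follows; the wavelength is an approximate centre value for a Landsat TM thermal band and the emissivity is an illustrative grass value, both assumptions rather than scene-specific data:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m K

def land_surface_temperature(tb, wavelength, emissivity):
    """T = TB / (1 + (lambda * TB / c2) * ln(e)), after Weng et al. (2004)."""
    return tb / (1 + (wavelength * tb / C2) * math.log(emissivity))

# Assumed values: Landsat TM band 6 centre wavelength ~11.45 um,
# grass emissivity ~0.982.
print(land_surface_temperature(300.0, 11.45e-6, 0.982))
```

Since emissivity is below 1, ln(𝑒) is negative and the estimated surface temperature comes out slightly above the brightness temperature, which is the physically expected direction of the correction.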
Several studies used NDVI for the estimation of land surface emissivity (Sobrino, et al., 2004); other studies used a land cover classification for the definition of the land surface emissivity of each class (Weng, et al. 2004). For instance, the emissivity (𝑒) values of various land cover types are provided in the following table (from Mallick, et al. 2012).
Emissivity values

Land surface  Emissivity e
Soil          0.928
Grass         0.982
Asphalt       0.942
Concrete      0.937

4.5 References
• <NAME>. & <NAME>. 2003. Revised Landsat-5 TM radiometric calibration procedures and postcali-
bration dynamic ranges Geoscience and Remote Sensing, IEEE Transactions on, 41, 2674 - 2677
• <NAME>. 1996. Image-Based Atmospheric Corrections - Revisited and Improved Photogrammetric
Engineering and Remote Sensing, [Falls Church, Va.] American Society of Photogrammetry, 62, 1025-
1036
• <NAME>. and <NAME>., 2009. Assessing the Accuracy of Remotely Sensed Data: Principles and
Practices. Boca Raton, FL: CRC Press
• <NAME>.; <NAME>.; <NAME>. & <NAME>. 2015. MODIS Vegetation Index User’s Guide.
Collection 6, NASA
• ESA, 2015. Sentinel-2 User Handbook. Available at https://sentinel.esa.int/documents/247904/685211/
Sentinel-2_User_Handbook
• <NAME>., <NAME>, and <NAME>. 2012. A Straight Forward Guide for Pro-
cessing Radiance and Reflectance for EO-1 ALI, Landsat 5 TM, Landsat 7 ETM+, and
ASTER. Unpublished Report from USGS/Center of Excellence for Geospatial Information Science,
8 p, http://cegis.usgs.gov/soil_moisture/pdf/A%20Straight%20Forward%20guide%20for%20Processing%
20Radiance%20and%20Reflectance_V_24Jul12.pdf
• <NAME>. and <NAME>., eds. 2005. Representing GIS. Chichester, England: John Wiley & Sons
• JARS, 1993. Remote Sensing Note. Japan Association on Remote Sensing. Available at http://www.
jars1974.net/pdf/rsnote_e.html
• <NAME>. & <NAME>. 2010. A Single-Channel Algorithm for Land-Surface Temperature
Retrieval From ASTER Data IEEE Geoscience and Remote Sensing Letters, 7, 176-179
• <NAME>., <NAME>. and <NAME>., 2012. Satellite Image Pansharpening Using a Hybrid Approach
for Object-Based Image Analysis ISPRS International Journal of Geo-Information, 1, 228. Available at
http://www.mdpi.com/2220-9964/1/3/228
• <NAME>., et al., 1993. The Spectral Image Processing System (SIPS) - Interactive Visualization and
Analysis of Imaging spectrometer. Data Remote Sensing of Environment
• <NAME>.; <NAME>.; <NAME>.; <NAME>. & <NAME>. 2012. Land surface emissivity re-
trieval based on moisture index from LANDSAT TM satellite data over heterogeneous surfaces of Delhi
city International Journal of Applied Earth Observation and Geoinformation, 19, 348 - 358
• <NAME>.; <NAME>. & <NAME>., 2015. CODATA Recommended Values of the Fundamental Physical Constants: 2014. National Institute of Standards and Technology, Committee on Data for Science and Technology.
• <NAME>.; <NAME>.; <NAME>. & <NAME>., 1992. Evaluation of simplified procedures for retrieval of land surface reflectance factors from satellite sensor output. Remote Sensing of Environment, 41, 169-184.
• NASA (Ed.), 2011. Landsat 7 Science Data Users Handbook. Landsat Project Science Office at NASA’s Goddard Space Flight Center in Greenbelt, 186 p. http://landsathandbook.gsfc.nasa.gov/pdfs/Landsat7_Handbook.pdf
• NASA, 2013. Landsat 7 Science Data User’s Handbook. Available at http://landsathandbook.gsfc.nasa.gov
• <NAME>. and <NAME>., 1973. Information Extraction, SNR Improvement, and Data Compression in Multispectral Imagery. IEEE Transactions on Communications, 21, 1123-1131.
• <NAME>. and <NAME>., 2006. Remote Sensing Digital Image Analysis: An Introduction. Berlin, Germany: Springer.
• <NAME>.; <NAME>. & <NAME>., 2004. Land surface temperature retrieval from LANDSAT TM 5. Remote Sensing of Environment, 90, 434-440.
• USGS, 2015. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Level 1 Precision Terrain Corrected Registered At-Sensor Radiance Product (AST_L1T). AST_L1T Product User’s Guide. USGS EROS Data Center.
• <NAME>.; <NAME>. & <NAME>., 2015. MODIS Surface Reflectance User’s Guide. Collection 6, NASA.
• <NAME>.; <NAME>. & <NAME>., 2004. Estimation of land surface temperature–vegetation abundance relationship for urban heat island studies. Remote Sensing of Environment, 89, 467-483.
CHAPTER 5
Basic Tutorials
The following are very basic tutorials for land cover classification using the Semi-Automatic Classification Plugin (SCP). It is assumed that you have basic knowledge of QGIS (you can find a guide to the QGIS interface at this page).
5.1 Tutorial 1
The following is a very basic tutorial for land cover classification using the Semi-Automatic Classification Plugin (SCP). It is assumed that you have basic knowledge of QGIS.
• Tutorial 1: Your First Land Cover Classification (page 143)
– Data (page 144)
– Set the Input Image in SCP (page 144)
– Create the Training Input File (page 145)
– Create the ROIs (page 145)
– Create a Classification Preview (page 151)
– Create the Classification Output (page 151)
5.1.1 Tutorial 1: Your First Land Cover Classification
This is a basic tutorial about the use of SCP for the classification of a multi-spectral image. It is recommended to read the Brief Introduction to Remote Sensing (page 117) before this tutorial.
The purpose of the classification is to identify the following land cover classes:
1. Water;
2. Built-up;
3. Vegetation;
4. Bare soil.
The following is the video of this tutorial.
http://www.youtube.com/watch?v=GFrDgQ6Nzqs
Data
Download the image from this archive (data available from the U.S. Geological Survey) and unzip the downloaded file.
The downloaded file is actually a Landsat Satellite (page 120) image (pan-sharpened) including the following bands:
1. Blue;
2. Green;
3. Red;
4. Near-Infrared;
5. Short Wavelength Infrared 1;
6. Short Wavelength Infrared 2.
In this tutorial we pretend this dataset is a generic multi-spectral raster in order to focus on the classification process (in the next tutorial we are going to use an image whose bands are single rasters).
Set the Input Image in SCP
Start QGIS. In the SCP input (page 28) click the button of the Input image (page 28), in order to select the file sample_image.tif. Once selected, sample_image.tif is set as Input image, the image is displayed in the map and bands are loaded in the Band set (page 96).
We can display a Color Composite (page 123) of bands Near-Infrared, Red, and Green: in the Working toolbar (page 23), click the list RGB= and select the item 4-3-2 (corresponding to the band numbers in the Band set (page 96)). You can see that the image colors in the map change according to the selected bands, and vegetation is highlighted in red (if the item 3-2-1 were selected, natural colors would be displayed).
Fig. 5.1: Color composite RGB=4-3-2
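The mechanics of a color composite can be illustrated with a short, hypothetical snippet (this is not SCP code): each selected band is contrast-stretched and stacked into the red, green, and blue channels, assuming the bands are already loaded as NumPy arrays.

```python
import numpy as np

def composite(nir, red, green):
    """Stack three bands into an 8-bit RGB array, e.g. the 4-3-2 composite.

    Each band gets a 2-98 percentile linear stretch before stacking,
    so that extreme pixel values do not wash out the display."""
    def stretch(band):
        lo, hi = np.percentile(band, (2, 98))
        return np.clip((band - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
    return np.dstack([stretch(nir), stretch(red), stretch(green)])
```

With the Near-Infrared band in the red channel, vegetation (highly reflective in NIR) appears red, which is exactly the effect described above.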
Create the Training Input File
Now we need to create the Training input (page 28) in order to collect Training Areas (page 126) (ROIs) and calculate their Spectral Signatures (page 120) (which are used in classification).
In the SCP dock (page 25) click the button and define a name (e.g. training.scp ) in order to create the Training input. The path of the file is displayed in Training input. A vector is added to QGIS layers with the same name as the Training input (in order to prevent data loss, you should not edit this layer using QGIS functions).
Fig. 5.2: Definition of Training input in SCP
Create the ROIs
We are going to create ROIs defining the Classes and Macroclasses (page 127). Each ROI identifies a land cover class through a Class ID. The Class ID codes used in this tutorial are illustrated in the following table (for now we assign the same code to Class ID and Macroclass ID).
Macroclasses
Class name    Class ID
Water         1
Built-up      2
Vegetation    3
Bare soil     4
ROIs can be created by manually drawing a polygon or with an automatic region growing algorithm.
Zoom in on the map over the dark area (it is a lake) in the lower right region of the image. In order to manually create a ROI inside the dark area, click the button in the Working toolbar (page 23) (you can ignore a message about the wavelength unit not being provided). Left click on the map to define the ROI vertices and right click to define the last vertex, closing the polygon. An orange semi-transparent polygon is displayed over the image, which is a temporary polygon (i.e. it is not saved in the Training input).
TIP: You can draw temporary polygons (the previous one will be overridden) until the shape covers the intended area.
If the shape of the temporary polygon is good we can save it to the Training input.
Open the Classification dock (page 29) to define the Classes and Macroclasses (page 127). In the ROI creation (page 30) set MC ID = 1 and MC Info = Water; also set C ID = 1 and C Info = Lake. Now click to save the ROI in the Training input.
Fig. 5.3: A temporary ROI created manually
After a few seconds, the ROI is listed in the ROI Signature list (page 29) and the spectral signature is calculated (because Calculate sig. was checked).
As you can see, the C ID in ROI creation (page 30) is automatically increased by 1. The saved ROI is displayed as a dark polygon in the map and the temporary ROI is removed. Also, in the ROI Signature list (page 29) you can notice that the Type is B, meaning that the ROI spectral signature was calculated and saved in the Training input.
Now we are going to create a second ROI for the built-up class using the automatic region growing algorithm.
Zoom in on the map over the blue area in the upper left region of the image.
In the Working toolbar (page 23) set the Dist value to 0.08. Click the button in the Working toolbar (page 23) and click over the blue area of the map. After a while, the orange semi-transparent polygon is displayed over the image.
TIP: The Dist value should be set according to the range of pixel values; in general, increasing this value creates larger ROIs.
In the ROI creation (page 30) set MC ID = 2 and MC Info = Built-up; also set C ID = 2 (it should already be set) and C Info = Buildings.
Again, the C ID in ROI creation (page 30) is automatically increased by 1.
Create a ROI for the class Vegetation (red pixels in color composite RGB=4-3-2) and a ROI for the class Bare soil (green pixels in color composite RGB=4-3-2) following the same steps described previously. The following images show a few examples of these classes identified in the map.
The following examples display a few RGB color composites of Landsat images.
Fig. 5.4: The ROI saved in the Training input
Fig. 5.5: A temporary ROI created with the automatic region growing algorithm
Fig. 5.6: The ROI saved in the Training input
Fig. 5.7: Vegetation
Fig. 5.8: Bare soil
Fig. 5.9: Built-up ROI: large buildings
Fig. 5.10: Built-up ROI: road
Fig. 5.11: Built-up ROI: buildings and narrow roads
Fig. 5.12: Bare soil ROI: uncultivated land
Fig. 5.13: Vegetation ROI: deciduous trees
Fig. 5.14: Vegetation ROI: crop
Create a Classification Preview
The classification process is based on collected ROIs (and spectral signatures thereof). It is useful to create a Classification preview (page 24) in order to assess the results (influenced by spectral signatures) before the final classification. In case the results are not good, we can collect more ROIs to better classify land cover.
Before running a classification (or a preview), set the color of land cover classes that will be displayed in the classification raster. In the ROI Signature list (page 29), double click the color (in the column Color) of each ROI to choose a representative color of each class.
Fig. 5.15: Definition of class colors
Now we need to select the classification algorithm. In this tutorial we are going to select the Spectral Angle Mapping (page 130).
In Classification algorithm (page 33) select the Spectral Angle Mapping Algorithm (page 33). In Classification preview (page 24) set Size = 500; click the button and then left click a point of the image in the map. The classification process should be rapid, and the result is a classified square centered on the clicked point.
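The idea behind Spectral Angle Mapping can be sketched in a few lines (a simplified illustration, not SCP's implementation): each pixel is assigned the class of the reference signature that forms the smallest spectral angle with it.

```python
import numpy as np

def sam_classify(image, signatures, class_ids):
    """Assign each pixel the class of the nearest signature by spectral angle.

    image: array of shape (rows, cols, bands)
    signatures: array of shape (n_classes, bands) with mean ROI signatures
    class_ids: the class code for each signature"""
    rows, cols, bands = image.shape
    flat = image.reshape(-1, bands).astype(float)
    sigs = np.asarray(signatures, float)
    # cosine of the angle between every pixel vector and every signature
    cos = (flat @ sigs.T) / (
        np.linalg.norm(flat, axis=1, keepdims=True) * np.linalg.norm(sigs, axis=1))
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return np.asarray(class_ids)[angles.argmin(axis=1)].reshape(rows, cols)
```

Because the angle ignores vector magnitude, this classifier is relatively insensitive to overall brightness differences, which is one reason it is a common first choice.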
Previews are temporary rasters (deleted after QGIS is closed) placed in a group named Class_temp_group in the QGIS panel Layers.
TIP: When loading a previously saved QGIS project, a message could ask you how to handle missing layers; these are temporary layers that SCP creates during each session and deletes afterwards, so you can click Cancel and ignore them.
In general, it is good to perform a classification preview every time a ROI (or a spectral signature) is added to the ROI Signature list (page 29). Therefore, the phases Create the ROIs (page 145) and Create a Classification Preview (page 151) should be iterative and concurrent processes.
Create the Classification Output
Assuming that the results of the classification previews were good (i.e. pixels are assigned to the correct class defined in the ROI Signature list (page 29)), we can perform the actual land cover classification of the whole image.
Fig. 5.16: Classification preview displayed over the image
In the Classification output (page 34) click the button and define the path of the classification output, which is a raster file (.tif). If Play sound when finished is checked in Classification process (page 104) settings, a sound is played when the process is finished.
Fig. 5.17: Result of the land cover classification
Well done! You have just performed your first land cover classification.
Water and vegetation are correctly identified. However, you can see that there are several classification errors (especially soil classified as built-up and vice versa), because the number of ROIs (spectral signatures) is insufficient.
We can improve the classification using some of the tools described in the next tutorial.
Fig. 5.18: Example of error: Bare soil classified as Built-up
5.2 Tutorial 2
• Tutorial 2: Land Cover Classification of Sentinel-2 Images (page 153)
– Data Download (page 154)
– Automatic Conversion to Surface Reflectance (page 156)
– Clip the Data (page 158)
– Create the Band Set (page 160)
– Create the ROIs (page 160)
– Create a Classification Preview (page 166)
– Assess Spectral Signatures (page 168)
– Create the Classification Output (page 170)
5.2.1 Tutorial 2: Land Cover Classification of Sentinel-2 Images
This tutorial describes the main phases for the classification of images acquired by the Sentinel-2 Satellite (page 122).
In addition, some of the SCP tools are illustrated.
We are going to classify the following land cover classes:
1. Water;
2. Built-up;
3. Vegetation;
4. Bare soil.
The following is the video of this tutorial.
http://www.youtube.com/watch?v=FcETq8OWM0k
Data Download
We are going to download a Sentinel-2 image provided by the Copernicus Scientific Data Hub. In particular we are going to use the following Sentinel-2 bands (for more information read Sentinel-2 Satellite (page 122)):
• Band 2 - Blue;
• Band 3 - Green;
• Band 4 - Red;
• Band 5 - Vegetation Red Edge;
• Band 6 - Vegetation Red Edge;
• Band 7 - Vegetation Red Edge;
• Band 8 - NIR;
• Band 8A - Vegetation Red Edge;
• Band 11 - SWIR;
• Band 12 - SWIR.
TIP: In case you have a slow internet connection you can download a subset of the image (about 50MB) from this link (© Copernicus Sentinel data 2016), which is the result of the steps Data Download (page 154) and Clip the Data (page 158).
Start a new QGIS project. Open the tab Download images (page 37) by clicking the button in the SCP menu (page 21), or the SCP Tools (page 22), or the SCP dock (page 25). Select the tab Sentinel-2 download (page 41).
We are searching for a specific image acquired on May 06, 2016.
In Login Sentinels (page 41) enter the user name and password for accessing data (free registration is required).
WARNING: The guest/guest account is not available anymore. Free registration is required. See https://scihub.copernicus.eu/news/News00097
In Search area (page 43) enter:
• UL X (Lon): 12
• UL Y (Lat): 42
• LR X (Lon): 13
• LR Y (Lat): 41
TIP: In general it is possible to define the area coordinates by clicking the button and drawing a rectangle in the map.
In Search (page 43) set:
• Date from: 2016-05-06
• to: 2016-05-06
Now click the button Find and after a few seconds the image will be listed in the Image list.
TIP: Download this zip file containing the shapefile of Sentinel-2 granules for identifying the zone; load this shapefile in QGIS, select the granules in your search area and open the attribute table to see the zone name.
In the result table, click the item T32TQM in the field Zone, which is the Granule S2A_OPER_MSI_L1C_TL_SGS__20160506T153005_A004552_T32TQM, and click the button . A preview will be downloaded and displayed in the map, which is useful for assessing the quality of the image and the cloud cover.
Fig. 5.19: Search Sentinel-2 images
Fig. 5.20: Sentinel-2 search result
TIP: It is also possible to display the image overview (which is composed of several granules) with the button.
Fig. 5.21: Image preview
Click the tab Download options (page 44) and uncheck bands 1, 9, and 10. Also, uncheck the options Preprocess images (usually this should be checked, but for the purpose of this tutorial we are going to preprocess images in the step Automatic Conversion to Surface Reflectance (page 156)) and Load bands in QGIS (because we are going to clip the images).
TIP: The option Only if preview in Layers allows for downloading only the images in the result table that are loaded as previews in the map. It is convenient to check this option and remove the image previews from the QGIS layer list, leaving only the previews of the images that one wishes to download.
In order to start the image download, click the button and select a directory where bands are saved (e.g. Desktop). The download could last a few minutes according to your internet connection speed (band size ranges from 30 to 90MB). The download progress is displayed in a bar.
After the download, all the bands and the metadata files are saved in the output directory.
Automatic Conversion to Surface Reflectance
Conversion to reflectance (see Radiance and Reflectance (page 120)) can be performed automatically. The metadata file (a .xml file whose name contains MTD_SAFL1C) downloaded with the images contains the required information for the conversion. Read Image conversion to reflectance (page 136) for information about the Top Of Atmosphere (TOA) Reflectance (page 136) and Surface Reflectance (page 137).
In order to convert bands to reflectance, open the tab Preprocessing (page 64) clicking the button in the SCP menu (page 21), or the SCP Tools (page 22), or the SCP dock (page 25), and select the tab Sentinel-2 (page 68).
Click the button Directory containing Sentinel-2 bands and select the directory that should be named S2A_OPER_MSI_L1C_TL_SGS__20160506T153005_A004552_T32TQM. The list of bands is automatically loaded in the table Metadata (page 68). Also, the metadata information for each band is loaded (because the metadata file is inside the same directory).
Fig. 5.22: Selection of bands for download
Fig. 5.23: Download of Sentinel bands
TIP: If a Sentinel-2 image was downloaded directly from the site https://scihub.copernicus.eu and you want to convert images to reflectance using SCP, you should copy the .xml file whose name contains MTD_SAFL1C (included in the granule directory) and paste it inside the same directory as the bands (.jp2 files).
In order to calculate Surface Reflectance (page 137) we are going to apply the DOS1 Correction (page 137); therefore, enable the option Apply DOS1 atmospheric correction.
TIP: It is recommended to perform the DOS1 atmospheric correction on the entire image (before clipping the image) in order to improve the calculation of parameters based on the image.
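The DOS1 idea can be sketched with the standard dark-object subtraction formulation; this is an illustrative snippet, not SCP's implementation, and the inputs (radiance, the darkest pixel value l_min, the exo-atmospheric irradiance esun, the sun elevation) are here passed by hand rather than read from the image metadata.

```python
import math

def dos1_reflectance(radiance, l_min, esun, sun_elev_deg, d=1.0):
    """Surface reflectance via DOS1 dark-object subtraction.

    Path radiance is estimated from the darkest pixel (l_min), assuming
    dark objects have a true reflectance of 1%; d is the Earth-Sun
    distance in astronomical units."""
    cos_theta = math.cos(math.radians(90.0 - sun_elev_deg))
    # radiance that a 1%-reflectance target would produce (transmittance = 1)
    l_1pct = 0.01 * esun * cos_theta / (math.pi * d ** 2)
    l_path = l_min - l_1pct
    return math.pi * (radiance - l_path) * d ** 2 / (esun * cos_theta)
```

Note how, by construction, the darkest pixel itself comes out with a reflectance of exactly 0.01, which explains why a DOS1-corrected image never contains fully black land pixels.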
Uncheck the option Create Band set and use Band set tools because we are going to define this in the following step Create the Band Set (page 160). In order to start the conversion process, click the button and select the directory where converted bands are saved (e.g. Desktop).
Fig. 5.24: Sentinel-2 conversion to reflectance
After a few minutes, converted bands are loaded and displayed (file names start with RT_). If Play sound when finished is checked in Classification process (page 104) settings, a sound is played when the process is finished.
Clip the Data
Sentinel-2 images have a large extent. In order to reduce the computational time, we are going to clip bands to the same study area as Tutorial 1: Your First Land Cover Classification (page 143). Open the tab Preprocessing (page 64) and select the tab Clip multiple rasters (page 72).
Click the button to refresh the layer list, and check all the layers whose name starts with RT_ (the band number is at the end of the layer name).
Fig. 5.25: Converted Sentinel-2 bands
Click the button and select an area such as the following image, or enter the following values:
• UL X: 791810
• UL Y: 4643020
• LR X: 809750
• LR Y: 4626230
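Behind the scenes, clipping amounts to converting map coordinates like the ones above into a raster pixel window. A minimal sketch, assuming a north-up GDAL-style geotransform (origin x, pixel width, 0, origin y, 0, negative pixel height):

```python
def bbox_to_pixel_window(geotransform, ul_x, ul_y, lr_x, lr_y):
    """Convert map-coordinate corners to a pixel window (col, row, ncols, nrows).

    geotransform follows the GDAL convention for a north-up raster:
    (origin_x, pixel_width, 0, origin_y, 0, -pixel_height)."""
    origin_x, px_w, _, origin_y, _, px_h = geotransform  # px_h is negative
    col = int((ul_x - origin_x) / px_w)
    row = int((ul_y - origin_y) / px_h)
    ncols = int((lr_x - ul_x) / px_w)
    nrows = int((lr_y - ul_y) / px_h)
    return col, row, ncols, nrows
```

The window is then used to read only that block from each band, which is why clipping first makes every later processing step faster.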
Fig. 5.26: Clip area
Click the button and select a directory (e.g. clip) where clipped bands are saved (with the file name prefix defined in Output name prefix). When the process is completed, clipped rasters are loaded and displayed. We can remove the bands whose names start with RT_ from the QGIS layers.
Fig. 5.27: Clipped bands
Create the Band Set
Now we need to define the Band set, which is the input image for SCP. Open the tab Band set (page 96) by clicking the button in the SCP menu (page 21), or the SCP Tools (page 22), or the SCP dock (page 25).
Click the button to refresh the layer list, and check all the clipped bands; then click to add the selected rasters to the Band set. In the table Band set definition, order the band names in ascending order (click to sort bands by name automatically), then highlight band 8A (i.e. single click on the band name in the table) and use the buttons or to place this band at number 8. Finally, select Sentinel-2 from the list Quick wavelength settings, in order to automatically set the Center wavelength of each band and the Wavelength unit (required for spectral signature calculation).
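The reordering step (band 8A slotted right after band 8, before band 11) can be sketched in code; the file naming pattern used here (suffixes _B02, _B8A, _B11, ...) is a hypothetical example, not necessarily the exact names produced by the download tool.

```python
def order_band_set(band_files):
    """Sort Sentinel-2 band rasters by band label, placing band 8A after band 8.

    Assumes hypothetical names ending in _B<label>.<ext>, e.g. _B02.jp2, _B8A.jp2."""
    def key(name):
        label = name.rsplit("_B", 1)[1].split(".")[0]  # e.g. "02", "8A", "11"
        number = int(label.rstrip("A").lstrip("0") or "0")
        # sort by band number; the "A" variant comes after the plain band
        return (number, label.endswith("A"))
    return sorted(band_files, key=key)
```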
You can notice that the item << band set >> is selected as Input image (page 28) in the SCP dock (page 25).
Create the ROIs
In order to collect ROIs we need to Create the Training Input File (page 145) as described in Tutorial 1: Your First Land Cover Classification (page 143) (in the SCP dock (page 25) click the button and define a file name).
The Training input stores the ROIs and the Spectral Signature (page 120) thereof.
We are going to create several ROIs using the Macroclass IDs defined in the following table (see Classes and Macroclasses (page 127)).
Fig. 5.28: Definition of a band set
Fig. 5.29: Band set
Fig. 5.30: Definition of Training input in SCP
Macroclasses
Macroclass name    Macroclass ID
Water              1
Built-up           2
Vegetation         3
Bare soil          4
In this phase we are creating the database of spectral signatures used to identify land cover classes (the ones defined as macroclasses). However, these macroclasses are composed of several materials having different spectral signatures; in order to achieve good classification results we should separate spectral signatures of different materials, even if belonging to the same macroclass. Thus, we are going to create several ROIs for each macroclass (setting the same MC ID, but assigning a different C ID to every ROI).
In the list RGB= of the Working toolbar (page 23) select 3-2-1 to display a natural color image (see Color Composite (page 123) and Sentinel-2 Satellite (page 122)). After a few seconds, the Color Composite (page 123) will be displayed. We can see that urban areas are white and vegetation is green.
TIP: If a Band set (page 96) is defined, a temporary virtual raster (named band_set.vrt) is created automatically, which allows for the display of the Color Composite (page 123). In order to speed up the visualization, you can show only the virtual raster and hide all the layers in the QGIS Layers panel.
Now in the list RGB= of the Working toolbar (page 23) type 3-7-10 (you can also use the tool RGB list (page 62)). Using this color composite, urban areas are purple and vegetation is green. You can notice that this color composite RGB = 3-7-10 highlights roads more than natural color (RGB = 3-2-1). Also, you can see that there are clouds in the right part of the image.
Now create the ROIs following the same steps described in Create the ROIs (page 145) of Tutorial 1: Your First Land Cover Classification (page 143). After clicking the button in the Working toolbar (page 23) you should notice that the cursor in the map displays a value changing over the image. This is the NDVI value of the pixel beneath the cursor (NDVI is displayed because the function Display is checked in ROI creation (page 30)).
The NDVI value can be useful for identifying spectrally pure pixels; in fact, vegetation has higher NDVI values than soil.
For instance, move the mouse over a vegetation area and left click to create a ROI when you see a local maximum value. This way, the created ROI and its spectral signature will be particularly representative of healthy vegetation.
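For reference, the NDVI shown at the cursor is computed per pixel from the Near-Infrared and Red bands; a minimal sketch:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values approach 1 over dense healthy vegetation and drop toward 0
    (or below) over soil, built-up areas, and water."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids division by zero
```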
The color composite RGB = 7-3-2 is also useful for highlighting vegetation.
Create several ROIs (the more the better). The region growing algorithm can create more homogeneous ROIs (i.e. the standard deviation of the spectral signature values is low) than manually drawn ones; the manual creation of ROIs can be useful in order to account for the spectral variability of classes (especially when using the algorithm Maximum Likelihood (page 129)).
Fig. 5.31: Color composite RGB = 3-2-1
Fig. 5.32: Color composite RGB = 3-7-10
Fig. 5.33: NDVI value of a vegetation pixel displayed in the map
In general, you should create one ROI for each color that you can distinguish in the image. Therefore, change the color composite in order to identify the different types of land cover.
TIP: Change the Color Composite (page 123) frequently in order to clearly identify the materials on the ground; use the mouse wheel on the list RGB= of the Working toolbar (page 23) to change the color composite rapidly; also use the buttons and for better displaying the Input image (i.e. image stretching).
A few examples of ROIs are illustrated in the following figures.
Fig. 5.34: Water ROI: lake
Fig. 5.35: Built-up ROI: large buildings
Fig. 5.36: Built-up ROI: road
Fig. 5.37: Built-up ROI: buildings and narrow roads
Fig. 5.38: Vegetation ROI: deciduous trees
Fig. 5.39: Vegetation ROI: crop
Fig. 5.40: Bare soil ROI: uncultivated land
It is worth mentioning that you can show or hide the temporary ROI by clicking the button ROI in the Working toolbar (page 23).
TIP: Install the plugin QuickMapServices in QGIS, and add a map (e.g. OpenStreetMap) in order to facilitate the identification of ROIs using high resolution data.
We can also try to mask clouds in the image, creating ROIs of clouds and assigning the special MC ID = 0
(which is an ID used for labelling intentionally unclassified pixels) and a different C ID. In fact spectral signatures with the MC ID = 0 are normally used in the classification, but every pixel assigned to these spectral signatures is labelled unclassified in the classification result. Therefore, this is a simple way for masking particular spectral signatures such as clouds (of course there are more advanced methods for masking clouds that will be discussed in other tutorials).
Fig. 5.41: Example of ROI for clouds
Create a Classification Preview
As pointed out in Tutorial 1: Your First Land Cover Classification (page 143), previews are temporary classifications that are useful for assessing the effects of spectral signatures during the ROI collection.
Set the colors of the spectral signatures in the ROI Signature list (page 29); then, in the Classification algorithm
(page 33) select the classification algorithm Maximum Likelihood (page 129). In Classification preview (page 24)
set Size = 500; click the button and then left click a point of the image in the map.
The classification preview is displayed in the map.
In order to create a classification preview using Macroclass IDs, check the option MC ID in the tab Classification algorithm (page 33) of the SCP dock (page 25). In the tab Macroclasses (page 32) of the SCP dock (page 25) change the colors of the MC IDs (in the table Macroclasses (page 32) double click the color of each macroclass to choose a representative color).
Now click the button in the Working toolbar (page 23) to calculate a new preview in the same area as the previous one. In the following figure you can notice that there are fewer classes (only the MC IDs) than in the previous preview; also, clouds are unclassified (black pixels).
TIP: In the Working toolbar (page 23) click the button Preview to easily show or hide the classification previews, and the button RGB= to show or hide the Input image.
Fig. 5.42: Example of preview using C IDs
Fig. 5.43: Colors of MC IDs
Fig. 5.44: Example of preview using MC IDs
Assess Spectral Signatures
Spectral signatures are used by Classification Algorithms (page 128) for labelling image pixels. Different materials may have similar spectral signatures (especially considering multispectral images), such as built-up and soil. If the spectral signatures used for classification are too similar, pixels could be misclassified, because the algorithm is unable to discriminate those signatures correctly. Thus, it is useful to assess the Spectral Distance (page 133) of signatures to find similar spectral signatures that must be removed. Of course, the concept of distance varies according to the algorithm used for classification.
One can simply assess spectral signature similarity by displaying a signature plot. In order to display the signature plot, in the ROI Signature list (page 29) highlight two or more spectral signatures (by clicking in the table), then click the button. The Spectral Signature Plot (page 107) is displayed in a new window. Move and zoom inside the Plot (page 109) to see whether signatures are similar (i.e. very close). We can see in the following figure a signature plot of different materials.
In the plot we can see the line of each signature (with the color defined in the ROI Signature list (page 29)), and the spectral range (minimum and maximum) of each band (i.e. the semi-transparent area colored like the signature line). The larger the semi-transparent area of a signature, the higher the standard deviation, and therefore the heterogeneity of the pixels that compose that signature. Spectral signature values are displayed in the Signature details (page 109).
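The mean signature and its per-band standard deviation are simple statistics over the ROI pixels; a sketch, assuming the ROI pixels are provided as a (n_pixels, n_bands) array:

```python
import numpy as np

def roi_signature(pixels):
    """Mean spectral signature and per-band standard deviation of a ROI.

    pixels: array of shape (n_pixels, n_bands). The mean is the signature
    line in the plot; the standard deviation gives the width of the
    semi-transparent area around it."""
    pixels = np.asarray(pixels, float)
    return pixels.mean(axis=0), pixels.std(axis=0)
```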
Additionally, we can calculate the spectral distances of signatures (for more information see Spectral Distance (page 133)). Highlight two or more spectral signatures by clicking in the table Plot Signature list (page 108), then click the button; distances will be calculated for each pair of signatures. Now open the tab Spectral distances (page 111); we can notice that the similarity between signatures varies according to the considered algorithm.
For instance, two signatures can be very similar for Spectral Angle Mapping (page 130) (very low Spectral Angle (page 134)), but quite distant for the Maximum Likelihood (page 129) (Jeffries-Matusita Distance (page 133) value near 2). The similarity of signatures is affected by the similarity of materials (in relation to the number of spectral bands available in the Input image); also, the way we create ROIs influences the signatures.
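The two distances compared here can be sketched for a pair of signatures (simplified illustrations of the standard formulations, not SCP's implementation): the spectral angle depends only on the direction of the mean signatures, while the Jeffries-Matusita distance also uses the class covariances and saturates at 2.

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (radians) between two mean signatures; 0 = same direction."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """Jeffries-Matusita distance 2*(1 - exp(-B)), B = Bhattacharyya distance.

    Values near 2 indicate signatures that Maximum Likelihood separates well."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    c = (cov1 + cov2) / 2.0
    d = mu1 - mu2
    b = d @ np.linalg.solve(c, d) / 8.0 + 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return 2.0 * (1.0 - np.exp(-b))
```

Note that a signature scaled by a constant has a spectral angle of zero with the original, while its Jeffries-Matusita distance can still be large: this is exactly the situation described in the paragraph above.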
Fig. 5.45: Spectral plot
Fig. 5.46: Spectral signature values
Fig. 5.47: Spectral distances
Create the Classification Output
Repeat iteratively the phases Create the ROIs (page 160), Create a Classification Preview (page 166), and Assess Spectral Signatures (page 168) until the classification previews show good results.
To start the classification of the entire image, open the tab Classification output (page 34), click the button and define the name of the classification output.
TIP: Set the Available RAM (MB) in RAM (page 104) settings in order to reduce the computational time; the recommended value is half of the system RAM.
If Play sound when finished is checked in Classification process (page 104) settings, a sound is played when the process is finished.
It is worth mentioning that SCP provides other tools and techniques that can improve the classification results,
which are described in Thematic Tutorials (page 173).
5.3 NASA ARSET Webinar
NASA ARSET is a program for fostering the acquisition and use of NASA satellite data for supporting decisions, through online webinars and in-person workshops.
NASA ARSET offered the webinar Land Cover Classification with Satellite Imagery, which covered very interesting objectives such as accessing and downloading Landsat imagery and learning the basic steps for performing a supervised classification using the SCP.
This webinar is organized in two sessions:
• Introduction to Land Cover Classification and QGIS
• Improving a Supervised Land Cover Classification
Fig. 5.48: Classification
Fig. 5.49: NASA ARSET website
The entire webinar is very informative, and I recommend watching the recordings. In particular, the Land Cover Signature Classification (page 131) is illustrated during the exercise of the second session.
Slides of presentations, in English and Spanish, and the recordings of both sessions are freely available at this link https://arset.gsfc.nasa.gov/land/webinars/advanced-land-classification.
Many thanks to NASA ARSET for their effort in teaching remote sensing using open source software.
After these tutorials, please check the Thematic Tutorials (page 173) .
CHAPTER 6
Thematic Tutorials

The following are thematic tutorials. Before these tutorials, it is recommended to read the Basic Tutorials
(page 143).
6.1 Tutorial: Land Cover Signature Classification
• Create the Band Set (page 173)
• Create the ROIs and Define the Spectral Thresholds (page 175)
• Land Cover Classification (page 179)
• Other Tutorials (page 181)
This tutorial is about the Land Cover Signature Classification (page 131). It is assumed that one has basic knowledge of SCP and the Basic Tutorials (page 143).
The following is the video of this tutorial.
http://www.youtube.com/watch?v=wUr5ZjpWBo0

First download the sample image from this link (© Copernicus Sentinel data 2016), which is a Sentinel-2 image, and unzip the file.
6.1.1 Create the Band Set

Open the tab Band set (page 96), click the button and select the bands of the downloaded Sentinel-2 image. In the table Band set definition order the band names in ascending order (click to sort bands by name automatically), then highlight band 8A (i.e. single click on the band name in the table) and use the buttons or to place this band at number 8. Finally, select Sentinel-2 from the list Quick wavelength settings, in order to set automatically the Center wavelength of each band and the Wavelength unit (required for spectral signature calculation).
Fig. 6.1: Band set definition
6.1.2 Create the ROIs and Define the Spectral Thresholds

In the SCP dock (page 25) click the button and define a file name for the Training input. We are going to create ROIs similarly to Tutorial 2: Land Cover Classification of Sentinel-2 Images (page 153).
We are going to use the following Macroclass IDs (see Classes and Macroclasses (page 127)).
Macroclasses

Macroclass name    Macroclass ID
Water              1
Built-up           2
Vegetation         3
Bare soil          4

In addition, we can mask clouds in the image, creating ROIs of clouds and assigning the special MC ID = 0.
In the list RGB= of Working toolbar (page 23) define a Color Composite (page 123) such as RGB = 3-2-1 or RGB = 7-3-2.
Fig. 6.2: Color composite

Now create some ROIs. ROIs are used in Land Cover Signature Classification (page 131) for defining a spectral region. The Land Cover Signature Classification (page 131) can use additional classification algorithms for pixels falling inside overlapping regions or outside any spectral region (in this tutorial we are going to use Minimum Distance (page 129)), therefore it is important that ROIs are homogeneous in order to correctly train the additional algorithm. Following the ROI creation we are going to change the signature thresholds in the LCS threshold (page 60).
Fig. 6.3: ROI creation

After the ROI creation, in the ROI Signature list (page 29) highlight these spectral signatures, then click the button.
Fig. 6.4: Signature plot

Spectral signatures are displayed with their respective colors; also, the semi-transparent area represents the spectral range of each ROI. The minimum and maximum values of these spectral ranges are displayed in the Plot Signature list (page 108). You can manually edit these ranges or use the tools Automatic thresholds (page 108). It is worth noticing that the same spectral ranges (of spectral signatures in ROI Signature list (page 29)) are displayed in the Signature threshold (page 59).
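As described above, LCS assigns a pixel to a class only when the pixel value falls inside the signature's minimum–maximum range in every band. A minimal sketch of that membership rule (the range values below are made up for illustration):

```python
def lcs_match(pixel, sig_min, sig_max):
    """True if the pixel value lies inside the spectral range in every band."""
    return all(lo <= v <= hi for v, lo, hi in zip(pixel, sig_min, sig_max))

# Hypothetical water range over three bands
water_min = [0.02, 0.01, 0.00]
water_max = [0.08, 0.06, 0.04]

print(lcs_match([0.05, 0.03, 0.02], water_min, water_max))  # True
print(lcs_match([0.05, 0.03, 0.10], water_min, water_max))  # False: outside band 3
```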
In Classification algorithm (page 33) select Use LCS to use the Land Cover Signature Classification (page 131). Now create a classification preview over the lake (see Create a Classification Preview (page 166)).
Fig. 6.5: Classification preview

You can see that several pixels are unclassified (black) because they are outside any spectral range. In the Plot Signature list (page 108) highlight a signature of macroclass Water and click the button From pixel. This tool allows you to extend the spectral range to include a pixel signature. Click an unclassified pixel in the map over the lake; you should see that the spectral range of the highlighted signature is larger now. Click the button in the Working toolbar (page 23).
Now the area classified as water is larger and should include the pixel that was clicked before. Create a temporary ROI over the unclassified area of the lake and click the button From ROI .
This way, the spectral range is extended to include the minimum and maximum value of this ROI for each band.
Creating another classification preview we can see that the classified area is extended according to the temporary ROI.
You can extend the spectral range to classify the whole lake as water.
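Conceptually, From pixel and From ROI just widen the per-band minimum and maximum of the highlighted signature so that the new values fall inside the range. A small sketch of that idea (all values are illustrative):

```python
def extend_range(sig_min, sig_max, pixel):
    """Widen the per-band [min, max] range so the given pixel signature falls inside."""
    new_min = [min(lo, v) for lo, v in zip(sig_min, pixel)]
    new_max = [max(hi, v) for hi, v in zip(sig_max, pixel)]
    return new_min, new_max

# A pixel below the minimum in band 1 and above the maximum in band 2
lo, hi = extend_range([0.02, 0.01], [0.08, 0.06], [0.01, 0.07])
print(lo, hi)  # [0.01, 0.01] [0.08, 0.07]
```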
TIP: During ROI creation, click the button in Working toolbar (page 23) and right click on the
map to display the spectral signature of a pixel in the Spectral Signature Plot (page 107). This
can be useful for assessing unclassified pixels and extending one or more spectral ranges.
Fig. 6.6: Classification preview
Fig. 6.7: Signature plot: the spectral range is extended
Fig. 6.8: Signature plot: the spectral range is extended
Fig. 6.9: Classification preview
Particular attention should be paid to the spectral similarity of classes. For instance, soil and built-up can have very similar spectral signatures. Therefore, several ROIs should be collected in the attempt to separate these classes.
Spectral ranges should not overlap in order to avoid unclassified pixels. In the following figure, two signatures have overlapping ranges (it means that potentially there is a signature whose values fall in two classes); these signatures are highlighted in orange in the Plot Signature list (page 108) (also in the LC Signature threshold
(page 60)) and the combinations MC ID - C ID of overlapping signatures are displayed in the column Color
[overlap MC_ID-C_ID].
Fig. 6.10: Overlapping signatures

It is possible to reduce the range with the button From ROI or From pixel if the checkbox – is checked. In this case, the range is reduced to exclude the values of selected pixels or ROIs.
In addition, it is possible to edit the range directly from the plot. In the Plot Signature list (page 108) highlight a signature, click the button , then click inside the plot to extend or reduce the range. As a general procedure,
you should compare spectral signatures and identify one or more values that could separate the overlapping ranges
(if spectral ranges are not overlapping at least in one band then classes are correctly separated).
In case two spectral regions belonging to different classes are overlapping, you should consider reducing the ranges, collecting other spectral signatures with reduced ranges, or extending the spectral range of one signature to include the range of the other spectral signature that will be deleted. For instance, it could be convenient to create two spectral ranges (with two spectral signatures) for the same class in order to easily separate a third spectral signature whose values are comprised between the minimum and maximum values of the other two ranges.
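The separation rule stated above (classes are correctly separated if their spectral ranges do not overlap in at least one band) can be expressed as a simple per-band check. A minimal sketch (the range values are made up):

```python
def ranges_overlap(min_a, max_a, min_b, max_b):
    """True if the two per-band ranges intersect in every band,
    i.e. the signatures would be flagged as overlapping."""
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b))

# Separated in band 2, so not overlapping overall
print(ranges_overlap([0.1, 0.1], [0.2, 0.2], [0.15, 0.25], [0.3, 0.3]))  # False
# Intersecting in both bands, so flagged as overlapping
print(ranges_overlap([0.1, 0.1], [0.2, 0.2], [0.15, 0.15], [0.3, 0.3]))  # True
```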
TIP : Check the Automatic plot to display automatically the plot of a temporary ROI in the
Spectral Signature Plot (page 107), and assess the spectral range before saving the ROI.
Fig. 6.11: The plot of a temporary ROI

Now check MC ID in Classification algorithm (page 33). When MC ID is checked, the classification is performed using all the spectral signatures (without any modification of original spectral values) but assigning
the macroclass code. Moreover, only overlapping signatures belonging to different macroclasses are highlighted in Plot Signature list (page 108). This allows spectral signatures sharing the same MC ID to be overlapping.
Fig. 6.12: Overlapping regions belonging to the same MC ID

Also, open the tab LCS threshold (page 60) for checking the overlap of all the spectral signatures saved in the Training input.
6.1.3 Land Cover Classification

After the creation of several ROIs and the definition of spectral ranges, we can perform the classification for the whole image.
Having selected MC ID and LCS in Classification algorithm (page 33), click the button in the Classification output (page 34) and select an output destination. After the processing, the classification will be displayed in QGIS.
Unclassified pixels, displayed in black, are pixels whose spectral signature is not completely contained in any spectral region. Also, pixels contained in more than one spectral region (having different MC ID) are classified as Class Overlap.
We could create other spectral regions in order to classify all the unclassified pixels. Alternatively, we can use the selected Algorithm (page 33) for classifying those pixels. Check the Algorithm in Land Cover Signature Classification (page 34) and select the Minimum Distance (page 129) in Algorithm (page 33); then click the button
in the Classification output (page 34).
Pixels that were unclassified by LCS are now classified using the Minimum Distance (page 129), which calculates the Euclidean distance between pixels and spectral signatures. Black pixels are clouds classified using the special MC ID = 0.
In addition, we can use the Minimum Distance (page 129) to classify only pixels that were labelled Class Overlap by LCS, leaving unclassified pixels whose spectral signature is not completely contained in any spectral region.
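As mentioned above, Minimum Distance assigns each pixel to the spectral signature with the smallest Euclidean distance. A minimal sketch of that rule (the mean signatures below are hypothetical):

```python
import math

def minimum_distance(pixel, signatures):
    """Return the class ID whose mean signature is closest (Euclidean) to the pixel."""
    def dist(mean):
        return math.sqrt(sum((v - m) ** 2 for v, m in zip(pixel, mean)))
    return min(signatures, key=lambda class_id: dist(signatures[class_id]))

# Hypothetical mean signatures per Macroclass ID
means = {1: [0.05, 0.03, 0.02],   # water
         2: [0.20, 0.25, 0.30]}   # built-up
print(minimum_distance([0.06, 0.04, 0.03], means))  # 1
```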
Check only overlap in Land Cover Signature Classification (page 34), leaving checked Algorithm; then
Fig. 6.13: LCS threshold. Overlapping regions are highlighted in orange
Fig. 6.14: LCS classification
Fig. 6.15: LCS classification. Class Overlap
Fig. 6.16: LCS classification. Classification using the additional classification algorithm

click the button in the Classification output (page 34).
Fig. 6.17: LCS classification. Classification using the additional classification algorithm only for Class Overlap

The Land Cover Signature Classification (page 131) can be useful for the classification of a single land cover class, defining only the spectral ranges that identify our objective. For instance, if we were interested in built-up classification only, we could collect only ROIs for this class, obtaining a classification such as in the following image.
6.1.4 Other Tutorials

For other tutorials visit the blog From GIS to Remote Sensing.
Fig. 6.18: LCS classification. Classification of the class Built-up

6.2 Tutorial: Estimation of Land Surface Temperature with Landsat and ASTER
• Data Download and Conversion (page 183)
• Clip to Study Area (page 185)
• Land Cover Classification (page 188)
• Reclassification of Land Cover Classification to Emissivity Values (page 190)
• Conversion from At-Satellite Temperature to Land Surface Temperature (page 190)
• Data Download and Conversion of ASTER Image (page 193)
• Clip to Study Area of ASTER image (page 195)
• Land Cover Classification of ASTER Image (page 196)
• Reclassification of Land Cover Classification to Emissivity Values of ASTER Image (page 198)
• Conversion from At Satellite Temperature to Land Surface Temperature of ASTER Image (page 199)
• Other Tutorials (page 201)
This tutorial is about the estimation of land surface temperature using Landsat Satellite (page 120) and ASTER Satellite (page 122) images. In this tutorial we are going to use a land cover classification for the definition of surface emissivity, which is required for the calculation of the land surface temperature. It is assumed that one has basic knowledge of SCP and the Basic Tutorials (page 143).
Our study area will be Paris (France), an area covered by urban surfaces, vegetation and agricultural fields.
Before downloading data, please watch the following video that illustrates the study area and provides very useful information about thermal infrared images, and their application (footage courtesy of European Space Agency/ESA). Also, a brief description of the area that we are going to classify is available here .
http://www.youtube.com/watch?v=Vjg5REQb-Bc

The thermal infrared band is particularly useful for assessing the temperature difference between the city and the surrounding rural areas, and studying the urban heat island phenomenon. We are going to use Landsat and ASTER images for the estimation of land surface temperature. For more information about the conversion of raster bands please read Conversion to At-Satellite Brightness Temperature (page 140).

The following is the video of this tutorial.
http://www.youtube.com/watch?v=7W4IwlvPLbQ
6.2.1 Data Download and Conversion

We are going to download the Landsat 8 image acquired in 2015 (image ID = LC81990262015270LGN00,
data available from the U.S. Geological Survey).
Start a new QGIS project. Open the tab Download images (page 37) clicking the button in the SCP menu
(page 21), or the SCP Tools (page 22), or the SCP dock (page 25). Select the tab Landsat download (page 37).
In Login https://ers.cr.usgs.gov/ (page 37) you should enter the user name and password for accessing data (free registration at USGS EROS is required) in User and Password. However, in this case login should not be required because this Landsat 8 image is available directly from the Amazon Web Services (AWS) .
In Search area (page 39) enter:
• UL X (Lon): 2
• UL Y (Lat): 49
• LR X (Lon): 2.5
• LR Y (Lat): 48.8
TIP : In general it is possible to define the area coordinates clicking the button and drawing
a rectangle in the map.
In Search (page 39) select L8 OLI/TIRS from the list Satellites and set the acquisition date:
• Date from: 2015-09-27
• to: 2015-09-27

Now click the button Find and after a few seconds the image will be listed in the Image list.
In the result table, click the item LC81990262015270LGN00 in the field ImageID, and click the button .
A preview will be downloaded and displayed in the map, which is useful for assessing the quality of the image and the cloud cover.
Click the tab Download options (page 40) and leave checked only the following bands:
• 2 = Blue
• 3 = Green
• 4 = Red
• 5 = Near-Infrared
• 6 = Short Wavelength Infrared 1
• 7 = Short Wavelength Infrared 2
• 10 = Thermal Infrared (TIRS) 1

Bands from 2 to 7 will be used for the land cover classification, and band 10 for the estimation of land surface temperature (see Why using only Landsat 8 band 10 in the estimation of surface temperature? (page 212)).
The checkbox Preprocess images allows for the automatic conversion of bands after the download, according to the settings defined in Landsat (page 64); we are going to apply the DOS1 Correction (page 137). Bands from 2 to 7 will be converted to reflectance and band 10 will be converted to At-Satellite Brightness Temperature. Open the tab Landsat (page 64), check Apply DOS1 atmospheric correction and uncheck Create Band set and use Band set tools (we are going to create the Band set after the clip of the image to study area).
Fig. 6.19: Landsat search result
Fig. 6.20: Image preview
Fig. 6.21: Selection of bands for download

In order to start the download and conversion process, open the tab Landsat download (page 37), click the button
and select the directory where converted bands are saved (e.g. Desktop). After a few minutes, converted bands are loaded and displayed (file name starts with RT_).
6.2.2 Clip to Study Area

We are going to clip the Landsat images to our study area.
Open the tab Preprocessing (page 64) clicking the button in the SCP menu (page 21), or the SCP Tools
(page 22), or the SCP dock (page 25). Select the tab Clip multiple rasters (page 72) and click the button to refresh the layer list and show the loaded rasters. Click the button to select all the rasters to be clipped, and in Clip coordinates (page 72) type the following values:
• UL X: 402705
• UL Y: 5461065
• LR X: 480824
• LR Y: 5381535

Now click the button and select the directory where clipped bands are saved (e.g. Desktop). Clipped bands have the prefix clip_ and will be automatically loaded and displayed. We can remove the bands whose names start with RT_ from QGIS layers.
Fig. 6.22: Conversion settings
Fig. 6.23: Converted Landsat bands
Fig. 6.24: Clip area
Fig. 6.25: Clipped bands
6.2.3 Land Cover Classification

Now we need to classify land cover, which will be used later for the creation of the emissivity raster. For detailed instructions about the classification process please see Tutorial 2: Land Cover Classification of Sentinel-2 Images (page 153).
We are going to use the following Macroclass IDs (see Classes and Macroclasses (page 127)).
Macroclasses

Macroclass name    Macroclass ID
Water              1
Built-up           2
Vegetation         3
Bare soil          4

Open the tab Band set (page 96) clicking the button and define the Landsat 8 Band set using clipped bands from 2 to 7 (excluding band 10).
Fig. 6.26: Band set

In the SCP dock (page 25) click the button, define a file name for the Training input. In the list RGB= of Working toolbar (page 23) select 4-3-2 to display a false color composite corresponding to the bands: Near-Infrared, Red, and Green (see Color Composite (page 123)).
After the creation of several ROIs for each land cover class, we can perform the classification of the whole image
(see Tutorial 2: Land Cover Classification of Sentinel-2 Images (page 153)). After setting the colors of MC ID (in the tab Macroclasses (page 32) of the SCP dock (page 25)), in the tab Classification algorithm (page 33) check the option MC ID to use Macroclass IDs and select the classification algorithm Maximum Likelihood (page 129).
Fig. 6.27: Color composite
Fig. 6.28: Classification algorithm
Then, open the tab Classification output (page 34), click the button and define the name of the classification output.
Fig. 6.29: Land cover classification

6.2.4 Reclassification of Land Cover Classification to Emissivity Values

Now we are going to reclassify the classification raster using the land surface emissivity values. The emissivity (e) values for the land cover classes are provided in the following table (values used in this tutorial are only indicative, because the emissivity of every material should be obtained from field survey):
Emissivity values

Land surface    Emissivity e
Water           0.98
Built-up        0.94
Vegetation      0.98
Bare soil       0.93

Open the tab Postprocessing (page 78) clicking the button in the SCP menu (page 21), or the SCP Tools
(page 22), or the SCP dock (page 25). Select the tab Reclassification (page 86) and click the button to refresh the layer list and show the loaded rasters. Select the classification raster from the list.
Click the button to add 4 rows to the table Values. In this table, set the old value (the Macroclass ID of the classification) and the new value (the corresponding emissivity e ) for every land cover class. Uncheck the checkbox Use code from Signature list, click the button and define the name of the output raster (e.g.
emissivity.tif).
This is the emissivity raster, where each pixel has the emissivity value that we have defined for the respective land cover class.
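The reclassification step is a simple lookup from Macroclass ID to emissivity value. A minimal sketch of the same operation (the 2×2 raster below is illustrative; the table values are the ones from this tutorial):

```python
# Classification raster as nested lists of Macroclass IDs
# (1 = Water, 2 = Built-up, 3 = Vegetation, 4 = Bare soil)
classification = [[1, 2],
                  [3, 4]]

# Old value -> new value pairs from the reclassification table
emissivity_table = {1: 0.98, 2: 0.94, 3: 0.98, 4: 0.93}

# Replace every class ID with its emissivity value
emissivity = [[emissivity_table[class_id] for class_id in row]
              for row in classification]
print(emissivity)  # [[0.98, 0.94], [0.98, 0.93]]
```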
6.2.5 Conversion from At-Satellite Temperature to Land Surface Temperature

Now we are ready to convert the At-Satellite Brightness Temperature to Land Surface Temperature, using the following equation (see Estimation of Land Surface Temperature (page 140)):
T = TB / [ 1 + ( λ * TB / c2 ) * ln(e) ]
where:
• 𝜆 = wavelength of emitted radiance
Fig. 6.30: Reclassification
Fig. 6.31: Emissivity raster
• c2 = h * c / s = 1.4388 * 10^-2 m K = 14388 µm K
• h = Planck's constant = 6.626 * 10^-34 J s
• s = Boltzmann constant = 1.38 * 10^-23 J/K
• c = velocity of light = 2.998 * 10^8 m/s

The values of λ for Landsat bands are listed in the following table.
Center wavelength of Landsat bands

Satellite              Band    λ (µm)
Landsat 4, 5, and 7    6       11.45
Landsat 8              10      10.8
Landsat 8              11      12

Open the tab Band calc (page 92) clicking the button in the SCP menu (page 21), or the SCP Tools (page 22),
or the SCP dock (page 25). Click the button to refresh the layer list and show the loaded rasters. We have used band 10 of Landsat 8, therefore in the Expression (page 93) type the equation for conversion adapted to our rasters:
"clip_RT_LC81990262015270LGN00_B10.tif" / ( 1 + ( 10.8 * "clip_RT_
˓→LC81990262015270LGN00_B10.tif" / 14388 ) * ln("emissivity.tif") )
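The Band calc expression above applies the formula T = TB / [1 + (λ * TB / 14388) * ln(e)] pixel by pixel. The same arithmetic can be sketched in plain Python (the pixel value below is illustrative, not taken from the image):

```python
import math

def land_surface_temperature(tb_kelvin, emissivity,
                             wavelength_um=10.8, c2_um_k=14388.0):
    """Land Surface Temperature (K) from At-Satellite Brightness Temperature (K).

    wavelength_um is the band center wavelength (10.8 µm for Landsat 8 band 10);
    c2_um_k is the second radiation constant expressed in µm K.
    """
    return tb_kelvin / (1 + (wavelength_um * tb_kelvin / c2_um_k)
                        * math.log(emissivity))

tb = 295.0                   # hypothetical brightness temperature of a pixel
for e in (0.98, 0.93):       # water vs bare soil emissivity from the table
    lst = land_surface_temperature(tb, e)
    print(e, round(lst, 2), round(lst - 273.15, 2))  # emissivity, kelvin, Celsius
```

Since ln(e) is negative for emissivity below 1, the denominator is less than 1 and the land surface temperature is always slightly above the brightness temperature.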
Fig. 6.32: Calculation of surface temperature

Click the button and define the name of the output raster (e.g. surface_temperature.tif). After the calculation, the Land Surface Temperature (in kelvin) will be loaded, and we can change the layer style. In addition, in the tab Band calc (page 92) we can calculate the temperature in Celsius with the expression:
"surface_temperature.tif" - 273.15
Fig. 6.33: Land Surface Temperature of the Landsat Image

We can notice that the urban area and uncultivated land have the highest temperatures, while vegetation has the lowest temperature. The aim of this tutorial is to describe a methodology for the estimation of surface temperature using open source programs and free images. It is worth highlighting that in order to achieve more accurate results, one should perform a field survey for improving the land cover classification and the estimation of surface emissivities.
In addition to Landsat, we are going to use an ASTER image and use the same methodology for the estimation of Land Surface Temperature.
6.2.6 Data Download and Conversion of ASTER Image

Open the tab Download images (page 37) and select the tab ASTER download (page 46).
In Login https://urs.earthdata.nasa.gov (page 46) enter the user name and password required for accessing data
(free registration at EOSDIS Earthdata is required) in User and Password. The ASTER L1T data products are retrieved from the online Data Pool, courtesy of the NASA Land Processes Distributed Active Archive Center
(LP DAAC), USGS/Earth Resources Observation and Science (EROS) Center, Sioux Falls, South Dakota, https://lpdaac.usgs.gov/data_access/data_pool.
In Search area (page 47) enter:
• UL X (Lon): 2
• UL Y (Lat): 49
• LR X (Lon): 2.5
• LR Y (Lat): 48.8

In Search (page 47) set the acquisition date:
• Date from: 2000-08-24
• to: 2000-08-24

Now click the button Find and after a few seconds the image will be listed in the Image list.
In the result table, click the item AST_L1T_00308242000111313_20150411071856_3805 in the field ImageID, and click the button . A preview will be downloaded and displayed in the map, which is useful for assessing the quality of the image and the cloud cover.
As we did for Landsat, we are going to apply the DOS1 Correction (page 137). The checkbox Preprocess images allows for the automatic conversion of bands after the download, according to the settings defined in ASTER (page 69). Bands from 1 to 9 will be converted to reflectance and bands from 10 to 14 will be converted to At-Satellite Brightness Temperature.
Fig. 6.34: ASTER search result
Fig. 6.35: ASTER image preview
Open the tab ASTER (page 69), check Apply DOS1 atmospheric correction and leave checked Create Band set and use Band set tools (this is useful to automatically create a Band set, which is required for the next step).
Fig. 6.36: Conversion settings In order to start the download and conversion process, open the tab ASTER download (page 46), click the button
and select the directory where converted bands are saved (e.g. Desktop). After a few minutes, converted bands are loaded and displayed (file name starts with RT_).
Fig. 6.37: Converted ASTER bands

6.2.7 Clip to Study Area of ASTER image

We are going to clip the ASTER images to our study area, because bands are not aligned at the border.
Open the tab Preprocessing (page 64) clicking the button. Select the tab Clip multiple rasters (page 72) and click the button to refresh the layer list and show the loaded rasters. Click the button to select all the rasters to be clipped, and check Use temporary ROI for clipping. Now, we can draw a manual ROI (because a Band set is already defined, see ROI creation (page 30)) about the same shape of the ASTER image, about 20 pixels within the border thereof (in order to align the border of all the bands).
Now click the button and select the directory where clipped bands are saved (e.g. Desktop). Clipped bands have the prefix clip_ and will be automatically loaded and displayed. We can remove the bands whose names start with RT_ from QGIS layers.
Fig. 6.39: Clipped bands

6.2.8 Land Cover Classification of ASTER Image

Using the same Macroclass IDs used for Landsat, we are going to classify the ASTER image.
Open the tab Band set (page 96) clicking the button. Click the button to clear all bands from the Band set and define the ASTER Band set using the clipped bands from 1 to 9.
Fig. 6.40: ASTER band set
In the SCP dock (page 25) click the button, define a file name for the Training input (e.g. training_ASTER). Clear the table ROI Signature list (page 29) highlighting all the spectral signatures created previously for Landsat and clicking the button.
In the list RGB= of Working toolbar (page 23) select 3-2-1 to display a false color composite corresponding to the bands: Near-Infrared, Red, and Green (see Color Composite (page 123)).
Fig. 6.41: ASTER color composite

After the creation of several ROIs for each land cover class, we can perform the classification of the whole image. After setting the colors of MC ID (in the tab Macroclasses (page 32) of the SCP dock (page 25)), in the tab Classification algorithm (page 33) check the option MC ID to use Macroclass IDs and select the classification algorithm Maximum Likelihood (page 129). Then, open the tab Classification output (page 34), click the button and define the name of the classification output (e.g. classification_aster.tif).
Fig. 6.42: ASTER land cover classification

6.2.9 Reclassification of Land Cover Classification to Emissivity Values of ASTER Image

Now we are going to reclassify the classification raster using the same land surface emissivity values used for Landsat.
Open the tab Postprocessing (page 78) clicking the button . Select the tab Reclassification (page 86) and click the button to refresh the layer list and show the loaded rasters. Select the classification raster from the list.
Click the button to add 4 rows to the table Values. In this table, set the old value (the Macroclass ID of the classification) and the new value (the corresponding emissivity e) for every land cover class. Uncheck the checkbox Use code from Signature list, click the button and define the name of the output raster (e.g. emissivity_aster.tif).
Fig. 6.43: ASTER reclassification

The following figure shows the emissivity raster of the ASTER image.
Fig. 6.44: ASTER emissivity raster

6.2.10 Conversion from At-Satellite Temperature to Land Surface Temperature of ASTER Image

We can convert the At-Satellite Brightness Temperature to Land Surface Temperature, using the same equation used for Landsat (see Estimation of Land Surface Temperature (page 140)).
The values of 𝜆 for ASTER bands are listed in the following table.
Center wavelength of ASTER bands

Satellite    Band    λ (µm)
ASTER        10      8.3
ASTER        11      8.65
ASTER        12      9.1
ASTER        13      10.6
ASTER        14      11.3

We are going to use ASTER band 13 that has a λ value very similar to the Landsat band 10.
Open the tab Band calc (page 92) clicking the button. Click the button to refresh the layer list and show the loaded rasters, and in the Expression (page 93) type the equation for conversion adapted to our rasters:
"clip_RT_AST_L1T_00308242000111313_20150411071856_3805_13.tif" / ( 1 + ( 10.6 *
˓→"clip_RT_AST_L1T_00308242000111313_20150411071856_3805_13.tif" / 14388 ) * ln(
˓→"emissivity_aster.tif") )
Fig. 6.45: Calculation of surface temperature

Click the button and define the name of the output raster (e.g. surface_temperature_aster.tif).
After the calculation, the Land Surface Temperature (in kelvin) will be loaded, and we can change the layer style.
In addition, in the tab Band calc (page 92) we can calculate the temperature in Celsius with the expression:
"surface_temperature_aster.tif" - 273.15 The ASTER image shows temperature values higher than the Landsat image. For instance, we could perform the difference between the two surface temperature rasters (Landsat and ASTER) to assess the variation of tempera-
Semi-Automatic Classification Plugin Documentation, Release 5.3.6.1
Fig. 6.46: Land Surface Temperature of the ASTER Image ture. However, we should notice that the two images were acquired in different months (Landsat on 27-09-2015 and ASTER on 24-08-2000).
The large availability of Landsat and ASTER images for the past decades allows for the reliable monitoring of land cover and surface temperature. Nevertheless, cloud cover can limit the number of images that can be effectively used.
This tutorial illustrated a methodology of temperature estimation using these satellite images and open source programs. One should always consider that the estimation accuracy depends on several factors, such as the thematic and spatial accuracy of land cover classifications and the reliability of the emissivity values. Estimation errors can be of 1 K or even more. Other methods have been developed which can provide more accurate results, and the reader can continue the research.
6.2.11 Other Tutorials

For other tutorials, visit the blog From GIS to Remote Sensing, which includes tutorials such as:
• Flood Monitoring Using The Semi-Automatic Classification Plugin;
• Wildfire Monitoring Using The Semi-Automatic Classification Plugin;
• From Image Download to NDVI Calculation in One Move: SCP Batch;
For other unofficial tutorials, also in languages other than English, see Other tutorials about SCP, also in languages other than English? (page 217).
CHAPTER 7
Semi-Automatic OS

The Semi-Automatic OS is a lightweight virtual machine for the land cover classification of remote sensing images. It includes the Semi-Automatic Classification Plugin (SCP) for QGIS, already configured along with all the required dependencies, and installed through the official SCP repository (https://semiautomaticgit.github.io/SemiAutomaticClassificationPlugin/repository.xml), which always provides the latest version of SCP.

Fig. 7.1: Semi-Automatic OS desktop

The Semi-Automatic OS is based on Debian, and it is designed to require very few hardware resources. It uses LXDE and Openbox as its main desktop environment. This virtual machine can be useful for testing the Semi-Automatic Classification Plugin, or when the installation of the required programs in the host system is problematic.

The Semi-Automatic OS is available as a 32 bit and 64 bit virtual machine that can be run in the open source VirtualBox, or any other virtualization program. The following is a guide for the installation of the Semi-Automatic OS in VirtualBox.
7.1 Installation in VirtualBox
1. Download VirtualBox open source software (select a proper version depending on your OS) and install it; at the end of the installation restart the system;
2. Download the Semi-Automatic OS virtual machine (about 800 MB) from here (32 bit or 64 bit);
3. Extract the virtual machine content in a directory (it requires about 3 GB of disk space); the file is compressed in 7z format (if needed, download the open source extraction software from http://www.7-zip.org/);
4. Run VirtualBox and create a new Debian virtual machine;
(a) Click the New button;
(b) Type a name for the virtual machine (for instance Semi-Automatic OS); select Linux and Debian (32 or 64 bit) as Type and Version respectively; click Next;
(c) Set the memory size; more is better, but this parameter should not exceed half of the host system RAM (for instance, if the host system has 1 GB of RAM, type 512 MB); click Next;
(d) In the Hard drive settings select Use an existing virtual hard drive file and select the downloaded file SemiAutomaticOS.vmdk; click Create;
5. Start the Semi-Automatic OS by clicking the Start button;
6. It is recommended to install the virtualbox-guest-utils package in the virtual machine, from Menu > Preferences > Synaptic Package Manager; it allows for better integration of the Semi-Automatic OS in the host system, such as resizing of the system window, or folder sharing.
The Semi-Automatic OS includes a sample Landsat image dataset (available from the U.S. Geological Survey) and a Sentinel-2 image (© Copernicus Sentinel data 2016), which are the input for the two basic tutorials.
Semi-Automatic OS is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 3 of the License. Semi-Automatic OS is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
See http://www.gnu.org/licenses/.
CHAPTER 8
Frequently Asked Questions

If you have comments or questions please join the Facebook group or the Google+ Community.
Before asking, please check the official site From GIS to Remote Sensing and the following Frequently Asked Questions.
• Plugin installation (page 208)
– How to install the plugin manually? (page 208)
– How to install the plugin from the official SCP repository? (page 208)
• Pre processing (page 210)
– Which image bands should I use for a semi-automatic classification? (page 210)
– Which Landsat bands can be converted to reflectance by the SCP? (page 210)
– Can I apply the conversion to Sentinel-2 images downloaded from the web? (page 211)
– How are Sentinel-2 images that have different resolutions converted? (page 211)
– Can I apply the Landsat conversion and DOS correction to clipped bands? (page 211)
– Can I apply the DOS correction to bands with black border (i.e. with NoData value)? (page 211)
– How to remove cloud cover from images? (page 211)
– How do I create a virtual raster manually in QGIS? (page 211)
– After pan-sharpening of Landsat 8 images, why NIR bands still have 30m resolution? (page 212)
• Processing (page 212)
– I get classification errors. How can I improve the accuracy? (page 212)
– Is it possible to use the same training input for multiple images? (page 212)
– What is the difference between classes and macroclasses? (page 212)
– Can I use SCP with images from drones or aerial photographs? (page 212)
– Why using only Landsat 8 band 10 in the estimation of surface temperature? (page 212)
• Warnings (page 213)
– Warning [12]: The following signature will be excluded if using Maximum Likelihood. Why? (page 213)
• Errors (page 213)
– How can I report an error? (page 213)
– Virtual raster creation issues. Why? (page 214)
– Error [26] ‘The version of Numpy is outdated’. Why? (page 214)
– Error ‘Plugin is damaged. Python said: ascii’. Why? (page 215)
– Error [50] ‘Internet error’. Unable to download Sentinel-2 images. Why? (page 215)
– Error [56] ‘SSL connection error’. Unable to download Sentinel-2 images. Why? (page 215)
– This plugin is broken ‘matplotlib requires pyparsing >= 1.5.6’. Why? (page 216)
– Error installing the plugin, possible missing dependencies. Why? (page 216)
• Various (page 216)
– What can I do with the SCP? (page 216)
– How to contribute to SCP (page 217)
– Free and valuable resources about remote sensing and GIS (page 217)
– Other tutorials about SCP, also in languages other than English? (page 217)
– How can I translate this user manual to another language? (page 218)
– Where is the source code of SCP? (page 219)
8.1 Plugin installation

8.1.1 How to install the plugin manually?
The SCP can be installed manually (this can be useful when an internet connection is not available, or the installation is required on multiple computers), following a few steps:
1. download the SCP zip archive from https://github.com/semiautomaticgit/SemiAutomaticClassificationPlugin/archive/master.zip ;
2. extract the content of the archive (several files such as COPYING.txt and folders such as ui) in a new folder named SemiAutomaticClassificationPlugin (without -master);
3. open the QGIS plugins directory (in Windows usually C:\Users\username\.qgis2\python\plugins, in Linux and Mac usually /home/username/.qgis2/python/plugins/) and delete the folder SemiAutomaticClassificationPlugin if present;
4. copy the folder SemiAutomaticClassificationPlugin inside the QGIS plugins directory;
5. the plugin should be installed; start QGIS, open the Plugin Manager and be sure that Semi-Automatic Classification Plugin is checked.
8.1.2 How to install the plugin from the official SCP repository?
It is possible to install the SCP using the official repository. This repository allows for the installation of the latest version of SCP (master), in some cases even before it is available in the QGIS repository. Therefore, this can be useful if you need a fix or a new function that is not yet available in the QGIS repository. Moreover, the master version in the SCP repository can be installed alongside the version available in the QGIS repository.
In order to install the SCP repository follow these steps:
• Run QGIS 2;
• From the main menu, select Plugins > Manage and Install Plugins;
• Click Settings then click the button Add;
• Inside the Repository details enter:
  Name: SCP
  URL: https://semiautomaticgit.github.io/SemiAutomaticClassificationPlugin/repository.xml
  and click OK;
• After the repository update, the item Semi-Automatic Classification Plugin - master should be listed with the other plugins;
• From the menu All, select the Semi-Automatic Classification Plugin - master and click the button Install plugin; the latest version of SCP should be automatically activated (ignore errors; a restart of QGIS could be necessary to complete the SCP installation); it is possible to deactivate the other SCP installed from the QGIS repository;
8.2 Pre processing

8.2.1 Which image bands should I use for a semi-automatic classification?
In general, it is preferable to avoid thermal infrared bands. If you are using Landsat 4, 5 or 7 you should select bands 1, 2, 3, 4, 5, and 7, avoiding band 6, which is thermal infrared; for Landsat 8 you should select bands 2, 3, 4, 5, 6, and 7. Landsat 8 band 1 is generally avoided because it is very similar to the blue band and is mainly used for coastal aerosol studies. The Landsat thermal infrared band is excluded from classifications because its values are mainly related to object temperature.
For Sentinel-2 images you can use bands: 2, 3, 4, 5, 6, 7, 8, 8A, 11, 12.
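The recommended selections above can be summarised in a small lookup table, handy when scripting band lists (an illustrative constant, not an SCP data structure):

```python
# Bands usually selected for semi-automatic classification, per sensor
# (thermal and coastal-aerosol bands excluded, as discussed above).
CLASSIFICATION_BANDS = {
    "Landsat 4-7": ["1", "2", "3", "4", "5", "7"],   # band 6 (thermal) excluded
    "Landsat 8":   ["2", "3", "4", "5", "6", "7"],   # bands 1, 10, 11 excluded
    "Sentinel-2":  ["2", "3", "4", "5", "6", "7", "8", "8A", "11", "12"],
}

# e.g. the thermal band 6 is not used for Landsat 4-7 classifications
uses_thermal = "6" in CLASSIFICATION_BANDS["Landsat 4-7"]
```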
8.2.2 Which Landsat bands can be converted to reflectance by the SCP?
All Landsat 1, 2, and 3 MSS and Landsat 4, 5, 7, and 8 images downloaded from http://earthexplorer.usgs.gov/ and processed with the Level 1 Product Generation System (LPGS) can be converted to reflectance automatically by the SCP; products generated by the LPGS include an MTL file that is required for the conversion. Since version 3.1.1 the SCP can also convert images from the Global Land Cover Facility (images available for free from ftp://ftp.glcf.umd.edu/glcf/Landsat/). In particular, images having an old format of the MTL file (or a .met file) can be processed through the automatic conversion to reflectance and the DOS correction. However, some images do not have the required information and cannot be processed. Also, notice that some images available from the Global Land Cover Facility are already converted to reflectance. For this process, image bands must be renamed in order to remove the final 0 if present (e.g. rename B10 to B1).
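The renaming rule (drop a trailing 0 from old-format band labels, e.g. B10 → B1) can be sketched as follows; this is a hypothetical helper for batch renaming, not an SCP function:

```python
def strip_trailing_zero(band_name):
    """Rename old-format band labels by removing the final 0, e.g. 'B10' -> 'B1'.

    Labels already in the short form (e.g. 'B1') are left unchanged.
    """
    if band_name.startswith("B") and len(band_name) == 3 and band_name.endswith("0"):
        return band_name[:-1]
    return band_name
```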
8.2.3 Can I apply the conversion to Sentinel-2 images downloaded from the web?
Yes, you can also convert images downloaded from the web (in fact, the conversion is recommended). You should move all the bands (.jp2 files) and, if available, the .xml file whose name contains MDT_SAFL1C into the same directory. Then select this directory in Sentinel-2 conversion (page 68). Images are converted to reflectance.
8.2.4 How are Sentinel-2 images that have different resolutions converted?
During the conversion to reflectance, pixels of 20m bands are split into 4 pixels of 10m whose values are the same as the original 20m pixel. The purpose of this operation is to allow for calculation between all the bands, without changing the original values.
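This pixel splitting amounts to nearest-neighbour replication, which can be illustrated on a toy 2×2 band (illustrative code operating on nested lists, not SCP internals):

```python
def split_pixels_2x(band):
    """Split each 20 m pixel into four 10 m pixels carrying the same value."""
    out = []
    for row in band:
        expanded = [value for value in row for _ in range(2)]  # duplicate columns
        out.append(expanded)
        out.append(list(expanded))                             # duplicate rows
    return out

# A 2x2 grid of 20 m pixels becomes a 4x4 grid of 10 m pixels
split_pixels_2x([[1, 2], [3, 4]])
```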
8.2.5 Can I apply the Landsat conversion and DOS correction to clipped bands?
Yes, you can clip the images before the conversion to reflectance and then copy the MTL file (contained in the Landsat dataset) inside the directory with the clipped bands. If you want to apply the DOS correction (which is an image based technique) you should convert the original Landsat bands (the entire image) and then clip the conversion output (i.e. bands converted to reflectance).
8.2.6 Can I apply the DOS correction to bands with a black border (i.e. with NoData value)?
If you want to apply the DOS correction to an entire band which has NoData values (the black border with value = 0) then you have to check the checkbox Use NoData value and set the value to 0. This is because DOS is an image based technique, and NoData values must be excluded from the calculation.
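The need to exclude NoData can be seen in a toy dark-object search: DOS estimates path radiance from the darkest valid pixel, and a 0-valued border would wrongly be picked as the dark object (illustrative sketch under that assumption, not SCP code):

```python
def dark_object_value(pixels, nodata=0):
    """Return the dark-object value used by a DOS-style correction,
    ignoring pixels equal to the NoData value."""
    valid = [v for v in pixels if v != nodata]
    return min(valid)

# Without the NoData filter, the border zeros would become the dark object
pixels = [0, 0, 52, 60, 75, 120]
dark_object_value(pixels)
```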
8.2.7 How to remove cloud cover from images?
DOS1 correction does not remove clouds from the image. However, Landsat 8 images include Band 9, which identifies clouds (see this NASA site). You can use this band for the creation of a mask.
For other Landsat satellites, clouds can be masked using the approach described in this paper.
Also, see the following video-tutorial.
8.2.8 How do I create a virtual raster manually in QGIS?
In order to create a multi-spectral virtual raster in QGIS:
1. from the menu Raster select Miscellaneous > Build Virtual Raster (catalog);
2. click the button Select... and select all the Landsat bands (in numerical order);
3. select the output file (for instance rgb.vrt); check Separate (bands will be separated) and click OK.
8.2.9 After pan-sharpening of Landsat 8 images, why do NIR bands still have 30m resolution?
The Landsat 8 panchromatic band doesn't acquire in the Near Infrared (NIR) region (see Landsat Satellite (page 120)). Therefore, the pan-sharpening process can't improve the resolution of the NIR and SWIR bands (see Pan-sharpening (page 125)), which appear to have 30m resolution. However, all pan-sharpened rasters have 15m resolution to allow raster calculation.
8.3 Processing

8.3.1 I get classification errors. How can I improve the accuracy?
Several materials have similar spectral signatures (e.g. soil and built-up, or forest and other types of dense low vegetation), which can cause classification errors if ROIs, and the spectral signatures thereof, are not acquired correctly. In order to improve the results, you can try to collect more ROIs over these areas, in order to train the algorithm for these very similar areas; also, display the spectral signatures of these areas in Spectral Signature Plot (page 107) to assess their similarity. You can also use a Signature threshold (page 59) for these signatures in order to reduce their variability (only pixels very similar to the input signatures will be classified). The Land Cover Signature Classification (page 131) is also useful for classifying specific materials that can be spectrally similar to others.
8.3.2 Is it possible to use the same training input for multiple images?
Yes, it is possible if all the images have the same number of bands. However, if images are acquired in different months, land cover changes (especially of the vegetation state) will affect the spectral signatures (i.e. the same pixel has different spectral signatures in different periods). Atmospheric effects could also affect the images differently. That could reduce classification accuracy. Therefore, it is suggested to always collect the ROIs and spectral signatures for every image.
8.3.3 What is the difference between classes and macroclasses?
Please see Classes and Macroclasses (page 127).
8.3.4 Can I use SCP with images from drones or aerial photographs?
Yes, you can use them if they have at least 4 bands. With fewer than 4 bands, semi-automatic classification algorithms are unable to classify the land cover correctly. Alternative classification methods exist, such as object oriented classification, which is not implemented in SCP.
8.3.5 Why use only Landsat 8 band 10 in the estimation of surface temperature?
Several methods were developed for estimating surface temperature. The method described in the tutorial for temperature estimation requires only one band. Moreover, USGS recommends that users refrain from relying on Landsat 8 Band 11 data in quantitative analysis of the Thermal Infrared Sensor data (see Changes to Thermal Infrared Sensor (TIRS) data by USGS).
8.4 Warnings

8.4.1 Warning [12]: The following signature will be excluded if using Maximum Likelihood. Why?
The ROI is too small (or too homogeneous) for the Maximum Likelihood (page 129) algorithm because that ROI has a singular covariance matrix. You should create larger ROIs or don’t use the Maximum Likelihood algorithm in the classification process.
8.5 Errors

8.5.1 How can I report an error?
If you find an error in the Semi-Automatic Classification Plugin, please follow these steps in order to collect the required information (the log file):
1. close QGIS if already open;
2. open QGIS, open the Plugin tab Debug (page 106) and check the checkbox Records events in a log file;
Fig. 8.1: Debug
3. click the button Test dependencies in the tab Debug (page 106);
4. load the data in QGIS (or open a previously saved QGIS project) and repeat all the steps that cause the error in the Plugin;
• if the issue could be related to the image data, please use this sample dataset ;
5. if an error message appears (like the one in the following image), copy the whole content of the message in a text file;
Fig. 8.2: Error message
6. open the tab Debug (page 106) and uncheck the checkbox Records events in a log file, then click the button and save the log file (which is a text file containing information about the Plugin processes);
7. open the log file and copy the whole content of the file;
8. join the Facebook group or the Google+ community, create a new post and copy the error message and the log file (or attach them).
8.5.2 Virtual raster creation issues. Why?
The automatic creation of the virtual raster after Landsat conversion to reflectance is not required for the classification. Errors could happen if the output destination path contains special characters (such as accented letters) or spaces; try to rename the directories (e.g. rename new directory to new_directory). If you still get the same error you can create a virtual raster manually.
8.5.3 Error [26] ‘The version of Numpy is outdated’. Why?
QGIS 32bit could have an older version of Numpy by default; in order to update Numpy:
1. download this file (which is based on WinPython installer and PyParsing);
2. extract the file with 7-zip;
3. copy the content of the extracted directory inside the directory apps\Python27\Lib\site-packages inside the QGIS installation directory (e.g. C:\Program Files (x86)\QGIS Chugiak\apps\Python27\Lib\site-packages), overwriting the files of pyparsing, numpy, matplotlib, and scipy.
Alternatively, you should be able to install QGIS and Numpy with the OSGEO4W advanced installer.
8.5.4 Error ‘Plugin is damaged. Python said: ascii’. Why?
It could be related to a wrong installation. Please uninstall QGIS and install it again with administrative rights. Also delete the directory .qgis2 in your user directory. Then run QGIS 2 and try to install the plugin following the Plugin Installation (page 3) guide.
Also, it could be related to the user name containing special characters. Please try the installation creating a new user without special characters (e.g. user).
Also, if the error message contains something like: sfnt4 = sfnt4.decode('ascii').lower()
it could be related to a known issue of Matplotlib (a Python library); in order to solve this, you should (as reported at stackoverflow):
1. open in a text editor the file font_manager.py which is inside the directory C:\PROGRA~1\QGISCH~1\apps\Python27\lib\site-packages\matplotlib\
2. search for the line sfnt4 = sfnt4.decode('ascii').lower()
3. and replace it with the line sfnt4 = sfnt4.decode('ascii', 'ignore').lower()
Alternatively, try to install QGIS through the OSGEO4W installer, which includes an updated Matplotlib version.
8.5.5 Error [50] ‘Internet error’. Unable to download Sentinel-2 images. Why?
The error message usually includes some information about the issue. First, check the user name and password.
Also, there could be an interruption of the service. For Sentinel-2 images please check the website https://scihub.copernicus.eu/news/ for messages about the state of the service.
In case you still get the same error, please follow the steps in How can I report an error? (page 213).
8.5.6 Error [56] ‘SSL connection error’. Unable to download Sentinel-2 images. Why?
First, check the user name and password.
This issue could be related to the SSL protocols (TLS v1.1 and TLS v1.2) required for the Sentinel-2 download. As described at https://docs.python.org/2/library/ssl.html, the protocols TLS v1.1 and TLS v1.2 are available only in Python 2.7.9+ with OpenSSL version 1.0.1+. QGIS could have an earlier version of Python where TLS v1.1 and TLS v1.2 are not available. Therefore the Sentinel-2 download process fails.
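A quick way to check whether the Python interpreter bundled with QGIS supports these protocols is to inspect the ssl module (a diagnostic snippet, assumed to be run from the QGIS Python console):

```python
import ssl

# On Python builds linked against OpenSSL 1.0.1+ these attributes exist;
# on older builds (as bundled with some QGIS 32bit installers) they are missing.
has_tls_1_2 = hasattr(ssl, "PROTOCOL_TLSv1_2") or hasattr(ssl, "TLSVersion")
print("TLS v1.2 available:", has_tls_1_2)
```

If the check prints False, the workaround below (replacing the bundled Python) may help.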
A temporary solution for Windows OS:
Warning: this could break other QGIS functions, but fortunately you can install multiple versions of QGIS.
1. Close QGIS if open
2. Download and install Python for 32bit or for 64bit according to the installed version of QGIS
3. Copy and replace C:\python27\python.exe to "QGIS installation folder"\bin\ (e.g. C:\Program Files (x86)\QGIS Chugiak\bin\)
4. Copy and replace C:\python27\pythonw.exe to "QGIS installation folder"\bin\
5. Copy and replace all the content of C:\python27\ to "QGIS installation folder"\apps\python27\
6. Now start QGIS and if everything went well you should be able to search and download Sentinel-2 images using SCP.
In case you still get the same error, please follow the steps in How can I report an error? (page 213).
8.5.7 This plugin is broken ‘matplotlib requires pyparsing >= 1.5.6’. Why?
It is related to this issue https://hub.qgis.org/issues/14952, which should affect QGIS 32bit only. The installation of QGIS 64bit should work. As a solution you can install a previous version of QGIS 2.8 32bit.
8.5.8 Error installing the plugin, possible missing dependencies. Why?
The plugin requires the installation of GDAL, NumPy, SciPy and Matplotlib, which should be installed along with QGIS. If the plugin installation fails, and you get a message about possible missing dependencies, you should try to install or update QGIS and the required dependencies. Notice that in order to avoid this error, python dependencies should not be installed through Anaconda.
8.6 Various

8.6.1 What can I do with the SCP?
SCP allows for the land cover classification of remote sensing images through Supervised Classification (page 126). You can produce a land cover raster using one of the Classification Algorithms (page 128) available in SCP. These algorithms require spectral signatures or ROIs as input (for definitions please read Brief Introduction to Remote Sensing (page 117)) that define the land cover classes to be identified in the image.
Fig. 8.3: A multispectral image processed to produce a land cover classification
(Landsat image provided by USGS)
SCP can work with multispectral images acquired by satellites, airplanes, or drones. Also, SCP allows for the direct search and download of free images (see Download images (page 37)). You cannot use orthophotos with fewer than 4 bands, SAR data, or LIDAR data with SCP.
The input image in SCP is called a Band set (page 96), which is used as input for the classification. SCP provides several tools for the Preprocessing (page 64) of downloaded images, such as the conversion to reflectance and manipulation of bands.
Classification results can be assessed with the tools Accuracy (page 78) and Classification report (page 81).
Also, rasters can be manipulated using Postprocessing (page 78) tools such as Classification to vector (page 83), Reclassification (page 86), Edit raster (page 86) directly, Classification sieve (page 88), Classification erosion (page 90), and Classification dilation (page 91).
The Spectral Signature Plot (page 107) and Scatter Plot (page 112) allow for the analysis of spectral signatures and ROIs. Also, several Tools (page 51) are available for easing the ROI creation and editing spectral signatures.
Raster calculation is available through the seamless integration of the tool Band calc (page 92) with the bands in the Band set (page 96), calculating mathematical expressions and spectral indices. Also, an output raster can be calculated based on Decision rules (page 94).
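For example, a common Band calc use is a spectral index such as NDVI; the per-pixel arithmetic is simply the following (an illustrative function — the actual tool applies the expression to whole rasters):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectances."""
    return (nir - red) / (nir + red)

# Dense vegetation reflects strongly in the NIR, pushing NDVI towards 1;
# equal reflectances give 0.
ndvi(0.5, 0.1)
```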
The tool Batch (page 98) allows for the automatic execution of several SCP functions using a scripting interface.
See the Basic Tutorials (page 143) for more information and examples.
8.6.2 How to contribute to SCP

You can contribute to SCP by fixing and adding functionalities (see Where is the source code of SCP? (page 219)), or translating the user manual (see How can I translate this user manual to another language? (page 218)).
Also, you can donate to this project at the following link https://fromgistors.blogspot.com/p/donations.html .
8.6.3 Free and valuable resources about remote sensing and GIS

The following links are valuable resources:
• The Landsat 8 Data Users Handbook by USGS;
• The Landsat 7 Science Data Users Handbook by NASA;
• Remote Sensing Note by JARS.
• Webinar: Fundamentals of Remote Sensing by NASA.
• Webinar: NASA Remote Sensing for Land Management by NASA.
• Webinar: Creating and Using Normalized Difference Vegetation Index (NDVI) from Satellite Imagery by NASA.
• Webinar: Remote Sensing of Forest Cover and Change Assessment for Carbon Monitoring by NASA.
• Webinar: Introduction to Remote Sensing for Conservation Management by NASA.
8.6.4 Other tutorials about SCP, also in languages other than English?
There are several tutorials about SCP on the internet. The following is an incomplete list of these resources:
• French: Suivre l’impact des feux de forêts par imagerie satellite avec le plugin Qgis SCP;
• German: 2015 Jakob Erfassung von Landnutzungsveränderungen mit FOSS Image Processing Tools;
• Italian: Classificazione e Mosaico di Varie Immagini Landsat;
• Korean: QGIS Semi-Automatic Classification Plugin;
• Portuguese: Classificacao supervisionada de imagens Sentinel-2 com QGIS e SCP;
• Portuguese: Avaliação do erro de uma imagem de satélite usando o QGIS e o SCP;
• Portuguese: Conversão Sentinel-2 para refletância com QGIS SCP;
• Portuguese: Criar composições coloridas no QGIS com SCP;
• Portuguese: Corte de imagem Sentinel-2 usand QGIS e SCP;
• Portuguese: Classificação Supervisionada de Imagens Orbitais com o Semi-Automatic Classification Plugin;
• Portuguese: Tutorial Classificação e caracterização de imagens de satélites;
• Portuguese: Aprendizagem Supervisionada usando o SCP no QGIS;
• Portuguese: Classificação supervisionada utilizando o QGIS e SCP;
• Russian: Landsat Semi-Automatic Classification Plugin QGIS;
• Spanish: Ejercicio Clasificación Semiautomática Plugin (SCP);
• Spanish: Aplicaciones de Teledetección con el QGIS y el plugin Semi-Automatic Classification;
• Spanish: Descarga de Landsat 8, 7, 5 y 4 Semi Automatic Classification Plugin Qgis 2.8;
• Swedish: Landsat 8 och fjärranalys med QGIS;
• Ukrainian: Semi-Automatic Classification 5.0;
8.6.5 How can I translate this user manual to another language?
It is possible to easily translate the user manual to any language, because it is written in reStructuredText markup (using Sphinx). Therefore, your contribution is fundamental for the translation of the manual to your language. The following guide illustrates the main steps of the translation, which can be performed:
• using the free online service Transifex;
• using the gettext .po files.
Before translating, please read this document from the QGIS translation guide, which helps you understand the reStructuredText.
Method 1. Translation using the free online service Transifex

This is probably the easiest way to translate the manual, using an online service.
1. Join the Semi-automatic Classification Manual project
Go to the page https://www.transifex.com/semi-automatic-classification/semi-automatic-classification-plugin-manual and click the button Help translate.
You can sign in using your Google or Facebook account, or with a free registration.
2. Select your language
Select your language and click the button Join team. If your language is not listed, click the button Request language.
3. Translation
There are several files to be translated, which refer to the sections of the SCP documentation. To translate the SCP interface you should select the file semiautomaticclassificationplugin.ts .
Method 2. Translation using the gettext .po files

In order to use this method, you should be familiar with GitHub. This translation method allows for the translation of the .po files locally.
1. Download the translation files
Go to the GitHub project https://github.com/semiautomaticgit/SemiAutomaticClassificationManual_v4/tree/master/locale and download the .po files of your language (you can add your language, if it is not listed), or you can fork the repository.
Every .po file is a text file that refers to a section of the User Manual.
2. Edit the translation files
Now you can edit the .po files. It is convenient to edit those files using one of the following programs: for instance Poedit for Windows and Mac OS X, Gtranslator for Linux, or OmegaT (Java based) for Windows, Linux and Mac OS X. These editors allow for an easy translation of every sentence in the User Manual.
8.6.6 Where is the source code of SCP?

The source code of SCP is available at the following link: https://github.com/semiautomaticgit/SemiAutomaticClassificationPlugin
tf-encrypted 0.5.9 documentation
TF Encrypted API Docs
===
TF Encrypted is a framework for encrypted machine learning in [TensorFlow](https://www.tensorflow.org). It looks and feels like TensorFlow, taking advantage of the ease-of-use of the Keras API while enabling training and prediction over encrypted data. Under the hood, TF Encrypted integrates state-of-the-art cryptography like [secure multi-party computation](https://en.wikipedia.org/wiki/Secure_multi-party_computation) and [homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption). TF Encrypted aims to make privacy-preserving machine learning readily available, without requiring expertise in cryptography, distributed systems, or high performance computing.
TF Encrypted focuses on:
* **Usability**: The API and its underlying design philosophy make it easy to get started, use, and integrate privacy-preserving technology into pre-existing machine learning processes.
* **Extensibility**: The architecture supports and encourages experimentation and benchmarking of new cryptographic protocols and machine learning algorithms.
* **Performance**: Optimizing for tensor-based applications and relying on TensorFlow’s backend means runtime performance comparable to that of specialized stand-alone frameworks.
* **Community**: With a primary goal of pushing the technology forward the project encourages collaboration and open source over proprietary and closed solutions.
* **Security**: Cryptographic protocols are evaluated against strong notions of security and known limitations are highlighted.
This page only contains API documentation. Check out the [examples](https://github.com/tf-encrypted/tf-encrypted/tree/master/examples) on GitHub to learn how to get up and running with private machine learning.
You can view the project source, contribute, and ask questions on [GitHub](https://github.com/tf-encrypted/tf-encrypted).
License[¶](#license)
---
This project is licensed under the Apache License, Version 2.0 (see [License](https://github.com/mortendahl/tf-encrypted/blob/master/LICENSE)). Copyright as specified in the [NOTICE](https://github.com/mortendahl/tf-encrypted/blob/master/NOTICE) contained in the code base.
PyGSP 0.5.1 documentation
---
PyGSP: Graph Signal Processing in Python[¶](#pygsp-graph-signal-processing-in-python)
===
The PyGSP is a Python package to ease
[Signal Processing on Graphs](https://arxiv.org/abs/1211.0053).
It is free software, distributed under the BSD license, and available on [PyPI](https://pypi.python.org/pypi/PyGSP).
The documentation is available on
[Read the Docs](https://pygsp.readthedocs.io)
and development takes place on
[GitHub](https://github.com/epfl-lts2/pygsp).
(A [Matlab counterpart](https://lts2.epfl.ch/gsp) exists.)
The PyGSP facilitates a wide variety of operations on graphs, like computing their Fourier basis, filtering or interpolating signals, plotting graphs,
signals, and filters. Its core is spectral graph theory, and many of the provided operations scale to very large graphs. The package includes a wide range of graphs, from point clouds like the Stanford bunny and the Swiss roll;
to networks like the Minnesota road network; to models for generating random graphs like stochastic block models, sensor networks, Erdős–Rényi model,
Barabási-Albert model; to simple graphs like the path, the ring, and the grid.
Many filter banks are also provided, e.g. various wavelets like the Mexican hat, Meyer, Half Cosine; some low-pass filters like the heat kernel and the exponential window; and Gabor filters. Despite all the pre-defined models, you can easily use a custom graph by defining its adjacency matrix, and a custom filter bank by defining a set of functions in the spectral domain.
The following demonstrates how to instantiate a graph and a filter, the two main objects of the package.
```
>>> from pygsp import graphs, filters
>>> G = graphs.Logo()
>>> G.estimate_lmax()
>>> g = filters.Heat(G, tau=100)
```
Let’s now create a graph signal: a set of three Kronecker deltas for that example. We can now look at one step of heat diffusion by filtering the deltas with the above defined filter. Note how the diffusion follows the local structure!
```
>>> import numpy as np
>>> DELTAS = [20, 30, 1090]
>>> s = np.zeros(G.N)
>>> s[DELTAS] = 1
>>> s = g.filter(s)
>>> G.plot_signal(s, highlight=DELTAS, backend='matplotlib')
```
You can
[try it online](https://mybinder.org/v2/gh/epfl-lts2/pygsp/master?filepath=playground.ipynb),
look at the
[tutorials](https://pygsp.readthedocs.io/en/stable/tutorials/index.html)
to learn how to use it, or look at the
[reference guide](https://pygsp.readthedocs.io/en/stable/reference/index.html)
for an exhaustive documentation of the API. Enjoy the package!
Installation[¶](#installation)
---
The PyGSP is available on PyPI:
```
$ pip install pygsp
```
Note that you will need a recent version of `pip` and `setuptools`. Please run `pip install --upgrade pip setuptools` if you get any installation error.
Contributing[¶](#contributing)
---
See the guidelines for contributing in `CONTRIBUTING.rst`.
Acknowledgments[¶](#acknowledgments)
---
The PyGSP was started in 2014 as an academic open-source project for research purpose at the [EPFL LTS2 laboratory](https://lts2.epfl.ch).
This project has been partly funded by the Swiss National Science Foundation under grant 200021_154350 “Towards Signal Processing on Graphs”.
If you are using the library for your research, for the sake of reproducibility, please cite the version you used as indexed by
[Zenodo](https://doi.org/10.5281/zenodo.1003157).
### Tutorials[¶](#tutorials)
The following are some tutorials which explain how to use the toolbox and show how to use it to solve some problems.
#### Introduction to the PyGSP[¶](#introduction-to-the-pygsp)
This tutorial will show you the basic operations of the toolbox. After installing the package with pip, start by opening a python shell, e.g.
a Jupyter notebook, and import the PyGSP. We will also need NumPy to create matrices and arrays.
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from pygsp import graphs, filters, plotting
```
We then set default plotting parameters. We’re using the `matplotlib` backend to embed plots in this tutorial. The `pyqtgraph` backend is best suited for interactive visualization.
```
>>> plotting.BACKEND = 'matplotlib'
>>> plt.rcParams['figure.figsize'] = (10, 5)
```
##### Graphs[¶](#graphs)
Most likely, the first thing you would like to do is to create a [graph](https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)). In this toolbox, a graph is encoded as an adjacency, or weight, matrix. That is because it’s the most efficient representation to deal with when using spectral methods. As such, you can construct a graph from any adjacency matrix as follows.
```
>>> rs = np.random.RandomState(42) # Reproducible results.
>>> W = rs.uniform(size=(30, 30)) # Full graph.
>>> W[W < 0.93] = 0 # Sparse graph.
>>> W = W + W.T # Symmetric graph.
>>> np.fill_diagonal(W, 0) # No self-loops.
>>> G = graphs.Graph(W)
>>> print('{} nodes, {} edges'.format(G.N, G.Ne))
30 nodes, 60 edges
```
The [`pygsp.graphs.Graph`](index.html#pygsp.graphs.Graph) class we just instantiated is the base class for all graph objects, which offers many methods and attributes.
Given a graph object, we can test some properties.
```
>>> G.is_connected()
True
>>> G.is_directed()
False
```
We can retrieve our weight matrix, which is stored in a sparse format.
```
>>> (G.W == W).all()
True
>>> type(G.W)
<class 'scipy.sparse.lil.lil_matrix'>
```
We can access the [graph Laplacian](https://en.wikipedia.org/wiki/Laplacian_matrix)
```
>>> # The graph Laplacian (combinatorial by default).
>>> G.L.shape
(30, 30)
```
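As a numpy-only illustration (independent of PyGSP's internals, with variable names of our own choosing), the combinatorial Laplacian can be assembled directly from the weight matrix built above as \(L = D - W\), where \(D\) is the diagonal degree matrix:

```python
import numpy as np

# Rebuild the small symmetric weight matrix from the example above.
rs = np.random.RandomState(42)
W = rs.uniform(size=(30, 30))
W[W < 0.93] = 0
W = W + W.T
np.fill_diagonal(W, 0)

# Combinatorial Laplacian: L = D - W, with D the diagonal degree matrix.
degrees = W.sum(axis=1)
L = np.diag(degrees) - W

# Sanity checks: each row sums to zero, L is symmetric and
# positive semi-definite (all eigenvalues >= 0 up to rounding).
assert np.allclose(L.sum(axis=1), 0)
assert np.allclose(L, L.T)
assert np.linalg.eigvalsh(L).min() > -1e-10
```

PyGSP stores `G.L` in a sparse format instead of the dense array used here, but the definition is the same.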
We can also compute and get the graph Fourier basis (see below).
```
>>> G.compute_fourier_basis()
>>> G.U.shape
(30, 30)
```
Or the graph differential operator, useful to e.g. compute the gradient or smoothness of a signal.
```
>>> G.compute_differential_operator()
>>> G.D.shape
(60, 30)
```
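To see how the differential operator relates to the Laplacian, here is a small numpy sketch (our own toy construction, not PyGSP's implementation) of a weighted incidence matrix on a 4-node graph. Each edge contributes one row, which explains the (60, 30) shape above for a 30-node, 60-edge graph, and for the combinatorial Laplacian \(L = \nabla_G^T \nabla_G\):

```python
import numpy as np

# Toy weighted graph on 4 nodes (symmetric, no self-loops).
W = np.array([[0, 1, 0, 2],
              [1, 0, 3, 0],
              [0, 3, 0, 1],
              [2, 0, 1, 0]], dtype=float)

# Enumerate edges (i < j) and build the weighted incidence matrix:
# one row per edge, equal to sqrt(w_ij) * (delta_j - delta_i).
edges = [(i, j) for i in range(4) for j in range(i + 1, 4) if W[i, j]]
D = np.zeros((len(edges), 4))
for e, (i, j) in enumerate(edges):
    D[e, i] = -np.sqrt(W[i, j])
    D[e, j] = np.sqrt(W[i, j])

# The gradient of a node signal s is D @ s (one value per edge),
# and the combinatorial Laplacian factors as L = D.T @ D.
L = np.diag(W.sum(axis=1)) - W
assert np.allclose(D.T @ D, L)
```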
Note
Note that we called [`pygsp.graphs.Graph.compute_fourier_basis()`](index.html#pygsp.graphs.Graph.compute_fourier_basis) and
[`pygsp.graphs.Graph.compute_differential_operator()`](index.html#pygsp.graphs.Graph.compute_differential_operator) before accessing the Fourier basis [`pygsp.graphs.Graph.U`](index.html#pygsp.graphs.Graph.U) and the differential operator [`pygsp.graphs.Graph.D`](index.html#pygsp.graphs.Graph.D). Doing so is however not mandatory as those matrices would have been computed when requested (lazy evaluation).
If you omit to call the *compute* functions, a warning is printed to tell you that a potentially heavy computation is taking place under the hood (that is also why those matrices are not computed when the graph object is instantiated). It is thus encouraged to call them explicitly, so that you are aware of the computations involved.
To be able to plot a graph, we need to embed its nodes in a 2D or 3D space.
While most included graph models define these coordinates, the graph we just created does not, as we only passed a weight matrix. Let’s set some coordinates with [`pygsp.graphs.Graph.set_coordinates()`](index.html#pygsp.graphs.Graph.set_coordinates) and plot our graph.
```
>>> G.set_coordinates('ring2D')
>>> G.plot()
```
While we created our first graph ourselves, many standard models of graphs are implemented as subclasses of the Graph class and can be easily instantiated.
Check the [`pygsp.graphs`](index.html#module-pygsp.graphs) module to get a list of them and learn more about the Graph object.
##### Fourier basis[¶](#fourier-basis)
As in classical signal processing, the Fourier transform plays a central role in graph signal processing. Getting the Fourier basis is however computationally intensive as it needs to fully diagonalize the Laplacian. While it can be used to filter signals on graphs, a better alternative is to use one of the fast approximations (see `pygsp.filters.Filter.filter()`). Let’s compute it nonetheless to visualize the eigenvectors of the Laplacian.
Analogous to classical Fourier analysis, they look like sinusoids on the graph.
Let’s plot the second and third eigenvectors (the first is constant). Those are graph signals, i.e. functions \(s: \mathcal{V} \rightarrow \mathbb{R}^d\)
which assign a set of values (a vector in \(\mathbb{R}^d\)) to every node
\(v \in \mathcal{V}\) of the graph.
```
>>> G = graphs.Logo()
>>> G.compute_fourier_basis()
>>>
>>> fig, axes = plt.subplots(1, 2, figsize=(10, 3))
>>> for i, ax in enumerate(axes):
... G.plot_signal(G.U[:, i+1], vertex_size=30, ax=ax)
... _ = ax.set_title('Eigenvector {}'.format(i+2))
... ax.set_axis_off()
>>> fig.tight_layout()
```
The parallel with classical signal processing is best seen on a ring graph,
where the graph Fourier basis is equivalent to the classical Fourier basis.
The following plot shows some eigenvectors drawn on a 1D and 2D embedding of the ring graph. While the signals are easier to interpret on a 1D plot, the 2D plot best represents the graph.
```
>>> G2 = graphs.Ring(N=50)
>>> G2.compute_fourier_basis()
>>> fig, axes = plt.subplots(1, 2, figsize=(10, 4))
>>> G2.plot_signal(G2.U[:, 4], ax=axes[0])
>>> G2.set_coordinates('line1D')
>>> G2.plot_signal(G2.U[:, 1:4], ax=axes[1])
>>> fig.tight_layout()
```
##### Filters[¶](#filters)
To filter signals on graphs, we need to define filters. They are represented in the toolbox by the `pygsp.filters.Filter` class. Filters are usually defined in the spectral domain. Given the transfer function
\[g(x) = \frac{1}{1 + \tau x},\]
let’s define and plot that low-pass filter:
```
>>> tau = 1
>>> def g(x):
... return 1. / (1. + tau * x)
>>> g = filters.Filter(G, g)
>>>
>>> fig, ax = plt.subplots()
>>> g.plot(plot_eigenvalues=True, ax=ax)
>>> _ = ax.set_title('Filter frequency response')
```
The filter is plotted across the whole spectrum of the graph. The black crosses are the eigenvalues of the Laplacian: they are the points where the continuous filter is evaluated to create a discrete filter.
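To make that evaluation at the crosses concrete, here is a numpy-only sketch of exact (dense) filtering on a small path graph. PyGSP normally avoids the full eigendecomposition via Chebyshev approximations, so this illustrates the definition rather than the library's code path:

```python
import numpy as np

# Small path-graph Laplacian as a stand-in for G.L.
N = 8
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1
L = np.diag(W.sum(axis=1)) - W

# Eigendecomposition: the eigenvalues are the "graph frequencies".
lam, U = np.linalg.eigh(L)

# Evaluate the continuous low-pass filter at the eigenvalues to get
# the discrete filter, then apply it exactly: s_out = U g(Lambda) U^T s.
tau = 1.0
g = 1.0 / (1.0 + tau * lam)
s = np.random.RandomState(0).standard_normal(N)
s_out = U @ (g * (U.T @ s))

assert s_out.shape == (N,)
```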
Note
You can put multiple functions in a list to define a [filter bank](https://en.wikipedia.org/wiki/Filter_bank)!
Note
The [`pygsp.filters`](index.html#module-pygsp.filters) module implements various standard filters.
Let’s create a graph signal and add some random noise.
```
>>> # Graph signal: each letter gets a different value + additive noise.
>>> s = np.zeros(G.N)
>>> s[G.info['idx_g']-1] = -1
>>> s[G.info['idx_s']-1] = 0
>>> s[G.info['idx_p']-1] = 1
>>> s += rs.uniform(-0.5, 0.5, size=G.N)
```
We can now try to denoise that signal by filtering it with the above defined low-pass filter.
```
>>> s2 = g.filter(s)
>>>
>>> fig, axes = plt.subplots(1, 2, figsize=(10, 3))
>>> G.plot_signal(s, vertex_size=30, ax=axes[0])
>>> _ = axes[0].set_title('Noisy signal')
>>> axes[0].set_axis_off()
>>> G.plot_signal(s2, vertex_size=30, ax=axes[1])
>>> _ = axes[1].set_title('Cleaned signal')
>>> axes[1].set_axis_off()
>>> fig.tight_layout()
```
While the noise is largely removed thanks to the filter, some energy is diffused between the letters. This is the typical behavior of a low-pass filter.
So here are the basics for the PyGSP. Please check the other tutorials and the reference guide for more. Enjoy!
Note
Please see the review article [The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and Other Irregular Domains](https://arxiv.org/abs/1211.0053) for an overview of the methods this package leverages.
#### Introduction to spectral graph wavelets[¶](#introduction-to-spectral-graph-wavelets)
This tutorial will show you how to easily construct a [wavelet](https://en.wikipedia.org/wiki/Wavelet) frame, a kind of filter bank, and apply it to a signal. It will walk you through computing the wavelet coefficients of a graph signal, visualizing filters in the vertex domain, and using the wavelets to estimate the curvature of a 3D shape.
As usual, we first have to import some packages.
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from pygsp import graphs, filters, plotting, utils
```
Then we can load a graph. The graph we’ll use is a nearest-neighbor graph of a point cloud of the Stanford bunny. It will allow us to get interesting visual results using wavelets.
```
>>> G = graphs.Bunny()
```
Note
At this stage we could compute the Fourier basis using
[`pygsp.graphs.Graph.compute_fourier_basis()`](index.html#pygsp.graphs.Graph.compute_fourier_basis), but this would take some time, and it can be avoided with a Chebyshev polynomial approximation to graph filtering. See the documentation of the
`pygsp.filters.Filter.filter()` filtering function and
[hammond2011wavelets] for details on how it is done.
##### Simple filtering: heat diffusion[¶](#simple-filtering-heat-diffusion)
Before tackling wavelets, let’s observe the effect of a single filter on a graph signal. We first design a few heat kernel filters, each with a different scale.
```
>>> taus = [10, 25, 50]
>>> g = filters.Heat(G, taus)
```
Let’s create a signal as a Kronecker delta located on one vertex, e.g. vertex 20. That signal is our heat source.
```
>>> s = np.zeros(G.N)
>>> DELTA = 20
>>> s[DELTA] = 1
```
We can now simulate heat diffusion by filtering our signal s with each of our heat kernels.
```
>>> s = g.filter(s, method='chebyshev')
```
And finally plot the filtered signal showing heat diffusion at different scales.
```
>>> fig = plt.figure(figsize=(10, 3))
>>> for i in range(g.Nf):
... ax = fig.add_subplot(1, g.Nf, i+1, projection='3d')
... G.plot_signal(s[:, i], colorbar=False, ax=ax)
... title = r'Heat diffusion, $\tau={}$'.format(taus[i])
... _ = ax.set_title(title)
... ax.set_axis_off()
>>> fig.tight_layout()
```
Note
The `pygsp.filters.Filter.localize()` method can be used to visualize a filter in the vertex domain instead of doing it manually.
##### Visualizing wavelets atoms[¶](#visualizing-wavelets-atoms)
Let’s now replace the Heat filter with a filter bank of wavelets. We can create a filter bank using one of the predefined filters, such as
`pygsp.filters.MexicanHat` to design a set of [Mexican hat wavelets](https://en.wikipedia.org/wiki/Mexican_hat_wavelet).
```
>>> g = filters.MexicanHat(G, Nf=6) # Nf = 6 filters in the filter bank.
```
Then plot the frequency response of those filters.
```
>>> fig, ax = plt.subplots(figsize=(10, 5))
>>> g.plot(ax=ax)
>>> _ = ax.set_title('Filter bank of mexican hat wavelets')
```
Note
We can see that the wavelet atoms are stacked on the low frequency part of the spectrum. A better coverage could be obtained by adapting the filter bank with `pygsp.filters.WarpedTranslates` or by using another filter bank like `pygsp.filters.Itersine`.
We can visualize the atoms as we did with the heat kernel, by filtering a Kronecker delta placed at one specific vertex.
```
>>> s = g.localize(DELTA)
>>>
>>> fig = plt.figure(figsize=(10, 2.5))
>>> for i in range(3):
... ax = fig.add_subplot(1, 3, i+1, projection='3d')
... G.plot_signal(s[:, i], ax=ax)
... _ = ax.set_title('Wavelet {}'.format(i+1))
... ax.set_axis_off()
>>> fig.tight_layout()
```
##### Curvature estimation[¶](#curvature-estimation)
As a last and more applied example, let us try to estimate the curvature of the underlying 3D model by only using spectral filtering on the nearest-neighbor graph formed by its point cloud.
A simple way to accomplish that is to use the coordinates map \([x, y, z]\)
and filter it using the above defined wavelets. Doing so gives us a 3-dimensional signal
\([g_i(L)x, g_i(L)y, g_i(L)z], \ i \in [0, \ldots, N_f]\)
which describes variation along the 3 coordinates.
```
>>> s = G.coords
>>> s = g.filter(s)
```
The curvature is then estimated by taking the \(\ell_1\) or \(\ell_2\)
norm across the 3D position.
```
>>> s = np.linalg.norm(s, ord=2, axis=1)
```
Let’s finally plot the result to observe that we indeed have a measure of the curvature at different scales.
```
>>> fig = plt.figure(figsize=(10, 7))
>>> for i in range(4):
... ax = fig.add_subplot(2, 2, i+1, projection='3d')
... G.plot_signal(s[:, i], ax=ax)
... title = 'Curvature estimation (scale {})'.format(i+1)
... _ = ax.set_title(title)
... ax.set_axis_off()
>>> fig.tight_layout()
```
#### Optimization problems: graph TV vs. Tikhonov regularization[¶](#optimization-problems-graph-tv-vs-tikhonov-regularization)
##### Description[¶](#description)
Modern signal processing often involves solving an optimization problem. Graph signal processing (GSP) consists roughly of working with linear operators defined by a graph (e.g., the graph Laplacian). Setting up an optimization problem in the graph context is then often simply a matter of identifying which graph-defined linear operator is relevant for the regularization and/or fidelity term.
This tutorial focuses on the problem of recovering a label signal on a graph from subsampled and noisy data, but most of the information applies fairly generally to other problems as well.
```
>>> import numpy as np
>>> from pygsp import graphs, plotting
>>>
>>> # Create a random sensor graph
>>> G = graphs.Sensor(N=256, distribute=True, seed=42)
>>> G.compute_fourier_basis()
>>>
>>> # Create label signal
>>> label_signal = np.copysign(np.ones(G.N), G.U[:, 3])
>>>
>>> G.plot_signal(label_signal)
```
The first figure shows a plot of the original label signal, that we wish to recover, on the graph.
```
>>> rs = np.random.RandomState(42)
>>>
>>> # Create the mask
>>> M = rs.rand(G.N)
>>> M = (M > 0.6).astype(float) # Probability of having no label on a vertex.
>>>
>>> # Applying the mask to the data
>>> sigma = 0.1
>>> subsampled_noisy_label_signal = M * (label_signal + sigma * rs.standard_normal(G.N))
>>>
>>> G.plot_signal(subsampled_noisy_label_signal)
```
This figure shows the label signal on the graph after the application of the subsampling mask and the addition of noise. The label of more than half of the vertices has been set to \(0\).
Since the problem is ill-posed, we will use some regularization to reach a solution that is more in tune with what we expect a label signal to look like. We will compare two approaches, but they are both based on measuring local differences on the label signal. Those differences are essentially an edge signal: to each edge we can associate the difference between the label signals of its associated nodes. The linear operator that does such a mapping is called the graph gradient \(\nabla_G\), and, fortunately for us, it is available under the [`pygsp.graphs.Graph.D`](index.html#pygsp.graphs.Graph.D) (for differential) attribute of any [`pygsp.graphs`](index.html#module-pygsp.graphs) graph.
The reason for measuring local differences comes from prior knowledge: we assume label signals don’t vary too much locally. The precise measure of such variation is what distinguishes the two regularization approaches we’ll use.
The first one, shown below, is called graph total variation (TV) regularization. The quadratic fidelity term is multiplied by a regularization constant \(\gamma\) and its goal is to force the solution to stay close to the observed labels \(b\). The \(\ell_1\) norm of the action of the graph gradient is what’s called the graph TV. We will see that it promotes piecewise-constant solutions.
\[\operatorname*{arg\,min}_x \|\nabla_G x\|_1 + \gamma \|Mx-b\|_2^2\]
The second approach, called graph Tikhonov regularization, is to use a smooth (differentiable) quadratic regularizer. A consequence of this choice is that the solution will tend to have smoother transitions. The quadratic fidelity term is still the same.
\[\operatorname*{arg\,min}_x \|\nabla_G x\|_2^2 + \gamma \|Mx-b\|_2^2\]
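Before solving anything, a small numpy experiment (our own toy example, not part of the tutorial's pipeline) shows why the two regularizers behave differently. On a path graph, the TV term assigns the same cost to a sharp step and to a smooth ramp between the same endpoints, while the quadratic term strongly prefers the ramp:

```python
import numpy as np

# Path graph on 10 nodes: the gradient maps node signals to edge signals.
N = 10
D = np.zeros((N - 1, N))
for e in range(N - 1):
    D[e, e], D[e, e + 1] = -1.0, 1.0

# A piecewise-constant step and a smooth linear ramp from 0 to 1.
step = np.concatenate([np.zeros(N // 2), np.ones(N // 2)])
ramp = np.linspace(0, 1, N)

# Graph TV (l1 norm of the gradient) vs. Tikhonov (squared l2 norm).
tv_step, tv_ramp = np.abs(D @ step).sum(), np.abs(D @ ramp).sum()
tik_step, tik_ramp = np.sum((D @ step) ** 2), np.sum((D @ ramp) ** 2)

assert np.isclose(tv_step, tv_ramp)  # both equal 1: TV tolerates jumps
assert tik_step > tik_ramp           # Tikhonov penalizes the jump heavily
```

This is exactly why the \(\ell_1\) regularizer promotes piecewise-constant solutions and the \(\ell_2^2\) regularizer promotes smooth transitions.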
##### Results and code[¶](#results-and-code)
For solving the optimization problems we’ve assembled, you will need a numerical solver package. This part is implemented in this tutorial with the [pyunlocbox](https://github.com/epfl-lts2/pyunlocbox), which is based on proximal splitting algorithms. Check also its [documentation](https://pyunlocbox.readthedocs.io) for more information about the parameters used here.
We start with the graph TV regularization. We will use the [`pyunlocbox.solvers.mlfbf`](https://pyunlocbox.readthedocs.io/en/stable/reference/solvers.html#pyunlocbox.solvers.mlfbf) solver from [`pyunlocbox`](https://pyunlocbox.readthedocs.io/en/stable/reference/index.html#module-pyunlocbox). It is a primal-dual solver, which means for our problem that the regularization term will be written in terms of the dual variable \(u = \nabla_G x\), and the graph gradient \(\nabla_G\) will be passed to the solver as the primal-dual map. The value of \(3.0\) for the regularization parameter \(\gamma\) was chosen on the basis of the visual appeal of the returned solution.
```
>>> import pyunlocbox
>>>
>>> # Set the functions in the problem
>>> gamma = 3.0
>>> d = pyunlocbox.functions.dummy()
>>> r = pyunlocbox.functions.norm_l1()
>>> f = pyunlocbox.functions.norm_l2(w=M, y=subsampled_noisy_label_signal,
... lambda_=gamma)
>>>
>>> # Define the solver
>>> G.compute_differential_operator()
>>> L = G.D.toarray()
>>> step = 0.999 / (1 + np.linalg.norm(L))
>>> solver = pyunlocbox.solvers.mlfbf(L=L, step=step)
>>>
>>> # Solve the problem
>>> x0 = subsampled_noisy_label_signal.copy()
>>> prob1 = pyunlocbox.solvers.solve([d, r, f], solver=solver,
... x0=x0, rtol=0, maxit=1000)
Solution found after 1000 iterations:
objective function f(sol) = 2.024594e+02
stopping criterion: MAXIT
>>>
>>> G.plot_signal(prob1['sol'])
```
This figure shows the label signal recovered by graph total variation regularization. We can confirm here that this sort of regularization does indeed promote piecewise-constant solutions.
```
>>> # Set the functions in the problem
>>> r = pyunlocbox.functions.norm_l2(A=L, tight=False)
>>>
>>> # Define the solver
>>> step = 0.999 / np.linalg.norm(np.dot(L.T, L) + gamma * np.diag(M), 2)
>>> solver = pyunlocbox.solvers.gradient_descent(step=step)
>>>
>>> # Solve the problem
>>> x0 = subsampled_noisy_label_signal.copy()
>>> prob2 = pyunlocbox.solvers.solve([r, f], solver=solver,
... x0=x0, rtol=0, maxit=1000)
Solution found after 1000 iterations:
objective function f(sol) = 9.555135e+01
stopping criterion: MAXIT
>>>
>>> G.plot_signal(prob2['sol'])
```
This last figure shows the label signal recovered by Tikhonov regularization. As expected, the recovered label signal has smoother transitions than the one obtained by graph TV regularization.
#### Graph multiresolution: Kron pyramid[¶](#graph-multiresolution-kron-pyramid)
In this demonstration, we show how to reduce a graph using the PyGSP, and then apply the resulting pyramid to a simple signal.
To start, open a Python shell (IPython is recommended here) and import the required packages. You will also need NumPy to create matrices and arrays.
```
>>> import numpy as np
>>> from pygsp import graphs, reduction
```
For this demo we will be using a sensor graph with 512 nodes.
```
>>> G = graphs.Sensor(512, distribute=True)
>>> G.compute_fourier_basis()
```
The function graph_multiresolution computes the graph pyramid for you:
```
>>> levels = 5
>>> Gs = reduction.graph_multiresolution(G, levels, sparsify=False)
```
Next, we will compute the Fourier basis of our different graph layers:
```
>>> for gr in Gs:
... gr.compute_fourier_basis()
```
For layers whose Fourier basis was already computed, the call returns immediately and nothing happens.
Let’s now create two signals and a filter, respectively f, f2 and g:
```
>>> f = np.ones((G.N))
>>> f[np.arange(G.N//2)] = -1
>>> f = f + 10*Gs[0].U[:, 7]
```
```
>>> f2 = np.ones((G.N, 2))
>>> f2[np.arange(G.N//2)] = -1
```
```
>>> g = [lambda x: 5./(5 + x)]
```
We will run the analysis of the two signals on the pyramid and obtain a coarse approximation for each layer, with a decreasing number of nodes.
We will also get prediction errors at each node of every layer.
```
>>> ca, pe = reduction.pyramid_analysis(Gs, f, h_filters=g, method='exact')
>>> ca2, pe2 = reduction.pyramid_analysis(Gs, f2, h_filters=g, method='exact')
```
Given the pyramid, the coarsest approximation and the prediction errors, we will now reconstruct the original signal on the full graph.
```
>>> f_pred, _ = reduction.pyramid_synthesis(Gs, ca[levels], pe, method='exact')
>>> f_pred2, _ = reduction.pyramid_synthesis(Gs, ca2[levels], pe2, method='exact')
```
Here are the final errors for each signal after reconstruction.
```
>>> err = np.linalg.norm(f_pred-f)/np.linalg.norm(f)
>>> err2 = np.linalg.norm(f_pred2-f2)/np.linalg.norm(f2)
>>> assert (err < 1e-10) & (err2 < 1e-10)
```
### Reference guide[¶](#module-pygsp)
The [`pygsp`](#module-pygsp) package is mainly organized around the following two modules:
* [`graphs`](index.html#module-pygsp.graphs) to create and manipulate various kinds of graphs,
* [`filters`](index.html#module-pygsp.filters) to create and manipulate various graph filters.
Moreover, the following modules provide additional functionality:
* [`plotting`](index.html#module-pygsp.plotting) to plot,
* [`reduction`](index.html#module-pygsp.reduction) to reduce a graph while keeping its structure,
* [`features`](index.html#module-pygsp.features) to compute features on graphs,
* [`optimization`](index.html#module-pygsp.optimization) to help solve convex optimization problems,
* [`utils`](index.html#module-pygsp.utils) for various utilities.
#### Graphs[¶](#module-pygsp.graphs)
The [`pygsp.graphs`](#module-pygsp.graphs) module implements the graph class hierarchy. A graph object is either constructed from an adjacency matrix, or by instantiating one of the built-in graph models.
##### Interface[¶](#interface)
The [`Graph`](#pygsp.graphs.Graph) base class allows constructing a graph object from any adjacency matrix and provides a common interface to that object. Derived classes then allow instantiating various standard graph models.
###### Matrix operators[¶](#matrix-operators)
| `Graph.W` | |
| `Graph.L` | |
| [`Graph.U`](#pygsp.graphs.Graph.U) | Fourier basis (eigenvectors of the Laplacian). |
| [`Graph.D`](#pygsp.graphs.Graph.D) | Differential operator (for gradient and divergence). |
###### Checks[¶](#checks)
| [`Graph.check_weights`](#pygsp.graphs.Graph.check_weights)(self) | Check the characteristics of the weights matrix. |
| [`Graph.is_connected`](#pygsp.graphs.Graph.is_connected)(self[, recompute]) | Check the strong connectivity of the graph (cached). |
| [`Graph.is_directed`](#pygsp.graphs.Graph.is_directed)(self[, recompute]) | Check if the graph has directed edges (cached). |
###### Attributes computation[¶](#attributes-computation)
| [`Graph.compute_laplacian`](#pygsp.graphs.Graph.compute_laplacian)(self[, lap_type]) | Compute a graph Laplacian. |
| [`Graph.estimate_lmax`](#pygsp.graphs.Graph.estimate_lmax)(self[, recompute]) | Estimate the Laplacian’s largest eigenvalue (cached). |
| [`Graph.compute_fourier_basis`](#pygsp.graphs.Graph.compute_fourier_basis)(self[, recompute]) | Compute the Fourier basis of the graph (cached). |
| [`Graph.compute_differential_operator`](#pygsp.graphs.Graph.compute_differential_operator)(self) | Compute the graph differential operator (cached). |
###### Differential operators[¶](#differential-operators)
| [`Graph.grad`](#pygsp.graphs.Graph.grad)(self, s) | Compute the gradient of a graph signal. |
| [`Graph.div`](#pygsp.graphs.Graph.div)(self, s) | Compute the divergence of a graph signal. |
###### Localization[¶](#localization)
| [`Graph.modulate`](#pygsp.graphs.Graph.modulate)(self, f, k) | Modulate the signal *f* to the frequency *k*. |
| [`Graph.translate`](#pygsp.graphs.Graph.translate)(self, f, i) | Translate the signal *f* to the node *i*. |
###### Transforms (frequency and vertex-frequency)[¶](#transforms-frequency-and-vertex-frequency)
| [`Graph.gft`](#pygsp.graphs.Graph.gft)(self, s) | Compute the graph Fourier transform. |
| [`Graph.igft`](#pygsp.graphs.Graph.igft)(self, s_hat) | Compute the inverse graph Fourier transform. |
| [`Graph.gft_windowed`](#pygsp.graphs.Graph.gft_windowed)(self, g, f[, lowmemory]) | Windowed graph Fourier transform. |
| [`Graph.gft_windowed_gabor`](#pygsp.graphs.Graph.gft_windowed_gabor)(self, s, k) | Gabor windowed graph Fourier transform. |
| [`Graph.gft_windowed_normalized`](#pygsp.graphs.Graph.gft_windowed_normalized)(self, g, f[, …]) | Normalized windowed graph Fourier transform. |
###### Plotting[¶](#plotting)
| [`Graph.plot`](#pygsp.graphs.Graph.plot)(self, \*\*kwargs) | Plot the graph. |
| [`Graph.plot_signal`](#pygsp.graphs.Graph.plot_signal)(self, signal, \*\*kwargs) | Plot a signal on that graph. |
| [`Graph.plot_spectrogram`](#pygsp.graphs.Graph.plot_spectrogram)(self, \*\*kwargs) | Plot the graph’s spectrogram. |
###### Others[¶](#others)
| [`Graph.get_edge_list`](#pygsp.graphs.Graph.get_edge_list)(self) | Return an edge list, an alternative representation of the graph. |
| [`Graph.set_coordinates`](#pygsp.graphs.Graph.set_coordinates)(self[, kind]) | Set node’s coordinates (their position when plotting). |
| [`Graph.subgraph`](#pygsp.graphs.Graph.subgraph)(self, ind) | Create a subgraph given indices. |
| [`Graph.extract_components`](#pygsp.graphs.Graph.extract_components)(self) | Split the graph into connected components. |
##### Graph models[¶](#graph-models)
| `Airfoil`(**kwargs) | Airfoil graph. |
| `BarabasiAlbert`([N, m0, m, seed]) | Barabasi-Albert preferential attachment. |
| `Comet`([N, k]) | Comet graph. |
| `Community`([N, Nc, min_comm, min_deg, …]) | Community graph. |
| `DavidSensorNet`([N, seed]) | Sensor network. |
| `ErdosRenyi`([N, p, directed, self_loops, …]) | Erdos Renyi graph. |
| `FullConnected`([N]) | Fully connected graph. |
| `Grid2d`([N1, N2]) | 2-dimensional grid graph. |
| `Logo`(**kwargs) | GSP logo. |
| `LowStretchTree`([k]) | Low stretch tree. |
| `Minnesota`([connect]) | Minnesota road network (from MatlabBGL). |
| `Path`([N]) | Path graph. |
| `RandomRegular`([N, k, maxIter, seed]) | Random k-regular graph. |
| `RandomRing`([N, seed]) | Ring graph with randomly sampled nodes. |
| `Ring`([N, k]) | K-regular ring graph. |
| `Sensor`([N, Nc, regular, n_try, distribute, …]) | Random sensor graph. |
| `StochasticBlockModel`([N, k, z, M, p, q, …]) | Stochastic Block Model (SBM). |
| `SwissRoll`([N, a, b, dim, thresh, s, noise, …]) | Sampled Swiss roll manifold. |
| `Torus`([Nv, Mv]) | Sampled torus manifold. |
###### Nearest-neighbors graphs constructed from point clouds[¶](#nearest-neighbors-graphs-constructed-from-point-clouds)
| `NNGraph`(Xin[, NNtype, use_flann, center, …]) | Nearest-neighbor graph from given point cloud. |
| `Bunny`(**kwargs) | Stanford bunny (NN-graph). |
| `Cube`([radius, nb_pts, nb_dim, sampling, seed]) | Hyper-cube (NN-graph). |
| `ImgPatches`(img[, patch_shape]) | NN-graph between patches of an image. |
| `Grid2dImgPatches`(img[, aggregate]) | Union of a patch graph with a 2D grid graph. |
| `Sphere`([radius, nb_pts, nb_dim, sampling, seed]) | Spherical-shaped graph (NN-graph). |
| `TwoMoons`([moontype, dim, sigmag, N, sigmad, …]) | Two Moons (NN-graph). |
*class* `pygsp.graphs.``Graph`(*W*, *gtype='unknown'*, *lap_type='combinatorial'*, *coords=None*, *plotting={}*)[[source]](_modules/pygsp/graphs/graph.html#Graph)[¶](#pygsp.graphs.Graph)
Base graph class.
* Provide a common interface (and implementation) to graph objects.
* Can be instantiated to construct custom graphs from a weight matrix.
* Initialize attributes for derived classes.
Parameters
**W** (sparse matrix or ndarray): The weight matrix which encodes the graph.
**gtype** (string): Graph type, a free-form string to help us recognize the kind of graph we are dealing with (default is ‘unknown’).
**lap_type** (‘combinatorial’, ‘normalized’): The type of Laplacian to be computed by [`compute_laplacian()`](#pygsp.graphs.Graph.compute_laplacian) (default is ‘combinatorial’).
**coords** (ndarray): Vertices coordinates (default is None).
**plotting** (dict): Plotting parameters.
Examples
```
>>> W = np.arange(4).reshape(2, 2)
>>> G = graphs.Graph(W)
```
Attributes
**N** (int): the number of nodes / vertices in the graph.
**Ne** (int): the number of edges / links in the graph, i.e. connections between nodes.
**W** (sparse matrix): the weight matrix which contains the weights of the connections. It is represented as an N-by-N matrix of floats. \(W_{i,j} = 0\) means that there is no direct connection from i to j.
**gtype** (string): the graph type, a short description of the graph object designed to help sorting the graphs.
**L** (sparse matrix): the graph Laplacian, an N-by-N matrix computed from W.
**lap_type** (‘normalized’, ‘combinatorial’): the kind of Laplacian that was computed by [`compute_laplacian()`](#pygsp.graphs.Graph.compute_laplacian).
**coords** (ndarray): vertices coordinates in 2D or 3D space. Used for plotting only. Default is None.
**plotting** (dict): plotting parameters.
`check_weights`(*self*)[[source]](_modules/pygsp/graphs/graph.html#Graph.check_weights)[¶](#pygsp.graphs.Graph.check_weights)
Check the characteristics of the weights matrix.
Returns
A dict of bools containing information about the matrix:
**has_inf_val** (bool): True if the matrix has infinite values, else false.
**has_nan_value** (bool): True if the matrix has a “not a number” value, else false.
**is_not_square** (bool): True if the matrix is not square, else false.
**diag_is_not_zero** (bool): True if the matrix diagonal has not only zeros, else false.
Examples
```
>>> W = np.arange(4).reshape(2, 2)
>>> G = graphs.Graph(W)
>>> cw = G.check_weights()
>>> cw == {'has_inf_val': False, 'has_nan_value': False,
... 'is_not_square': False, 'diag_is_not_zero': True}
True
```
`compute_differential_operator`(*self*)[¶](#pygsp.graphs.Graph.compute_differential_operator)
Compute the graph differential operator (cached).
The differential operator is a matrix such that
\[L = D^T D,\]
where \(D\) is the differential operator and \(L\) is the graph Laplacian. It is used to compute the gradient and the divergence of a graph signal, see [`grad()`](#pygsp.graphs.Graph.grad) and [`div()`](#pygsp.graphs.Graph.div).
The result is cached and accessible by the [`D`](#pygsp.graphs.Graph.D) property.
See also
[`grad`](#pygsp.graphs.Graph.grad)compute the gradient
[`div`](#pygsp.graphs.Graph.div)compute the divergence
Examples
```
>>> G = graphs.Logo()
>>> G.N, G.Ne
(1130, 3131)
>>> G.compute_differential_operator()
>>> G.D.shape == (G.Ne, G.N)
True
```
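The relation \(L = D^T D\) can also be checked by hand on a tiny graph, independently of PyGSP. The sketch below builds a weighted incidence-style operator with one row per edge (the per-edge sign convention is an arbitrary choice for undirected graphs):

```python
import numpy as np

# Tiny undirected graph: 3 nodes, edges (0,1) with weight 2 and (1,2) with weight 1.
W = np.array([[0., 2., 0.],
              [2., 0., 1.],
              [0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W               # combinatorial Laplacian

edges = [(0, 1, 2.0), (1, 2, 1.0)]           # (i, j, weight)
D = np.zeros((len(edges), W.shape[0]))       # one row per edge (Ne x N)
for e, (i, j, w) in enumerate(edges):
    D[e, i] = -np.sqrt(w)
    D[e, j] = np.sqrt(w)

print(np.allclose(D.T @ D, L))               # prints True
```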
`compute_fourier_basis`(*self*, *recompute=False*)[¶](#pygsp.graphs.Graph.compute_fourier_basis)
Compute the Fourier basis of the graph (cached).
The result is cached and accessible by the [`U`](#pygsp.graphs.Graph.U), [`e`](#pygsp.graphs.Graph.e),
[`lmax`](#pygsp.graphs.Graph.lmax), and [`mu`](#pygsp.graphs.Graph.mu) properties.
Parameters
**recompute** (bool): Force to recompute the Fourier basis if already existing.
Notes
‘G.compute_fourier_basis()’ computes a full eigendecomposition of the graph Laplacian \(L\) such that:
\[L = U \Lambda U^*,\]
where \(\Lambda\) is a diagonal matrix of eigenvalues and the columns of \(U\) are the eigenvectors.
*G.e* is a vector of length *G.N* containing the Laplacian eigenvalues. The largest eigenvalue is stored in *G.lmax*.
The eigenvectors are stored as column vectors of *G.U* in the same order as the eigenvalues. Finally, the coherence of the Fourier basis is found in *G.mu*.
References
See [[Chu97]](index.html#chung1997spectral).
Examples
```
>>> G = graphs.Torus()
>>> G.compute_fourier_basis()
>>> G.U.shape
(256, 256)
>>> G.e.shape
(256,)
>>> G.lmax == G.e[-1]
True
>>> G.mu < 1
True
```
`compute_laplacian`(*self*, *lap_type='combinatorial'*)[[source]](_modules/pygsp/graphs/graph.html#Graph.compute_laplacian)[¶](#pygsp.graphs.Graph.compute_laplacian)
Compute a graph Laplacian.
The result is accessible by the L attribute.
Parameters
**lap_type** (‘combinatorial’, ‘normalized’): The type of Laplacian to compute. Default is combinatorial.
Notes
For undirected graphs, the combinatorial Laplacian is defined as
\[L = D - W,\]
where \(W\) is the weight matrix and \(D\) the degree matrix,
and the normalized Laplacian is defined as
\[L = I - D^{-1/2} W D^{-1/2},\]
where \(I\) is the identity matrix.
Examples
```
>>> G = graphs.Sensor(50)
>>> G.L.shape
(50, 50)
>>>
>>> G.compute_laplacian('combinatorial')
>>> G.compute_fourier_basis()
>>> -1e-10 < G.e[0] < 1e-10 # Smallest eigenvalue close to 0.
True
>>>
>>> G.compute_laplacian('normalized')
>>> G.compute_fourier_basis(recompute=True)
>>> -1e-10 < G.e[0] < 1e-10 < G.e[-1] < 2 # Spectrum in [0, 2].
True
```
`div`(*self*, *s*)[¶](#pygsp.graphs.Graph.div)
Compute the divergence of a graph signal.
The divergence of a signal \(s\) is defined as
\[y = D^T s,\]
where \(D\) is the differential operator [`D`](#pygsp.graphs.Graph.D).
Parameters
**s** (ndarray): Signal of length G.Ne/2 living on the edges (non-directed graph).
Returns
**s_div** (ndarray): Divergence signal of length G.N living on the nodes.
See also
[`compute_differential_operator`](#pygsp.graphs.Graph.compute_differential_operator)
[`grad`](#pygsp.graphs.Graph.grad)compute the gradient
Examples
```
>>> G = graphs.Logo()
>>> G.N, G.Ne
(1130, 3131)
>>> s = np.random.normal(size=G.Ne)
>>> s_div = G.div(s)
>>> s_grad = G.grad(s_div)
```
`estimate_lmax`(*self*, *recompute=False*)[[source]](_modules/pygsp/graphs/graph.html#Graph.estimate_lmax)[¶](#pygsp.graphs.Graph.estimate_lmax)
Estimate the Laplacian’s largest eigenvalue (cached).
The result is cached and accessible by the [`lmax`](#pygsp.graphs.Graph.lmax) property.
The exact value is given by the eigendecomposition of the Laplacian, see
[`compute_fourier_basis()`](#pygsp.graphs.Graph.compute_fourier_basis). This estimation is much faster than the eigendecomposition.
Parameters
**recompute** (boolean): Force to recompute the largest eigenvalue. Default is false.
Notes
Runs the implicitly restarted Lanczos method with a large tolerance,
then increases the calculated largest eigenvalue by 1 percent. For much of the PyGSP machinery, we need to approximate wavelet kernels on an interval that contains the spectrum of L. The only cost of using a larger interval is that the polynomial approximation over the larger interval may be a slightly worse approximation on the actual spectrum.
As this is a very mild effect, it is not necessary to obtain very tight bounds on the spectrum of L.
Examples
```
>>> G = graphs.Logo()
>>> G.compute_fourier_basis()
>>> print('{:.2f}'.format(G.lmax))
13.78
>>> G = graphs.Logo()
>>> G.estimate_lmax(recompute=True)
>>> print('{:.2f}'.format(G.lmax))
13.92
```
`extract_components`(*self*)[[source]](_modules/pygsp/graphs/graph.html#Graph.extract_components)[¶](#pygsp.graphs.Graph.extract_components)
Split the graph into connected components.
See [`is_connected()`](#pygsp.graphs.Graph.is_connected) for the method used to determine connectedness.
Returns
**graphs** (list): A list of graph structures, each having its own node list and weight matrix. If the graph is directed, the information about the source and sink nodes is added to the info parameter.
Examples
```
>>> from scipy import sparse
>>> W = sparse.rand(10, 10, 0.2)
>>> W = utils.symmetrize(W)
>>> G = graphs.Graph(W=W)
>>> components = G.extract_components()
>>> has_sinks = 'sink' in components[0].info
>>> sinks_0 = components[0].info['sink'] if has_sinks else []
```
`get_edge_list`(*self*)[[source]](_modules/pygsp/graphs/graph.html#Graph.get_edge_list)[¶](#pygsp.graphs.Graph.get_edge_list)
Return an edge list, an alternative representation of the graph.
The weighted adjacency matrix is the canonical form used in this package to represent a graph as it is the easiest to work with when considering spectral methods.
Returns
**v_in** (vector of int)
**v_out** (vector of int)
**weights** (vector of float)
Examples
```
>>> G = graphs.Logo()
>>> v_in, v_out, weights = G.get_edge_list()
>>> v_in.shape, v_out.shape, weights.shape
((3131,), (3131,), (3131,))
```
`gft`(*self*, *s*)[¶](#pygsp.graphs.Graph.gft)
Compute the graph Fourier transform.
The graph Fourier transform of a signal \(s\) is defined as
\[\hat{s} = U^* s,\]
where \(U\) is the Fourier basis [`U`](#pygsp.graphs.Graph.U) and \(U^*\) denotes the conjugate (Hermitian) transpose of \(U\).
Parameters
**s** (ndarray): Graph signal in the vertex domain.
Returns
**s_hat** (ndarray): Representation of s in the Fourier domain.
Examples
```
>>> G = graphs.Logo()
>>> G.compute_fourier_basis()
>>> s = np.random.normal(size=(G.N, 5, 1))
>>> s_hat = G.gft(s)
>>> s_star = G.igft(s_hat)
>>> np.all((s - s_star) < 1e-10)
True
```
`gft_windowed`(*self*, *g*, *f*, *lowmemory=True*)[¶](#pygsp.graphs.Graph.gft_windowed)
Windowed graph Fourier transform.
Parameters
**g** (ndarray or Filter): Window (graph signal or kernel).
**f** (ndarray): Graph signal in the vertex domain.
**lowmemory** (bool): Use less memory (default=True).
Returns
**C** (ndarray): Coefficients.
`gft_windowed_gabor`(*self*, *s*, *k*)[¶](#pygsp.graphs.Graph.gft_windowed_gabor)
Gabor windowed graph Fourier transform.
Parameters
**s** (ndarray): Graph signal in the vertex domain.
**k** (function): Gabor kernel. See `pygsp.filters.Gabor`.
Returns
**s** (ndarray): Vertex-frequency representation of the signals.
Examples
```
>>> G = graphs.Logo()
>>> s = np.random.normal(size=(G.N, 2))
>>> s = G.gft_windowed_gabor(s, lambda x: x/(1.-x))
>>> s.shape
(1130, 2, 1130)
```
`gft_windowed_normalized`(*self*, *g*, *f*, *lowmemory=True*)[¶](#pygsp.graphs.Graph.gft_windowed_normalized)
Normalized windowed graph Fourier transform.
Parameters
**g** (ndarray): Window.
**f** (ndarray): Graph signal in the vertex domain.
**lowmemory** (bool): Use less memory (default=True).
Returns
**C** (ndarray): Coefficients.
`grad`(*self*, *s*)[¶](#pygsp.graphs.Graph.grad)
Compute the gradient of a graph signal.
The gradient of a signal \(s\) is defined as
\[y = D s,\]
where \(D\) is the differential operator [`D`](#pygsp.graphs.Graph.D).
Parameters
**s** (ndarray): Signal of length G.N living on the nodes.
Returns
**s_grad** (ndarray): Gradient signal of length G.Ne/2 living on the edges (non-directed graph).
See also
[`compute_differential_operator`](#pygsp.graphs.Graph.compute_differential_operator)
[`div`](#pygsp.graphs.Graph.div)compute the divergence
Examples
```
>>> G = graphs.Logo()
>>> G.N, G.Ne
(1130, 3131)
>>> s = np.random.normal(size=G.N)
>>> s_grad = G.grad(s)
>>> s_div = G.div(s_grad)
>>> np.linalg.norm(s_div - G.L.dot(s)) < 1e-10
True
```
`igft`(*self*, *s_hat*)[¶](#pygsp.graphs.Graph.igft)
Compute the inverse graph Fourier transform.
The inverse graph Fourier transform of a Fourier domain signal
\(\hat{s}\) is defined as
\[s = U \hat{s},\]
where \(U\) is the Fourier basis [`U`](#pygsp.graphs.Graph.U).
Parameters
**s_hat** (ndarray): Graph signal in the Fourier domain.
Returns
**s** (ndarray): Representation of s_hat in the vertex domain.
Examples
```
>>> G = graphs.Logo()
>>> G.compute_fourier_basis()
>>> s_hat = np.random.normal(size=(G.N, 5, 1))
>>> s = G.igft(s_hat)
>>> s_hat_star = G.gft(s)
>>> np.all((s_hat - s_hat_star) < 1e-10)
True
```
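The forward/inverse pair can be reproduced with plain numpy on a small graph (a sketch, not the pygsp API): the Fourier basis is the eigenvector matrix of the Laplacian, and the round trip recovers the signal.

```python
import numpy as np

# Complete graph on 3 nodes.
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
e, U = np.linalg.eigh(L)        # L = U diag(e) U^T; columns of U are the Fourier basis

s = np.array([1., 2., 3.])
s_hat = U.T @ s                 # GFT (U is real here, so U^* = U^T)
s_back = U @ s_hat              # inverse GFT
print(np.allclose(s, s_back))   # prints True
```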
`is_connected`(*self*, *recompute=False*)[[source]](_modules/pygsp/graphs/graph.html#Graph.is_connected)[¶](#pygsp.graphs.Graph.is_connected)
Check the strong connectivity of the graph (cached).
It uses a depth-first search (DFS) traversal of the graph to ensure that each node is visited.
For undirected graphs, starting at any vertex and trying to access all others is enough.
For directed graphs, one needs to check that a random vertex is accessible by all others and can access all others. Thus, we can transpose the adjacency matrix and compute again with the same starting point in both phases.
Parameters
**recompute** (bool): Force to recompute the connectivity if already known.
Returns
**connected** (bool): True if the graph is connected.
Examples
```
>>> from scipy import sparse
>>> W = sparse.rand(10, 10, 0.2)
>>> G = graphs.Graph(W=W)
>>> connected = G.is_connected()
```
`is_directed`(*self*, *recompute=False*)[[source]](_modules/pygsp/graphs/graph.html#Graph.is_directed)[¶](#pygsp.graphs.Graph.is_directed)
Check if the graph has directed edges (cached).
In this framework, we consider a graph to be directed if and only if its weight matrix is non-symmetric.
Parameters
**recompute** (bool): Force to recompute the directedness if already known.
Returns
**directed** (bool): True if the graph is directed.
Notes
Can also be used to check if a matrix is symmetric.
Examples
```
>>> from scipy import sparse
>>> W = sparse.rand(10, 10, 0.2)
>>> G = graphs.Graph(W=W)
>>> directed = G.is_directed()
```
`modulate`(*self*, *f*, *k*)[[source]](_modules/pygsp/graphs/graph.html#Graph.modulate)[¶](#pygsp.graphs.Graph.modulate)
Modulate the signal *f* to the frequency *k*.
Parameters
**f** (ndarray): Signal (column).
**k** (int): Index of frequencies.
Returns
**fm** (ndarray): Modulated signal.
`plot`(*self*, *\*\*kwargs*)[[source]](_modules/pygsp/graphs/graph.html#Graph.plot)[¶](#pygsp.graphs.Graph.plot)
Plot the graph.
See `pygsp.plotting.plot_graph()`.
`plot_signal`(*self*, *signal*, *\*\*kwargs*)[[source]](_modules/pygsp/graphs/graph.html#Graph.plot_signal)[¶](#pygsp.graphs.Graph.plot_signal)
Plot a signal on that graph.
See `pygsp.plotting.plot_signal()`.
`plot_spectrogram`(*self*, *\*\*kwargs*)[[source]](_modules/pygsp/graphs/graph.html#Graph.plot_spectrogram)[¶](#pygsp.graphs.Graph.plot_spectrogram)
Plot the graph’s spectrogram.
See `pygsp.plotting.plot_spectrogram()`.
`set_coordinates`(*self*, *kind='spring'*, *\*\*kwargs*)[[source]](_modules/pygsp/graphs/graph.html#Graph.set_coordinates)[¶](#pygsp.graphs.Graph.set_coordinates)
Set node’s coordinates (their position when plotting).
Parameters
**kind** (string or array-like): Kind of coordinates to generate. It controls the position of the nodes when plotting the graph. Can either pass an array of size Nx2 or Nx3 to set the coordinates manually or the name of a layout algorithm. Available algorithms: community2D, random2D, random3D, ring2D, line1D, spring. Default is ‘spring’.
**kwargs** (dict): Additional parameters to be passed to the Fruchterman-Reingold force-directed algorithm when kind is spring.
Examples
```
>>> G = graphs.ErdosRenyi()
>>> G.set_coordinates()
>>> G.plot()
```
`subgraph`(*self*, *ind*)[[source]](_modules/pygsp/graphs/graph.html#Graph.subgraph)[¶](#pygsp.graphs.Graph.subgraph)
Create a subgraph given indices.
Parameters
**ind** (list): Nodes to keep.
Returns
**sub_G** (Graph): Subgraph.
Examples
```
>>> W = np.arange(16).reshape(4, 4)
>>> G = graphs.Graph(W)
>>> ind = [1, 3]
>>> sub_G = G.subgraph(ind)
```
`translate`(*self*, *f*, *i*)[¶](#pygsp.graphs.Graph.translate)
Translate the signal *f* to the node *i*.
Parameters
**f** (ndarray): Signal.
**i** (int): Index of vertex.
Returns
**ft**: Translated signal.
*property* `A`[¶](#pygsp.graphs.Graph.A)
Graph adjacency matrix (the binary version of W).
The adjacency matrix defines which edges exist on the graph.
It is represented as an N-by-N matrix of booleans.
\(A_{i,j}\) is True if \(W_{i,j} > 0\).
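With a scipy sparse weight matrix, this corresponds to a simple elementwise comparison (a sketch of the relationship; PyGSP computes `A` internally):

```python
import numpy as np
from scipy import sparse

# Two nodes joined by an edge of weight 2.
W = sparse.csr_matrix(np.array([[0., 2.],
                                [2., 0.]]))
A = W > 0   # boolean N-by-N matrix: True wherever an edge exists
```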
*property* `D`[¶](#pygsp.graphs.Graph.D)
Differential operator (for gradient and divergence).
Is computed by [`compute_differential_operator()`](#pygsp.graphs.Graph.compute_differential_operator).
*property* `U`[¶](#pygsp.graphs.Graph.U)
Fourier basis (eigenvectors of the Laplacian).
Is computed by [`compute_fourier_basis()`](#pygsp.graphs.Graph.compute_fourier_basis).
*property* `d`[¶](#pygsp.graphs.Graph.d)
The degree (the number of neighbors) of each node.
*property* `dw`[¶](#pygsp.graphs.Graph.dw)
The weighted degree (the sum of weighted edges) of each node.
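Both degree notions follow directly from the weight matrix (a minimal numpy sketch; PyGSP computes them internally):

```python
import numpy as np

W = np.array([[0., 2., 0.],
              [2., 0., 1.],
              [0., 1., 0.]])
d = (W > 0).sum(axis=1)   # degree: number of neighbors of each node
dw = W.sum(axis=1)        # weighted degree: sum of each node's edge weights
```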
*property* `e`[¶](#pygsp.graphs.Graph.e)
Eigenvalues of the Laplacian (square of graph frequencies).
Is computed by [`compute_fourier_basis()`](#pygsp.graphs.Graph.compute_fourier_basis).
*property* `lmax`[¶](#pygsp.graphs.Graph.lmax)
Largest eigenvalue of the graph Laplacian.
Can be exactly computed by [`compute_fourier_basis()`](#pygsp.graphs.Graph.compute_fourier_basis) or approximated by [`estimate_lmax()`](#pygsp.graphs.Graph.estimate_lmax).
*property* `mu`[¶](#pygsp.graphs.Graph.mu)
Coherence of the Fourier basis.
Is computed by [`compute_fourier_basis()`](#pygsp.graphs.Graph.compute_fourier_basis).
#### Filters[¶](#module-pygsp.filters)
The [`pygsp.filters`](#module-pygsp.filters) module implements methods used for filtering and defines commonly used filters that can be applied to [`pygsp.graphs`](index.html#module-pygsp.graphs). A filter is associated to a graph and is defined with one or several functions.
We define by filter bank a list of filters, usually centered around different frequencies, applied to a single graph.
##### Interface[¶](#interface)
The `Filter` base class implements a common interface to all filters:
| `Filter.evaluate`(self, x) | Evaluate the kernels at given frequencies. |
| `Filter.filter`(self, s[, method, order]) | Filter signals (analysis or synthesis). |
| `Filter.analyze`(self, s[, method, order]) | Convenience alias to `filter()`. |
| `Filter.synthesize`(self, s[, method, order]) | Convenience wrapper around `filter()`. |
| `Filter.compute_frame`(self, \*\*kwargs) | Compute the associated frame. |
| `Filter.estimate_frame_bounds`(self[, min, …]) | Estimate lower and upper frame bounds. |
| `Filter.plot`(self, \*\*kwargs) | Plot the filter bank’s frequency response. |
| `Filter.localize`(self, i, \*\*kwargs) | Localize the kernels at a node (to visualize them). |
##### Filters[¶](#id1)
Then, derived classes implement various common graph filters.
**Filter banks of N filters**
| `Abspline`(G[, Nf, lpfactor, scales]) | Design an A B cubic spline wavelet filter bank. |
| `Gabor`(G, k) | Design a Gabor filter bank. |
| `HalfCosine`(G[, Nf]) | Design an half cosine filter bank (tight frame). |
| `Itersine`(G[, Nf, overlap]) | Design an itersine filter bank (tight frame). |
| `MexicanHat`(G[, Nf, lpfactor, scales, normalize]) | Design a filter bank of Mexican hat wavelets. |
| `Meyer`(G[, Nf, scales]) | Design a filter bank of Meyer wavelets (tight frame). |
| `SimpleTight`(G[, Nf, scales]) | Design a simple tight frame filter bank (tight frame). |
**Filter banks of 2 filters: a low pass and a high pass**
| `Regular`(G[, d]) | Design 2 filters with the regular construction (tight frame). |
| `Held`(G[, a]) | Design 2 filters with the Held construction (tight frame). |
| `Simoncelli`(G[, a]) | Design 2 filters with the Simoncelli construction (tight frame). |
| `Papadakis`(G[, a]) | Design 2 filters with the Papadakis construction (tight frame). |
**Low pass filters**
| `Heat`(G[, tau, normalize]) | Design a filter bank of heat kernels. |
| `Expwin`(G[, bmax, a]) | Design an exponential window filter. |
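As an illustration of spectral low-pass filtering, a heat kernel \(g(\lambda) = \exp(-\tau \lambda / \lambda_{max})\) can be applied in the Fourier domain (a numpy sketch, not the pygsp API; the \(1/\lambda_{max}\) normalization of the kernel is an assumption):

```python
import numpy as np

# Path graph on 3 nodes.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
e, U = np.linalg.eigh(L)          # graph Fourier basis

tau = 5.0
g = np.exp(-tau * e / e.max())    # heat kernel evaluated at the eigenvalues

s = np.array([0., 1., 0.])        # a delta centered on the middle node
s_filt = U @ (g * (U.T @ s))      # U g(Lambda) U^T s: diffuse the delta
```

Since \(g(0) = 1\), the filter preserves the total mass of the signal while smoothing it over the graph.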
##### Approximations[¶](#approximations)
Moreover, two approximation methods are provided for fast filtering. The computational complexity of filtering with those approximations is linear with the number of edges. The complexity of the exact solution, which is to use the Fourier basis, is quadratic with the number of nodes (without taking into account the cost of the necessary eigendecomposition of the graph Laplacian).
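The idea behind the Chebyshev approximation can be sketched with plain numpy: expand the kernel in Chebyshev polynomials on \([0, \lambda_{max}]\), then apply the three-term recurrence to the rescaled Laplacian, so that filtering only costs matrix-vector products (a sketch, not the pygsp API):

```python
import numpy as np

# Path graph on 3 nodes.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
lmax = np.linalg.eigvalsh(L).max()

g = lambda x: np.exp(-x)                      # kernel to approximate
K = 20                                        # polynomial order
n = np.arange(K + 1)
theta = np.pi * (n + 0.5) / (K + 1)           # Chebyshev nodes on [-1, 1]
vals = g(lmax * (np.cos(theta) + 1) / 2)      # kernel at the mapped nodes
c = np.array([2.0 / (K + 1) * np.sum(vals * np.cos(k * theta))
              for k in range(K + 1)])         # expansion coefficients

s = np.array([0., 1., 0.])                    # signal to filter
Lt = (2.0 / lmax) * L - np.eye(3)             # spectrum rescaled to [-1, 1]
T0, T1 = s, Lt @ s                            # three-term recurrence
out = 0.5 * c[0] * T0 + c[1] * T1
for k in range(2, K + 1):
    T0, T1 = T1, 2.0 * (Lt @ T1) - T0
    out = out + c[k] * T1

# Compare against exact spectral filtering.
e, U = np.linalg.eigh(L)
exact = U @ (g(e) * (U.T @ s))
```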
**Chebyshev polynomials**
| `compute_cheby_coeff`(f[, m, N]) | Compute Chebyshev coefficients for a Filterbank. |
| `compute_jackson_cheby_coeff`(filter_bounds, …) | To compute the m+1 coefficients of the polynomial approximation of an ideal band-pass between a and b, between a range of values defined by lambda_min and lambda_max. |
| `cheby_op`(G, c, signal, \*\*kwargs) | Chebyshev polynomial of graph Laplacian applied to vector. |
| `cheby_rect`(G, bounds, signal, \*\*kwargs) | Fast filtering using Chebyshev polynomial for a perfect rectangle filter. |
**Lanczos algorithm**
| `lanczos`(A, order, x) | TODO short description |
| `lanczos_op`(f, s[, order]) | Perform the lanczos approximation of the signal s. |
#### Plotting[¶](#module-pygsp.plotting)
The [`pygsp.plotting`](#module-pygsp.plotting) module implements functionality to plot PyGSP objects with a [pyqtgraph](http://www.pyqtgraph.org) or [matplotlib](https://matplotlib.org) drawing backend (which can be controlled by the
[`BACKEND`](#pygsp.plotting.BACKEND) constant or individually for each plotting call):
* graphs from [`pygsp.graphs`](index.html#module-pygsp.graphs) with `plot_graph()`,
`plot_spectrogram()`, and `plot_signal()`,
* filters from [`pygsp.filters`](index.html#module-pygsp.filters) with `plot_filter()`.
`pygsp.plotting.``BACKEND`[¶](#pygsp.plotting.BACKEND)
Indicates which drawing backend to use if none are provided to the plotting functions. Should be either ‘matplotlib’ or ‘pyqtgraph’. In general pyqtgraph is better for interactive exploration while matplotlib is better at generating figures to be included in papers or elsewhere.
#### Reduction[¶](#module-pygsp.reduction)
The [`pygsp.reduction`](#module-pygsp.reduction) module implements functionalities for the reduction of graphs’ vertex set while keeping the graph structure.
| `tree_multiresolution`(G, Nlevel[, …]) | Compute a multiresolution of trees |
| `graph_multiresolution`(G, levels[, sparsify, …]) | Compute a pyramid of graphs (by Kron reduction). |
| `kron_reduction`(G, ind) | Compute the Kron reduction. |
| `pyramid_analysis`(Gs, f, \*\*kwargs) | Compute the graph pyramid transform coefficients. |
| `pyramid_synthesis`(Gs, cap, pe[, order]) | Synthesize a signal from its pyramid coefficients. |
| `interpolate`(G, f_subsampled, keep_inds[, …]) | Interpolate a graph signal. |
| `graph_sparsify`(M, epsilon[, maxiter]) | Sparsify a graph (with Spielman-Srivastava). |
#### Features[¶](#module-pygsp.features)
The [`pygsp.features`](#module-pygsp.features) module implements different feature extraction techniques based on [`pygsp.graphs`](index.html#module-pygsp.graphs) and [`pygsp.filters`](index.html#module-pygsp.filters).
#### Optimization[¶](#module-pygsp.optimization)
The [`pygsp.optimization`](#module-pygsp.optimization) module provides tools for convex optimization on graphs.
#### Utilities[¶](#module-pygsp.utils)
The [`pygsp.utils`](#module-pygsp.utils) module implements some utility functions used throughout the package.
### Contributing[¶](#contributing)
Contributions are welcome, and they are greatly appreciated! The development of this package takes place on [GitHub](https://github.com/epfl-lts2/pygsp).
Issues, bugs, and feature requests should be reported [there](https://github.com/epfl-lts2/pygsp/issues).
Code and documentation can be improved by submitting a [pull request](https://github.com/epfl-lts2/pygsp/pulls). Please add documentation and tests for any new code.
The package can be set up (ideally in a virtual environment) for local development with the following:
```
$ git clone https://github.com/epfl-lts2/pygsp.git
$ pip install -U -e pygsp[alldeps,test,doc,pkg]
```
You can improve or add functionality in the `pygsp` folder, along with corresponding unit tests in `pygsp/tests/test_*.py` (with reasonable coverage) and documentation in `doc/reference/*.rst`. If you have a nice example to demonstrate the use of the introduced functionality, please consider adding a tutorial in `doc/tutorials`.
Do not forget to update `README.rst` and `doc/history.rst` with e.g. new features. The version number needs to be updated in `setup.py` and
`pygsp/__init__.py`.
After making any change, please check the style, run the tests, and build the documentation with the following (enforced by Travis CI):
```
$ make lint
$ make test
$ make doc
```
Check the generated coverage report at `htmlcov/index.html` to make sure the tests reasonably cover the changes you’ve introduced.
#### Making a release[¶](#making-a-release)
1. Update the version number and release date in `setup.py`,
`pygsp/__init__.py` and `doc/history.rst`.
2. Create a git tag with `git tag -a v0.5.0 -m "PyGSP v0.5.0"`.
3. Push the tag to GitHub with `git push github v0.5.0`. The tag should now appear in the releases and tags tab.
4. [Create a release](https://github.com/epfl-lts2/pygsp/releases/new) on GitHub and select the created tag. A DOI should then be issued by Zenodo.
5. Go on Zenodo and fix the metadata if necessary.
6. Build the distribution with `make dist` and check that the
`dist/PyGSP-0.5.0.tar.gz` source archive contains all required files. The binary wheel should be found as `dist/PyGSP-0.5.0-py2.py3-none-any.whl`.
7. Test the upload and installation process:
```
$ twine upload --repository-url https://test.pypi.org/legacy/ dist/*
$ pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple pygsp
```
Log in as the LTS2 user.
8. Build and upload the distribution to the real PyPI with `make release`.
#### Repository organization[¶](#repository-organization)
```
LICENSE.txt Project license
*.rst Important documentation
Makefile Targets for make
setup.py Meta information about package (published on PyPI)
.gitignore Files ignored by the git revision control system
.travis.yml Defines testing on Travis continuous integration
pygsp/ Contains the modules (the actual toolbox implementation)
__init__.py Load modules at package import
*.py One file per module
pygsp/tests/ Contains the test suites (will be distributed to end user)
__init__.py Load modules at package import
test_*.py One test suite per module
test_docstrings.py Test the examples in the docstrings (reference doc)
test_tutorials.py Test the tutorials in doc/tutorials
test_all.py Launch all the tests (docstrings, tutorials, modules)
doc/ Package documentation
conf.py Sphinx configuration
index.rst Documentation entry page
*.rst Include doc files from root directory
doc/reference/ Reference documentation
index.rst Reference entry page
*.rst Only directives, the actual doc is alongside the code
doc/tutorials/
index.rst Tutorials entry page
*.rst One file per tutorial
```
### History[¶](#history)
#### 0.5.1 (2017-12-15)[¶](#id1)
The focus of this release was to ease installation by not requiring non-standard scientific Python packages to be installed.
It was mostly a maintenance release. A conda package is now available in conda-forge. Moreover, the package can now be tried online thanks to binder.
The core functionality of this package only depends on numpy and scipy.
Dependencies which are only required for particular usages are included in the alldeps extra dependency list. The alldeps list allows users to install dependencies to enable all the features. Finally, those optional packages are only loaded when needed, not when the PyGSP is imported. A nice side-effect is that importing the PyGSP is now much faster!
The following packages were made optional dependencies:
* scikit-image, as it is only used to build patch graphs from images. The problem was that scikit-image does not provide a wheel for Windows and its build is painful and error-prone. Moreover, scikit-image has a lot of dependencies.
* pyqtgraph, PyQt5 / PySide and PyOpenGL, as they are only used for interactive visualization, which not many users need. The problem was that pyqtgraph requires (via PyQt5, PySide, PyOpenGL) OpenGL (libGL.so) to be installed.
* matplotlib: while it is a standard package for any scientific or data science workflow, it’s not necessary for users who only want to process data without plotting graphs, signals and filters.
* pyflann, as it is only used for approximate kNN. The problem was that the source distribution would not build for Windows. On conda-forge, (py)flann is not built for Windows either.
Moreover, matplotlib is now the default drawing backend. It’s well integrated with the Jupyter environment for scientific and data science workflows, and most use cases do not require an interactive visualization. The pyqtgraph backend is still available for interactivity.
#### 0.5.0 (2017-10-06)[¶](#id2)
* Generalized the analysis and synthesis methods into the filter method.
* Signals are now rank-3 tensors of N_NODES x N_SIGNALS x N_FEATURES.
* Filter.evaluate returns a 2D array instead of a list of vectors.
* The differential operator was integrated in the Graph class, as the Fourier basis and the Laplacian were already.
* Removed the operators package. Transforms and differential operators went to the Graph class, the localization operator to the Filter class. These are now easier to use. Reduction became its own module.
* Graph object uses properties to delay the computation (lazy evaluation) of the Fourier basis (G.U, G.e, G.mu), the estimation of the largest eigenvalue
(G.lmax) and the computation of the differential operator (G.D). A warning is issued if client code doesn’t trigger computations with G.compute_*.
* Approximations for filtering have been moved in the filters package.
* PointCloud object removed. Functionality integrated in Graph object.
* data_handling module merged into utils.
* Fourier basis computed with eigh instead of svd (faster).
* estimate_lmax uses Lanczos instead of Arnoldi (symmetric sparse).
* Add a seed parameter to all non-deterministic graphs and filters.
* Filter.Nf indicates the number of filters in the filter bank.
* Don’t check connectedness on graph creation (can take a lot of time).
* Erdos-Renyi now implemented as SBM with 1 block.
* Many bug fixes (e.g. Minnesota graph, Meyer filter bank, Heat filter, Mexican hat filter bank, Gabor filter bank).
* All GitHub issues fixed.
Plotting:
* Much better handling of plotting parameters.
* With matplotlib backend, plots are shown by default.
* Allows to set a default plotting backend as plotting.BACKEND = ‘pyqtgraph’.
* qtg_default=False becomes backend=’matplotlib’
* Added coordinates for path, ring, and randomring graphs.
* Set good default plotting parameters for most graphs.
* Allows to plot multiple filters in 1D with set_coordinates(‘line1D’).
* Allows to pass existing matplotlib axes to the plotting functions.
* Show colorbar with matplotlib.
* Allows to set a 3D view point.
* Eigenvalues shown as vertical lines instead of crosses.
* Vertices can be highlighted, e.g. to show where filters were localized.
Documentation:
* More comprehensive documentation. Notably math definitions for operators.
* Most filters and graphs are plotted in the API documentation.
* List all methods and models at the top with autosummary.
* Useful package and module-level documentation.
* Doctests don’t need to import numpy and the pygsp every time.
* Figures are automatically generated when building the documentation.
* Build on RTD with conda and matplotlib 2 (prettier plots).
* Intro and wavelets tutorials were updated.
* Reference guide is completely auto-generated from automodule.
* Added contribution guidelines.
* Documentation reorganization.
* Check that hyperlinks are valid.
Tests and infrastructure:
* Start test coverage analysis.
* Much more comprehensive tests. Coverage increased from 40% to 80%.
Many bugs were uncovered.
* Always test with virtual X framebuffer to avoid the opening of lots of windows.
* Tested on Python 2.7, 3.4, 3.5, 3.6.
* Clean configuration files.
* Not using tox anymore (too painful to maintain multiple Pythons locally).
* Sort out installation of dependencies. Plotting should now work right away.
* Completely migrate development on GitHub.
#### 0.4.2 (2017-04-27)[¶](#id3)
* Improve documentation.
* Various fixes.
#### 0.4.1 (2016-09-06)[¶](#id4)
* Added routines to compute coordinates for the graphs.
* Added fast filtering of ideal band-pass.
* Implemented graph spectrograms.
* Added the Barabási-Albert model for graphs.
* Renamed PointClouds features.
* Various fixes.
#### 0.4.0 (2016-06-17)[¶](#id5)
#### 0.3.3 (2016-01-27)[¶](#id6)
* Refactoring graphs using object-oriented programming and fail-safe checks.
* Refactoring filters to use only the Graph object used at the construction of the filter for all operations.
* Refactoring Graph pyramid to match MATLAB implementation.
* Removal of default coordinates (all vertices on the origin) for graphs that do not possess spatial meaning.
* Correction of minor issues on Python3+ imports.
* Various fixes.
* Finalizing demos for the documentation.
#### 0.3.2 (2016-01-14)[¶](#id7)
#### 0.3.1 (2016-01-12)[¶](#id8)
#### 0.3.0 (2015-12-01)[¶](#id9)
#### 0.2.1 (2015-10-20)[¶](#id10)
* Fix bug on pip installation.
* Update full documentation.
#### 0.2.0 (2015-10-12)[¶](#id11)
* Adding functionalities to match the content of the Matlab GSP Box.
* First release of the PyGSP.
#### 0.1.0 (2015-07-02)[¶](#id12)
* Main features of the box are present: most of the graphs and filters can be used.
* The utils and operators modules also have most of their features implemented.
Testpath 0.4.4 documentation
Testpath[¶](#testpath)
===
Testpath is a collection of utilities for testing code which uses and manipulates the filesystem and system commands.
Install it with:
```
pip install testpath
```
Contents:
Assertion functions for the filesystem[¶](#assertion-functions-for-the-filesystem)
---
These functions make it easy to check the state of files and directories.
When the assertion is not true, they provide informative error messages.
`testpath.``assert_path_exists`(*path*, *msg=None*)[¶](#testpath.assert_path_exists)
Assert that something exists at the given path.
`testpath.``assert_not_path_exists`(*path*, *msg=None*)[¶](#testpath.assert_not_path_exists)
Assert that nothing exists at the given path.
`testpath.``assert_isfile`(*path*, *follow_symlinks=True*, *msg=None*)[¶](#testpath.assert_isfile)
Assert that path exists and is a regular file.
With follow_symlinks=True, the default, this will pass if path is a symlink to a regular file. With follow_symlinks=False, it will fail in that case.
`testpath.``assert_not_isfile`(*path*, *follow_symlinks=True*, *msg=None*)[¶](#testpath.assert_not_isfile)
Assert that path exists but is not a regular file.
With follow_symlinks=True, the default, this will fail if path is a symlink to a regular file. With follow_symlinks=False, it will pass in that case.
`testpath.``assert_isdir`(*path*, *follow_symlinks=True*, *msg=None*)[¶](#testpath.assert_isdir)
Assert that path exists and is a directory.
With follow_symlinks=True, the default, this will pass if path is a symlink to a directory. With follow_symlinks=False, it will fail in that case.
`testpath.``assert_not_isdir`(*path*, *follow_symlinks=True*, *msg=None*)[¶](#testpath.assert_not_isdir)
Assert that path exists but is not a directory.
With follow_symlinks=True, the default, this will fail if path is a symlink to a directory. With follow_symlinks=False, it will pass in that case.
`testpath.``assert_islink`(*path*, *to=None*, *msg=None*)[¶](#testpath.assert_islink)
Assert that path exists and is a symlink.
If to is specified, also check that it is the target of the symlink.
`testpath.``assert_not_islink`(*path*, *msg=None*)[¶](#testpath.assert_not_islink)
Assert that path exists but is not a symlink.
### Unix specific[¶](#unix-specific)
New in version 0.4.
These additional functions test for special Unix filesystem objects: named pipes and Unix domain sockets. The functions can be used on all platforms, but these types of objects do not exist on Windows.
`testpath.``assert_ispipe`(*path*, *follow_symlinks=True*, *msg=None*)[¶](#testpath.assert_ispipe)
Assert that path exists and is a named pipe (FIFO).
With follow_symlinks=True, the default, this will pass if path is a symlink to a named pipe. With follow_symlinks=False, it will fail in that case.
`testpath.``assert_not_ispipe`(*path*, *follow_symlinks=True*, *msg=None*)[¶](#testpath.assert_not_ispipe)
Assert that path exists but is not a named pipe (FIFO).
With follow_symlinks=True, the default, this will fail if path is a symlink to a named pipe. With follow_symlinks=False, it will pass in that case.
`testpath.``assert_issocket`(*path*, *follow_symlinks=True*, *msg=None*)[¶](#testpath.assert_issocket)
Assert that path exists and is a Unix domain socket.
With follow_symlinks=True, the default, this will pass if path is a symlink to a Unix domain socket. With follow_symlinks=False, it will fail in that case.
`testpath.``assert_not_issocket`(*path*, *follow_symlinks=True*, *msg=None*)[¶](#testpath.assert_not_issocket)
Assert that path exists but is not a Unix domain socket.
With follow_symlinks=True, the default, this will fail if path is a symlink to a Unix domain socket. With follow_symlinks=False, it will pass in that case.
Mocking system commands[¶](#mocking-system-commands)
---
Mocking is a technique to replace parts of a system with interfaces that don’t do anything, but which your tests can check whether and how they were called.
The [`unittest.mock`](https://docs.python.org/3/library/unittest.mock.html#module-unittest.mock) module in Python 3 lets you mock Python functions and classes. The tools described here let you mock external commands.
Commands are mocked by creating a real file in a temporary directory which is added to the `PATH` environment variable, not by replacing Python functions. So if you mock `git`, and your Python code runs a shell script which calls `git`, it will be the mocked command that it runs.
By default, mocked commands record each call made to them, so that your test can check these. Using the [`MockCommand`](#testpath.MockCommand) API, you can change what a mocked command does.
Note
Mocking a command affects all running threads or coroutines in a program.
There’s no way to mock a command for only the current thread/coroutine,
because it uses environment variables, which are global.
`testpath.``assert_calls`(*cmd*, *args=None*)[¶](#testpath.assert_calls)
Assert that a block of code runs the given command.
If args is passed, also check that it was called at least once with the given arguments (not including the command name).
Use as a context manager, e.g.:
```
with assert_calls('git'):
some_function_wrapping_git()
with assert_calls('git', ['add', myfile]):
some_other_function()
```
*class* `testpath.``MockCommand`(*name*, *content=None*, *python=''*)[¶](#testpath.MockCommand)
Context manager to mock a system command.
The mock command will be written to a directory at the front of $PATH,
taking precedence over any existing command with the same name.
The *python* parameter accepts a string of code for the command to run,
in addition to the default behaviour of recording calls to the command.
This will run with the same Python interpreter as the calling code, but in a new process.
The *content* parameter gives extra control, by providing a script which will run with no additions. On Unix, it should start with a shebang (e.g.
`#!/usr/bin/env python`) specifying the interpreter. On Windows, it will always be run by the same Python interpreter as the calling code.
Calls to the command will not be recorded when content is specified.
*classmethod* `fixed_output`(*name*, *stdout=''*, *stderr=''*, *exit_status=0*)[¶](#testpath.MockCommand.fixed_output)
Make a mock command, producing fixed output when it is run:
```
t = 'Sat 24 Apr 17:11:58 BST 2021\n'
with MockCommand.fixed_output('date', t) as mock_date:
...
```
The stdout & stderr strings will be written to the respective streams,
and the process will exit with the specified numeric status (the default of 0 indicates success).
This works with the recording mechanism, so you can check what arguments this command was called with.
`get_calls`()[¶](#testpath.MockCommand.get_calls)
Get a list of calls made to this mocked command.
For each time the command was run, the list will contain a dictionary with keys argv, env and cwd.
This won’t work if you used the *content* parameter to alter what the mocked command does.
`assert_called`(*args=None*)[¶](#testpath.MockCommand.assert_called)
Assert that the mock command has been called at least once.
If args is passed, also check that it was called at least once with the given arguments (not including the command name), e.g.:
```
with MockCommand('rsync') as mock_rsync:
function_to_test()
mock_rsync.assert_called(['/var/log', 'backup-server:logs'])
```
This won’t work if you used the *content* parameter to alter what the mocked command does.
Modifying environment variables[¶](#modifying-environment-variables)
---
These functions allow you to temporarily modify the environment variables, which is often useful for testing code that calls other processes.
`testpath.``modified_env`(*changes*, *snapshot=True*)[¶](#testpath.modified_env)
Temporarily modify environment variables.
Specify the changes as a dictionary mapping names to new values, using None as the value for names that should be deleted.
Example use:
```
with modified_env({'SHELL': 'bash', 'PYTHONPATH': None}):
...
```
When the context exits, there are two possible ways to restore the environment. If *snapshot* is True, the default, it will reset the whole environment to its state when the context was entered. If *snapshot* is False, it will restore only the specific variables it modified, leaving any changes made to other environment variables in the context.
`testpath.``temporary_env`(*newenv*)[¶](#testpath.temporary_env)
Completely replace the environment variables with the specified dict.
Use as a context manager:
```
with temporary_env({'PATH': my_path}):
...
```
`testpath.``make_env_restorer`()[¶](#testpath.make_env_restorer)
Snapshot the current environment, return a function to restore that.
This is intended to produce cleanup functions for tests. For example,
using the [`unittest.TestCase`](https://docs.python.org/3/library/unittest.html#unittest.TestCase) API:
```
def setUp(self):
self.addCleanup(testpath.make_env_restorer())
```
Any changes a test makes to the environment variables will be wiped out before the next test is run.
Utilities for temporary directories[¶](#module-testpath.tempdir)
---
The [`testpath.tempdir`](#module-testpath.tempdir) module contains a couple of utilities for working with temporary directories:
*class* `testpath.tempdir.``NamedFileInTemporaryDirectory`(*filename*, *mode='w+b'*, *bufsize=-1*, ***kwds*)[¶](#testpath.tempdir.NamedFileInTemporaryDirectory)
Open a file named filename in a temporary directory.
This context manager is preferred over `tempfile.NamedTemporaryFile`
when one needs to reopen the file, because on Windows only one handle on a file can be open at a time. You can close the returned handle explicitly inside the context without deleting the file, and the context manager will delete the whole directory when it exits.
Arguments mode and bufsize are passed to open.
Rest of the arguments are passed to TemporaryDirectory.
Usage example:
```
with NamedFileInTemporaryDirectory('myfile', 'wb') as f:
f.write('stuff')
f.close()
# You can now pass f.name to things that will re-open the file
```
*class* `testpath.tempdir.``TemporaryWorkingDirectory`(*suffix=None*, *prefix=None*, *dir=None*)[¶](#testpath.tempdir.TemporaryWorkingDirectory)
Creates a temporary directory and sets the cwd to that directory.
Automatically reverts to previous cwd upon cleanup.
Usage example:
```
with TemporaryWorkingDirectory() as tmpdir:
...
```
Release notes[¶](#release-notes)
---
### 0.6[¶](#id1)
February 2022
* Removed some code that’s unused since dropping Python 2 support.
* Relax the version constraint for the `flit_core` build requirement.
### 0.5[¶](#id2)
May 2021
* Easier ways to use [`MockCommand`](index.html#testpath.MockCommand) to customise mocked commands,
including `python=` to specify extra code to run,
[`fixed_output()`](index.html#testpath.MockCommand.fixed_output), and [`assert_called()`](index.html#testpath.MockCommand.assert_called).
* Command mocking will use [`os.defpath`](https://docs.python.org/3/library/os.html#os.defpath) as the initial PATH if the PATH environment variable is not set.
Package ‘modopt.matlab’
October 13, 2022
Type Package
Title 'MatLab'-Style Modeling of Optimization Problems
Version 1.0-2
Date 2018-08-17
Maintainer <NAME> <<EMAIL>>
Description 'MatLab'-Style Modeling of Optimization Problems with 'R'. This package provides a set of convenience functions to transform a 'MatLab'-style optimization modeling structure to its 'ROI' equivalent.
Depends R (>= 3.4), ROI, ROI.plugin.glpk, ROI.plugin.quadprog
License MIT + file LICENSE
URL http://www.finance-r.com/
RoxygenNote 6.1.0
NeedsCompilation no
Author <NAME> [aut, cre]
Repository CRAN
Date/Publication 2018-08-18 16:00:03 UTC
R topics documented:
modopt.matlab-package
intlinprog
linprog
quadprog
modopt.matlab-package MatLab(R)-style Optimization Modeling in R using ROI
Description
’MatLab’-Style Modeling of Optimization Problems with ’R’. This package provides a set of convenience functions to transform a ’MatLab’-style optimization modeling structure to its ’ROI’ equivalent.
Author(s)
<NAME>, <<EMAIL>>
References
http://www.finance-r.com/
See Also
Useful links:
• http://www.finance-r.com/
intlinprog MatLab(R)-style Mixed Integer Linear Programming in R using ROI
Description
intlinprog provides a simple interface to ROI using the optimization model specification of MatLab(R):
minimize in x: f’*x
subject to: A*x <= b
Aeq*x == beq
x >= lb
x <= ub
Usage
intlinprog(f, intcon = NULL, A = NULL, b = NULL, Aeq = NULL,
beq = NULL, lb = NULL, ub = NULL, x0 = NULL, options = NULL)
Arguments
f Linear term (vector) of the objective function
intcon Vector of which variables are integer
A Inequality constraints (left-hand side)
b Inequality constraints (right-hand side)
Aeq Equality constraints (left-hand side)
beq Equality constraints (right-hand side)
lb Lower bound
ub Upper bound
x0 Initial solution
options Additional optimization parameters
Value
The solution vector in x as well as the objective value in fval.
Author(s)
<NAME>, <<EMAIL>>
Examples
# minimize 8x1 + x2
# subject to
# x1 + 2x2 >= -14
# -4x1 - 1x2 <= -33
# 2x1 + x2 <= 20
# x1, x2 integer
f <- c(8, 1)
A <- matrix(c(-1, -2, -4, -1, 2, 1), nrow=3, byrow=TRUE)
b <- c(14, -33, 20)
sol <- intlinprog(f, c(1, 2), A, b)
sol <- intlinprog(f, NULL, A, b)
sol$x
linprog MatLab(R)-style Linear Programming in R using ROI
Description
linprog provides a simple interface to ROI using the optimization model specification of MatLab(R):
minimize in x: f’*x
subject to: A*x <= b
Aeq*x == beq
x >= lb
x <= ub
Usage
linprog(f, A = NULL, b = NULL, Aeq = NULL, beq = NULL, lb = NULL,
ub = NULL, x0 = NULL, options = NULL)
Arguments
f Linear term (vector) of the objective function
A Inequality constraints (left-hand side)
b Inequality constraints (right-hand side)
Aeq Equality constraints (left-hand side)
beq Equality constraints (right-hand side)
lb Lower bound
ub Upper bound
x0 Initial solution
options Additional optimization parameters
Value
The solution vector in x as well as the objective value in fval.
Author(s)
<NAME>, <<EMAIL>>
Examples
# maximize: 2x1 + x2
# subject to:
# x1 + x2 <= 5
# x1 <= 3
# x1 >= 0, x2 >= 0
f <- c(2, 1)
A <- matrix(c(1, 1, 1, 0), nrow=2, byrow=TRUE)
b <- c(5, 3)
sol <- linprog(-f, A, b)
sol$x
quadprog MatLab(R)-style Quadratic Programming in R using ROI
Description
quadprog provides a simple interface to ROI using the optimization model specification of MatLab(R):
minimize in x: f’*x + 0.5*x’*H*x
subject to: A*x <= b
Aeq*x == beq
x >= lb
x <= ub
Usage
quadprog(H, f, A = NULL, b = NULL, Aeq = NULL, beq = NULL,
lb = NULL, ub = NULL, x0 = NULL, options = NULL)
Arguments
H Quadratic term (matrix) of the objective function
f Linear term (vector) of the objective function
A Inequality constraints (left-hand side)
b Inequality constraints (right-hand side)
Aeq Equality constraints (left-hand side)
beq Equality constraints (right-hand side)
lb Lower bound
ub Upper bound
x0 Initial solution
options Additional optimization parameters
Value
The solution vector in x as well as the objective value in fval.
Author(s)
<NAME>, <<EMAIL>>
Examples
# Covariance matrix of four stocks (weekly returns from 2011):
#
# AAPL IBM MSFT ORCL
# AAPL 0.0014708114 0.0006940036 0.0006720841 0.0008276391
# IBM 0.0006940036 0.0009643581 0.0006239411 0.0011266429
# MSFT 0.0006720841 0.0006239411 0.0009387707 0.0008728736
# ORCL 0.0008276391 0.0011266429 0.0008728736 0.0021489512
covariance = matrix(c(0.0014708114, 0.0006940036, 0.0006720841, 0.0008276391,
0.0006940036, 0.0009643581, 0.0006239411, 0.0011266429,
0.0006720841, 0.0006239411, 0.0009387707, 0.0008728736,
0.0008276391, 0.0011266429, 0.0008728736, 0.0021489512),
nrow=4, byrow=TRUE)
assets <- dim(covariance)[1]
H <- covariance
f <- rep(0, assets)
Aeq <- rep(1, assets)
beq <- 1
lb <- rep(0, assets)
ub <- rep(1, assets)
solution <- quadprog(H, f, NULL, NULL, Aeq, beq, lb, ub)
portfolio <- solution$x
print(portfolio)
Crate hybridfutex
===
HybridFutex
---



`HybridFutex` is a Rust library that provides a synchronization primitive that allows threads to wait for a notification from another thread. It is designed to be low-overhead and scalable and supports both synchronous and asynchronous waiting and notification.
### Features
* Support for notify-many operations on any target.
* Support for synchronous and asynchronous waiting.
* Built-in fairness and scalability for high-contention scenarios.
* Efficient kernel-assisted blocking using park API.
* Low-latency waiting and notification on both Windows and Unix platforms.
* Cross-platform compatibility with no external dependencies.
* Simple and easy-to-use API.
### Usage
To use `HybridFutex`, simply add it as a dependency in your `Cargo.toml` file:
```
[dependencies]
hybrid-futex = "0.1"
```
Then, you can use `HybridFutex` in your Rust code as follows:
```
use std::sync::Arc;
use std::thread;
use std::time::Duration;
use hybridfutex::HybridFutex;
let wait_queue = Arc::new(HybridFutex::new());
let wait_queue_clone = wait_queue.clone();
// Spawn a thread that waits for a notification from another thread
let handle = thread::spawn(move || {
println!("Thread 1 is waiting");
wait_queue_clone.wait_sync();
println!("Thread 1 is notified");
});
// Wait for a short time before notifying the other thread
thread::sleep(Duration::from_millis(100));
// Notify the other thread
wait_queue.notify_one();
// Wait for the other thread to finish
handle.join().unwrap();
```
### License
This project is licensed under the MIT License - see the LICENSE file for details.
### Acknowledgments
* This library is inspired by the futex system-call in Linux.
Structs
---
* HybridFutexA HybridFutex is a synchronization primitive that allows threads to wait for a notification from another thread. The HybridFutex maintains a counter that represents the number of waiters, and a queue of waiters. The counter is incremented when a thread calls `wait_sync` or `wait_async` methods, and decremented when a thread calls `notify_one` or `notify_many` methods.
A thread calling `wait_sync` or `wait_async` is blocked until it is notified by another thread calling `notify_one` or `notify_many`.
* WaitFutureA future representing a thread that is waiting for a notification from another thread using a `HybridFutex` synchronization primitive.
Struct hybridfutex::HybridFutex
===
```
pub struct HybridFutex { /* private fields */ }
```
A HybridFutex is a synchronization primitive that allows threads to wait for a notification from another thread. The HybridFutex maintains a counter that represents the number of waiters, and a queue of waiters. The counter is incremented when a thread calls `wait_sync` or `wait_async` methods, and decremented when a thread calls `notify_one` or `notify_many` methods.
A thread calling `wait_sync` or `wait_async` is blocked until it is notified by another thread calling `notify_one` or `notify_many`.
Examples
---
```
use std::sync::Arc;
use std::thread;
use std::time::Duration;
use hybridfutex::HybridFutex;
let wait_queue = Arc::new(HybridFutex::new());
let wait_queue_clone = wait_queue.clone();
// Spawn a thread that waits for a notification from another thread
let handle = thread::spawn(move || {
println!("Thread 1 is waiting");
wait_queue_clone.wait_sync();
println!("Thread 1 is notified");
});
// Wait for a short time before notifying the other thread
thread::sleep(Duration::from_millis(100));
// Notify the other thread
wait_queue.notify_one();
// Wait for the other thread to finish
handle.join().unwrap();
```
Implementations
---
### impl HybridFutex
#### pub fn new() -> Self
Creates a new HybridFutex with an initial counter of 0 and an empty queue of waiters.
##### Examples
```
use hybridfutex::HybridFutex;
let wait_queue = HybridFutex::new();
```
#### pub fn get_counter(&self) -> isize
Returns the current value of the counter of this HybridFutex.
##### Examples
```
use hybridfutex::HybridFutex;
let wait_queue = HybridFutex::new();
assert_eq!(wait_queue.get_counter(), 0);
```
#### pub fn wait_sync(&self)
Blocks the current thread until it is notified by another thread using the `notify_one` or `notify_many` method. The method increments the counter of the HybridFutex to indicate that the current thread is waiting. If the counter is already negative, the method does not block the thread and immediately returns.
##### Examples
```
use std::sync::Arc;
use std::thread;
use std::time::Duration;
use hybridfutex::HybridFutex;
let wait_queue = Arc::new(HybridFutex::new());
let wait_queue_clone = wait_queue.clone();
// Spawn a thread that waits for a notification from another thread
let handle = thread::spawn(move || {
println!("Thread 1 is waiting");
wait_queue_clone.wait_sync();
println!("Thread 1 is notified");
});
// Wait for a short time before notifying the other thread
thread::sleep(Duration::from_millis(100));
// Notify the other thread
wait_queue.notify_one();
// Wait for the other thread to finish
handle.join().unwrap();
```
#### pub fn wait_async(&self) -> WaitFuture<'_>
Returns a `WaitFuture` that represents a future that resolves when the current thread is notified by another thread using the `notify_one` or `notify_many` method. The method increments the counter of the HybridFutex to indicate that the current thread is waiting.
If the counter is already negative, the future immediately resolves.
##### Examples
```
use std::sync::Arc;
use std::thread;
use std::time::Duration;
use hybridfutex::HybridFutex;
use futures::executor::block_on;
let wait_queue = Arc::new(HybridFutex::new());
// Spawn a thread that waits for a notification from another thread
let wqc = wait_queue.clone();
let handle = thread::spawn(move || {
let fut = wqc.wait_async();
let _ = block_on(fut);
println!("Thread 1 is notified");
});
// Wait for a short time before notifying the other thread
thread::sleep(Duration::from_millis(100));
// Notify the other thread
wait_queue.notify_one();
// Wait for the other thread to finish
handle.join().unwrap();
```
#### pub fn notify_one(&self)
Notifies one waiting thread that is waiting on this HybridFutex using the `wait_sync` or `wait_async` method. If there are no threads currently waiting, this call indirectly notifies a future call to
`wait_sync` or `wait_async` using the internal counter.
##### Examples
```
use std::sync::Arc;
use std::thread;
use std::time::Duration;
use hybridfutex::HybridFutex;
let wait_queue = Arc::new(HybridFutex::new());
let wait_queue_clone = wait_queue.clone();
// Spawn a thread that waits for a notification from another thread
let handle = thread::spawn(move || {
println!("Thread 1 is waiting");
wait_queue_clone.wait_sync();
println!("Thread 1 is notified");
});
// Wait for a short time before notifying the other thread
thread::sleep(Duration::from_millis(100));
// Notify the other thread
wait_queue.notify_one();
// Wait for the other thread to finish
handle.join().unwrap();
```
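The counter arithmetic described above can be illustrated with a small stdlib-only sketch. These are semantics inferred from this documentation, not the crate’s actual code, and `CounterSketch` is a hypothetical type: notifying with no waiters drives the counter negative, and a later waiter consumes that stored notification without blocking.
```rust
use std::sync::atomic::{AtomicIsize, Ordering};
// Sketch of the counter protocol: notify decrements, wait increments.
// A negative counter records notifications that arrived before any waiter.
struct CounterSketch {
counter: AtomicIsize,
}
impl CounterSketch {
/// Returns true if a real waiter must be woken.
fn notify_one(&self) -> bool {
self.counter.fetch_sub(1, Ordering::AcqRel) > 0
}
/// Returns true if the caller would have to block,
/// false if a stored notification was consumed.
fn try_wait(&self) -> bool {
self.counter.fetch_add(1, Ordering::AcqRel) >= 0
}
}
fn main() {
let s = CounterSketch { counter: AtomicIsize::new(0) };
assert!(!s.notify_one()); // no waiter: notification stored (counter = -1)
assert!(!s.try_wait()); // stored notification consumed (counter = 0)
assert!(s.try_wait()); // nothing stored: this caller would block
println!("ok");
}
```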
#### pub fn notify_many(&self, count: usize)
Notifies a specified number of waiting threads that are waiting on this HybridFutex using the `wait_sync` or `wait_async` method. If there are fewer waiting threads than the provided count, it indirectly notifies future calls to `wait_sync` and `wait_async` using the internal counter.
##### Examples
```
use std::sync::Arc;
use std::thread;
use std::time::Duration;
use hybridfutex::HybridFutex;
let wait_queue = Arc::new(HybridFutex::new());
// Spawn multiple threads that wait for a notification from another thread
let handles: Vec<_> = (0..3).map(|i| {
let wait_queue_clone = wait_queue.clone();
thread::spawn(move || {
println!("Thread {} is waiting", i);
wait_queue_clone.wait_sync();
println!("Thread {} is notified", i);
})
}).collect();
// Wait for a short time before notifying the threads
thread::sleep(Duration::from_millis(100));
// Notify two threads
wait_queue.notify_many(2);
// Notify single thread
wait_queue.notify_one();
// Wait for the other threads to finish
for handle in handles {
handle.join().unwrap();
}
```
Trait Implementations
---
### impl Debug for HybridFutex
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for HybridFutex
#### fn default() -> Self
Returns the “default value” for a type.
Auto Trait Implementations
---
### impl RefUnwindSafe for HybridFutex
### impl Send for HybridFutex
### impl Sync for HybridFutex
### impl Unpin for HybridFutex
### impl !UnwindSafe for HybridFutex
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct hybridfutex::WaitFuture
===
```
pub struct WaitFuture<'a> { /* private fields */ }
```
A future representing a thread that is waiting for a notification from another thread using a `HybridFutex` synchronization primitive.
Trait Implementations
---
### impl<'a> Debug for WaitFuture<'a>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<'a> Drop for WaitFuture<'a>
#### fn drop(&mut self)
Drops the future, checking whether it has been polled before and panicking if it has not. This is to prevent potential memory leaks if the future is dropped before being polled.
### impl<'a> Future for WaitFuture<'a>
#### fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()>
Polls the future, returning `Poll::Pending` if the future is still waiting for a notification, and `Poll::Ready(())` if the future has been notified.
If the future has not yet been polled, this method increments the counter of the `HybridFutex` that the future is waiting on to indicate that the current thread is waiting. If the counter is already negative, the future immediately resolves and returns `Poll::Ready(())`. Otherwise, the method pushes a new `AsyncWaiter` onto the queue of waiters for the `HybridFutex`,
and returns `Poll::Pending`.
If the future has already been polled and the value of the `state` field is
`1`, this method simply returns `Poll::Pending` without modifying the state or the queue of waiters.
If the future has already been notified and the value of the `state` field is `!0`, this method returns `Poll::Ready(())` without modifying the state or the queue of waiters.
#### type Output = ()
The type of value produced on completion.
Auto Trait Implementations
---
### impl<'a> RefUnwindSafe for WaitFuture<'a>
### impl<'a> Send for WaitFuture<'a>
### impl<'a> Sync for WaitFuture<'a>
### impl<'a> Unpin for WaitFuture<'a>
### impl<'a> UnwindSafe for WaitFuture<'a>
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>

#### fn into(self) -> U

Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<F> IntoFuture for F where F: Future

#### type Output = <F as Future>::Output

The output that the future will produce on completion.

#### type IntoFuture = F

Which kind of future are we turning this into?

#### fn into_future(self) -> <F as IntoFuture>::IntoFuture

Creates a future from a value.
### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Package ‘HiResTEC’
February 22, 2023
Type Package
Title Non-Targeted Fluxomics on High-Resolution Mass-Spectrometry Data
Version 0.62
Date 2023-02-22
Maintainer <NAME> <<EMAIL>>
Description Identifying labeled compounds in a 13C-tracer experiment in
non-targeted fashion is a cumbersome process. This package facilitates
such type of analyses by providing high level quality control plots,
deconvoluting and evaluating spectra and performing a multitude of
tests in an automatic fashion. The main idea is to use changing
intensity ratios of ion pairs from peak list generated with 'xcms' as
candidates and evaluate those against base peak chromatograms and
spectra information within the raw measurement data automatically.
The functionality is described in Hoffmann et al. (2018)
<doi:10.1021/acs.analchem.8b00356>.
License GPL-3
URL https://CRAN.R-project.org/package=HiResTEC
Depends R (>= 2.10.0)
Imports beeswarm, CorMID, InterpretMSSpectrum, openxlsx, plyr
biocViews
Encoding UTF-8
LazyData TRUE
RoxygenNote 7.2.3
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [aut]
Repository CRAN
Date/Publication 2023-02-22 14:40:12 UTC
R topics documented:
DeconvoluteSpectrum
EvaluateCandidateListAgainstRawData
EvaluatePairsFromXCMSSet
GenerateCandXLSX
GenerateQCPlots
getMultipleBPC
mz_shift_corrector
plotBPC
plotMID
res_list
sam
xcms_cand
DeconvoluteSpectrum DeconvoluteSpectrum.
Description
DeconvoluteSpectrum will evaluate a list of xcmsRaw objects at a given time (rt) and potential
mass (mz1). The main purpose is to deconvolute the mass spectrum at rt including mz1.
Usage
DeconvoluteSpectrum(
dat = NULL,
rt = NULL,
rt_dev = 3,
mz1 = NULL,
mz_dev = 0.003,
use.mz.adjust = FALSE,
ionization = c("APCI", "ESI")[1],
smooth = 0
)
Arguments
dat A list of xcmsRaws or an xcmsSet object.
rt Retention time to search for maxima.
rt_dev Allowed retention time window.
mz1 If specified, ensure that this mass is included in the spectrum (assumed base
peak). NULL otherwise.
mz_dev Allowed mz deviation [Da].
use.mz.adjust Will adjust mz on an experiment wide basis.
ionization Either APCI or ESI. Choice will modify some internal parameters and checks
performed.
smooth Smoothing parameter passed on to getMultipleBPC.
Details
Will test all mz in the spectrum of the base peak within range for co-apex, rt difference and ratio consistency/correlation over a set of samples.
Value
A pseudo spectrum at rt (containing mz1 if specified). Effectively a 2-column matrix (mz, int) with rt as attribute.
Examples
# Please use examples from previous versions as xcms (and xcms objects)
# are no longer supported during CRAN checks leading to package rejection
# if included (and I do not know a work around). :(
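Since the xcms-based example can no longer ship with the package, the co-apex and correlation test described above can still be illustrated. The package itself is R; the following is a minimal Python sketch on a hypothetical input layout (one intensity vector per candidate mz over a shared scan axis), with illustrative thresholds:

```python
import numpy as np

def coelute(traces, base, apex_tol=1, min_corr=0.9):
    """Keep candidate ion traces that co-elute with the base peak.

    traces: dict mapping mz -> intensity vector over scans (assumed layout)
    base:   intensity vector of the assumed base peak
    A trace passes if its apex lies within apex_tol scans of the base apex
    and its shape correlates with the base peak above min_corr.
    """
    base_apex = int(np.argmax(base))
    kept = {}
    for mz, y in traces.items():
        if abs(int(np.argmax(y)) - base_apex) > apex_tol:
            continue  # no co-apex with the base peak
        if np.corrcoef(y, base)[0, 1] < min_corr:
            continue  # peak shapes do not match
        kept[mz] = y
    return kept
```

In the real function the surviving masses would form the pseudo spectrum returned for rt.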
EvaluateCandidateListAgainstRawData
EvaluateCandidateListAgainstRawData.
Description
EvaluateCandidateListAgainstRawData will analyze an xcmsSet result for mass pairs (mz1,
mz2) with changes due to 13C incorporation.
Usage
EvaluateCandidateListAgainstRawData(
x = NULL,
tp = NULL,
gr = NULL,
dat = NULL,
dmz = 0.025,
drt = 1,
dEcut = 1,
Pcut = 0.01,
Icut = 1000,
method = c("APCI", "ESI")[1],
rolp = c("non", "pos", "neg", "all")[2],
smooth = 0
)
Arguments
x Dataframe of results (output of EvaluatePairsFromXCMSSet).
tp Timepoint.
gr group, e.g. different genotypes or concentrations.
dat list of xcmsRaw’s for deconvolution and plotting.
dmz Allowed mass deviation in Da (for BPC extraction).
drt Allowed rt deviation in seconds (for get extraction).
dEcut Minimum required change in enrichment before a candidate ID is assigned.
Pcut Maximum allowed P value before a candidate ID is assigned.
Icut Minimum required median peak intensity before a candidate ID is assigned.
method Either APCI or ESI. Choice will modify some internal parameters and checks
performed.
rolp RemoveOverLappingPeaks parameter (see Details).
smooth Smoothing parameter passed to getMultipleBPC.
Details
This function will evaluate candidate mz pairs found within an xcmsSet object by EvaluatePairsFromXCMSSet against the raw measurement data. A special parameter is ’rolp’, which can be set to ’non’, ’pos’, ’neg’ or ’all’. It influences the time performance of the function by determining how many peaks are effectively tested. If ’rolp’ is set to ’non’, no overlapping peaks are skipped; every individual mz-pair is evaluated sequentially (slow but most informative). If it is set to ’pos’ or ’neg’, overlapping peaks (determined by experiment-wide deconvolution) are not additionally tested for positive or negative hits (’neg’ is the default). If set to ’all’, overlapping peaks are always removed from the list of mz-pairs to be tested (fast).
Value
A list of evaluation results.
Examples
# Please use examples from previous versions as xcms (and xcms objects)
# are no longer supported during CRAN checks leading to package rejection
# if included (and I do not know a work around). :(
EvaluatePairsFromXCMSSet
EvaluatePairsFromXCMSSet.
Description
EvaluatePairsFromXCMSSet will analyze an xcmsSet result for mass pairs (mz1, mz2) with changes
due to any 13C incorporation.
Usage
EvaluatePairsFromXCMSSet(
xg = NULL,
tp = NULL,
gr = NULL,
drt = 1,
dmz = 0.025,
mz_iso = 1.00335,
n = 6,
method = c("APCI", "ESI")[1],
specific_row = NULL,
testing = FALSE,
silent = FALSE
)
Arguments
xg xcmsSet object with group information.
tp Timepoint information for all samples (obviously required, internally converted
to factor).
gr Group information for all samples, e.g. different genotypes or concentrations
(optional, factor).
drt Allowed rt deviation in time units of xcmsSet (usually seconds) to test for candidates.
dmz Allowed mass deviation in Da.
mz_iso Mass defect of the isotope under investigation.
n Number of maximal incorporated carbons to test.
method Currently APCI or ESI. If APCI, dmz will be modified depending on n (see
details).
specific_row A single row from groupval(xg) to process.
testing Stop in function using browser() if specific_row is specified; can be an isotope number, e.g. 3 will stop at the third isotope.
silent Suppress warnings and console output if TRUE.
Details
Using ’APCI’ as method assumes that (i) you analyze TMS-derivatized compounds and (ii) your MS resolution does not allow to separate Si and C isotopes but reports an intermediate mass as m/z. In this case you will find carbon isotopes below their expected masses, i.e. M+1 would be 1.001 Da apart from M+0 instead of 1.003 Da. The effect increases with isotope number, i.e. M+6 will be ~20 mDa below the expected value. Hence, selecting method ’APCI’ will combine your selected dmz with an allowed deviation due to Si-isotope caused mass shifts. Use ’ESI’ if you are not sure whether this effect occurs in your setup.
Value
A dataframe with all observable pairs within the provided xcmsSet peak list including mean group
intensities and P values.
Examples
# Please use examples from previous versions as xcms (and xcms objects) are
# no longer supported during CRAN checks leading to package rejection
# if included (and I do not know a work around).
## Not run:
data(xcms_cand)
head(xcms_cand[order(xcms_cand$P), ])
## End(Not run)
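The APCI-specific widening of the search window described in Details can be sketched as follows (Python rather than the package's R; the 0.0033 Da-per-isotope factor is an assumption back-computed from the ~20 mDa shift quoted for M+6, not a value taken from the source code):

```python
def isotope_window(mz1, n, dmz=0.025, method="APCI", mz_iso=1.00335):
    """Search window for the M+n isotopologue of a peak at mz1.

    For 'APCI' the lower bound is widened by ~3.3 mDa per isotope to allow
    for unresolved Si isotopes dragging the centroid below the pure-13C
    position; for 'ESI' the window is symmetric around mz1 + n*mz_iso.
    """
    expected = mz1 + n * mz_iso
    widen = n * 0.0033 if method == "APCI" else 0.0
    return expected - dmz - widen, expected + dmz
```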
GenerateCandXLSX GenerateCandXLSX.
Description
GenerateCandXLSX will produce a XLSX of a list containing test results objects.
Usage
GenerateCandXLSX(res_list = NULL, xlsx_file = NULL, rejected = FALSE)
Arguments
res_list A list of result objects (each testing an individual mz pair).
xlsx_file File name.
rejected Logical. Return rejected if TRUE.
Details
Not yet.
Value
Candidate table as data.frame.
Examples
# load evaluation result of example data
data(res_list)
# generate table within R (use xlsx_file to write to file)
str(GenerateCandXLSX(res_list))
GenerateCandXLSX(res_list)[, 1:5]
GenerateQCPlots GenerateQCPlots.
Description
GenerateQCPlots will produce QC plots for a list containing test results objects.
Usage
GenerateQCPlots(
res_list = NULL,
pdf_file = NULL,
mfrow = NULL,
skip_plots = NULL
)
Arguments
res_list A list of result objects (each testing an individual mz pair).
pdf_file PDF file name; if NULL, plots are rendered on the current graphics device.
mfrow If NULL automatically determined, otherwise useful to specify a layout.
skip_plots NULL or numeric vector in which case plots with numbers in skip_plots will be
empty.
Details
For individual candidates screen output is reasonable, otherwise a target PDF file should be specified.
Value
NULL.
Examples
# load evaluation result of example data
data(res_list)
# generate Figures on screen (use PDF output for mass evaluation)
GenerateQCPlots(res_list[1])
getMultipleBPC getMultipleBPC.
Description
getMultipleBPC will extract multiple BPCs from an xcmsRaw object for a vector of mz within the
limits given by rt, rt_dev and mz_dev.
Usage
getMultipleBPC(
x,
mz = NULL,
mz_dev = 0.005,
rt = NULL,
rt_dev = 2,
zeroVal = NA,
smooth = 0,
returnEIC = FALSE
)
Arguments
x xcmsRaw object.
mz mass vector.
mz_dev allowed deviations (can be a single numeric, a vector, a matrix with one row
(lower bound, upper bound) or a matrix with length(mz) rows giving lower
and upper bound for each mz).
rt target timepoint.
rt_dev allowed window.
zeroVal Set values <=0 to NA or keep as is with NULL.
smooth Window size for moving average smoother, 0 = no smoothing.
returnEIC Return EIC instead of BPC?
Details
While there are other functions to extract BPC information from raw data, this one is particularly useful to get all traces belonging to an isotopologue group. It will attach several derived values to the results object, e.g. describing the observed mass shift (deviation from the expected value), which is helpful in QC for non-targeted tracer analyses.
Value
A matrix with scan wise (rows) intensities for all requested masses (columns) as either EIC or BPC.
References
Uses C code modified from XCMS (see citation("xcms")).
Examples
# see \link{plotMID} for an example
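Conceptually, the extraction reduces to taking, per scan, the maximum (BPC) or sum (EIC) of intensities inside each mass window. A Python sketch on an assumed centroided scan layout (list of (mz, intensity) array pairs) rather than an xcmsRaw object:

```python
import numpy as np

def multiple_bpc(scans, mzs, mz_dev=0.005, return_eic=False):
    """Per-scan base-peak (or summed, EIC) intensities for several target
    masses. Returns an n_scans x len(mzs) matrix, mirroring the shape
    described in the Value section; the input layout is a simplification.
    """
    out = np.zeros((len(scans), len(mzs)))
    for i, (mz, inten) in enumerate(scans):
        mz, inten = np.asarray(mz, float), np.asarray(inten, float)
        for j, target in enumerate(mzs):
            hit = np.abs(mz - target) <= mz_dev
            if hit.any():
                out[i, j] = inten[hit].sum() if return_eic else inten[hit].max()
    return out
```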
mz_shift_corrector Predefined mass search windows to be used internally.
Description
This is a list defining mass search windows for high-resolution APCI or ESI instrumentation.
Usage
data(mz_shift_corrector)
Format
An object of class list of length 3.
Source
<EMAIL>
plotBPC plotBPC.
Description
plotBPC will plot, for each item of a list of result objects from getMultipleBPC, the BPC traces and the spectrum at the scan where the summed intensity of all ions is maximal.
Usage
plotBPC(
bpc = NULL,
mfrow = NULL,
skip_plots = NULL,
ylim = NULL,
col = NULL,
ids = NULL,
type = "both"
)
Arguments
bpc A bpc object (list of intensity matrices, rt x mz, including several attributes as attached by getMultipleBPC).
mfrow Specify mfrow explicitly (optimized internally if NULL to cover n=length(bpc)).
skip_plots Allows to block certain subplots in the mfrow matrix to better align replicates.
ylim Can be specified explicitly, will be adjusted to overall min/max otherwise.
col Specific color vector for masses may be provided.
ids Specific plot ids may be explicitly provided.
type Switch between co-plot of BPC and Spectrum ("both") or BPC alone ("bpc").
Details
Not yet.
Value
A plot to the graphics device and NULL as invisible.
Examples
# load example raw data
data(res_list)
plotBPC(bpc = res_list[[1]][["bpc"]][c(1:2, 13:14)])
plotMID plotMID.
Description
plotMID will plot a Mass Isotopomer Distribution (MID) as calculated by CorMID.
Usage
plotMID(
mid = NULL,
gr = NULL,
name = "unknown",
contr = NULL,
stackedbars = FALSE,
subplot_ylim = 100,
...
)
Arguments
mid Matrix of measured ion intensities corrected using CorMID.
gr Groups, a factor.
name Name of metabolite.
contr Contrasts. Not yet clear if useful.
stackedbars Alternative plotting layout using stacked bar plot.
subplot_ylim Calculate ylim individually per subplot if 0, show full range in all subplots if
100 and limit to the minimal specified number otherwise.
... Further arguments to ’boxplot’ or ’barplot’ (depending on ’stackedbars’).
Details
Not yet.
Value
NULL.
Examples
mid <- matrix(c(seq(0, 0.3, 0.1), seq(1, 0.7, -0.1)), byrow = TRUE, nrow = 2)
gr <- gl(2, 2, labels = letters[1:2])
plotMID(mid = mid, gr = gr, name = "Metabolite X")
plotMID(mid = mid, gr = gr, stackedbars = TRUE, las = 1, xlab = "MID", legend.text = c("x", "y"))
rownames(mid) <- paste0("M", 0:1)
lt <- paste0("M", 0:1)
plotMID(mid = mid, gr = gr, stackedbars = TRUE, las = 1, xlab = "MID", legend.text = lt)
plotMID(mid = mid[, 2, drop = FALSE], stackedbars = TRUE, col = c(3, 4))
colnames(mid) <- paste0("S", 1:4)
gr2 <- gl(n = 1, k = 1, labels = "bla")
plotMID(mid = mid[, 2, drop = FALSE], gr = gr2, stackedbars = TRUE, name = NULL)
res_list The main results object of a non-targeted search for tracer incorporation.
Description
This is a list containing the evaluations results established based on processing example data with
EvaluateCandidateListAgainstRawData.
Usage
data(res_list)
Format
An object of class list of length 14.
Source
<EMAIL>
sam Sample table
Description
This data frame contains the sample definition of 18 samples from a larger experiment.
Usage
data(sam)
Format
An object of class data.frame with 18 rows and 8 columns.
Source
<EMAIL>
xcms_cand Dataframe with putative candidates
Description
This data frame contains the result of analyzing an xcmsSet object (which can no longer be provided via CRAN) with EvaluatePairsFromXCMSSet with respect to interesting m/z-pairs.
Usage
data(xcms_cand)
Format
An object of class data.frame with 72 rows and 19 columns.
Source
<EMAIL> |
Package ‘LOMAR’
October 12, 2022
Type Package
Title Localization Microscopy Data Analysis
Version 0.2.1
Maintainer <NAME> <<EMAIL>>
Description Read, register and compare point sets from single molecule localization microscopy.
URL https://git.embl.de/heriche/lomar
Depends R (>= 3.6.0)
biocViews
Imports Rcpp, FNN, stats, data.table, parallel, doParallel, foreach,
proxy, reshape2, pracma, transport, RANN, ff, aws, dbscan,
EBImage
LinkingTo BH (>= 1.78.0-0), Rcpp
Suggests testthat
License GPL-3
Encoding UTF-8
ByteCompile true
RoxygenNote 7.1.2
SystemRequirements C++14, gmp, fftw3
NeedsCompilation yes
Author <NAME> [cre, aut] (<https://orcid.org/0000-0001-6867-9425>)
Repository CRAN
Date/Publication 2022-03-16 11:10:04 UTC
R topics documented:
apply_transformation
ary2ps
circle_hough_transform
costWd
cpd
crop_point_set
downsample
find_elbow
Gaussian_Wd
get_kernel_matrix
get_persistence_diagrams
GMM_Wd
gradientWd
icp
idx2rowcol
img2ps
jrmpc
local_densities
locprec2cov
locs2ps
locs_from_csv
points2img
points_from_roi
point_sets_from_locs
point_sets_from_tiffs
ps2ary
pssk
q2d...
q2...
restore_coordinates
rot...
rot...
rot...
sliced_Wd
standardize_coordinates
t...
wgmmreg
apply_transformation apply_transformation
Description
Apply rotation and translation to a point set
Usage
apply_transformation(X, R, t, s)
Arguments
X a point set as an N x D matrix
R D x D rotation matrix
t 1 x D translation vector
s scaling factor
Value
transformed point set as a N x D matrix
ary2ps ary2ps
Description
Convert a 4d array to a list of 3d point sets. The points are formed by extracting the coordinates of
array values strictly above the given cut-off (default 0).
Usage
ary2ps(ary, bkg = 0)
Arguments
ary a 4d array with last dimension indexing instances.
bkg Extract points for array values strictly above this (default = 0)
Value
a list of point sets.
circle_hough_transform
Circle Hough transform
Description
Extract coordinates of the centres of circles from a 2D image using the Hough transform
Usage
circle_hough_transform(
pixels,
rmin,
rmax,
threshold,
resolution = 360,
ncpu = 1
)
Arguments
pixels input data, either a matrix representing a 2D image or a data frame of signal
coordinates with columns x, y. For images, background is expected to be 0 and
signal to have positive values.
rmin minimum search radius.
rmax maximum search radius.
threshold score threshold between 0 and 1.
resolution number of steps in the circle transform (default: 360). This represents the maximum number of votes a point can get.
ncpu number of threads to use to speed up computation (default: 1)
Value
a data frame with columns x, y, r and score
Examples
point.set <- data.frame(x = c(-9.8,-5.2,12.5,2.5,4.5,1.3,-0.2,0.4,9.3,-1.4,0.5,-1.1,-7.7),
y = c(-4.2,1.5,-0.5,12,-3,-7.2,10.9,6.7,-1.3,10,6.7,-6.2,2.9))
circles <- circle_hough_transform(pixels = point.set, rmin = 3, rmax = 6, resolution = 100,
threshold = 0.1, ncpu = 1)
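The voting scheme behind the transform can be sketched in a few lines of Python (the package uses a compiled implementation; this coarse version votes on a 1-unit integer grid and returns only the best cell):

```python
import numpy as np

def hough_circles(points, rmin, rmax, resolution=90):
    """Each point votes, for each candidate radius r, for all centres at
    distance r along `resolution` directions; the best-supported cell wins.
    Returns (x, y, r, score) with score = votes / resolution."""
    votes = {}
    thetas = np.linspace(0, 2 * np.pi, resolution, endpoint=False)
    for x, y in points:
        for r in range(rmin, rmax + 1):
            for t in thetas:
                key = (int(round(x - r * np.cos(t))),
                       int(round(y - r * np.sin(t))), r)
                votes[key] = votes.get(key, 0) + 1
    (cx, cy, r), best = max(votes.items(), key=lambda kv: kv[1])
    return cx, cy, r, best / resolution
```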
costWd costWd
Description
Objective function to minimize when using GMMs
Usage
costWd(Tr, X, Y, CX, CY, w1 = NULL, w2 = NULL, S = NULL)
Arguments
Tr Transformation vector as translation vector + rotation (angle in 2d, quaternion in 3d).
X matrix of means of first GMM (i.e. reference point set)
Y matrix of means of second GMM (i.e. moving point set)
CX array of covariance matrices of first GMM such that X[i,] has covariance matrix CX[,,i]
CY array of covariance matrices of second GMM such that Y[i,] has covariance matrix CY[,,i]
w1 (optional) vector of mixture weights of first GMM.
w2 (optional) vector of mixture weights of second GMM.
S (optional) array of pre-computed sqrtm(sqrtm(CX[,,i]) %*% CY[,,j] %*% sqrtm(CX[,,i]))
Value
cost value
cpd cpd
Description
Affine and rigid registration of two point sets using the coherent point drift algorithm. See: Myronenko A., Song X. (2010): "Point-Set Registration: Coherent Point Drift", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, issue 12, pp. 2262-2275.
Usage
cpd(
X,
Y,
w = 0,
weights = NULL,
scale = FALSE,
maxIter = 100,
subsample = NULL,
tol = 1e-04
)
Arguments
X reference point set, a N x D matrix
Y point set to transform, a M x D matrix,
w noise weight in the range [0, 1)
weights a M x N matrix of point correspondence weights
scale logical (default: FALSE), whether to use scaling
maxIter maximum number of iterations to perform (default: 100)
subsample if set, use this randomly selected fraction of the points
tol tolerance for determining convergence
Value
a list of
• Y: transformed point set,
• R: rotation matrix,
• t: translation vector,
• s: scaling factor,
• P: matrix of correspondence probabilities between the two point sets,
• sigma: final variance,
• iter: number of iterations performed,
• converged: boolean, whether the algorithm has converged.
Examples
data.file1 <- system.file("test_data", "parasaurolophusA.txt", package = "LOMAR",
mustWork = TRUE)
PS1 <- read.csv(data.file1, sep = '\t', header = FALSE)
data.file2 <- system.file("test_data", "parasaurolophusB.txt", package = "LOMAR",
mustWork = TRUE)
PS2 <- read.csv(data.file2, sep = '\t', header = FALSE)
transformation <- cpd(PS1, PS2, maxIter = 10, tol = 1e-3)
## Not run:
# Visualize registration outcome
library(rgl)
plot3d(PS1, col = "blue")
points3d(PS2, col = "green")
points3d(transformation[['Y']], col = "magenta")
## End(Not run)
crop_point_set crop_point_set
Description
Retain points in the set that are within the given distance from the geometric median of the set.
Using the geometric median is more robust than using the centre of mass (i.e. mean).
Usage
crop_point_set(point.set, size)
Arguments
point.set a point set as a matrix with columns x,y,z.
size vector of distances from the geometric median of the points along each axis.
Points are discarded if they are outside the ellipsoid defined by size and centred
on the geometric median of the points.
Value
point set as a matrix with columns x,y,z.
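A Python sketch of the same idea, using Weiszfeld's algorithm for the geometric median (the choice of algorithm is an assumption; the entry above only specifies that the geometric median is used):

```python
import numpy as np

def geometric_median(X, iters=100, eps=1e-9):
    """Weiszfeld iteration for the geometric median of the rows of X."""
    m = X.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(X - m, axis=1)
        w = 1.0 / np.maximum(d, eps)  # guard against division by zero
        m_new = (X * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m

def crop_to_ellipsoid(X, size):
    """Keep rows of X inside the axis-aligned ellipsoid with semi-axes
    `size`, centred on the geometric median of X."""
    m = geometric_median(X)
    keep = (((X - m) / np.asarray(size, float)) ** 2).sum(axis=1) <= 1.0
    return X[keep]
```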
downsample downsample
Description
Weighted downsampling of a point set. If point weights are not provided, they are computed to be
proportional to the local density around each point.
Usage
downsample(point.set, n = NULL, k = NULL, weights = NULL)
Arguments
point.set a point set
n integer, sample size.
k integer, number of nearest neighbours to consider to estimate local density
weights a vector of probability weights
Value
a point set
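The behaviour can be sketched in Python: sample without replacement with probabilities proportional to the given weights, or to a simple density estimate when none are supplied (the inverse mean-kNN-distance estimate here is an illustrative stand-in):

```python
import numpy as np

def downsample_points(X, n, weights=None, k=5):
    """Sample n rows of X without replacement, with probability proportional
    to `weights`; if absent, weight by an inverse mean-kNN-distance density
    estimate (an illustrative choice)."""
    if weights is None:
        D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        knn = np.sort(D, axis=1)[:, 1:k + 1]  # drop the self-distance
        weights = 1.0 / np.maximum(knn.mean(axis=1), 1e-12)
    p = np.asarray(weights, float)
    idx = np.random.default_rng().choice(len(X), size=n,
                                         replace=False, p=p / p.sum())
    return X[idx]
```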
find_elbow find_elbow
Description
Find elbow in a 2D curve represented by a list of ordered values
Usage
find_elbow(values)
Arguments
values vector of values in decreasing order
Details
This function finds the point with maximum distance from the line between the first and last points. Adapted from StackOverflow: http://stackoverflow.com/questions/2018178/finding-the-best-trade-off-point-on-a-curve
Value
index and value of the selected point
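A Python sketch of the chord-distance criterion (note: the R function returns a 1-based index; this sketch is 0-based):

```python
import numpy as np

def find_elbow(values):
    """Index and value of the point with the largest distance to the chord
    joining the first and last points of the curve."""
    y = np.asarray(values, dtype=float)
    pts = np.column_stack([np.arange(len(y)), y])
    v = pts[-1] - pts[0]
    v = v / np.linalg.norm(v)
    d = pts - pts[0]
    # distance to the chord = norm of the component orthogonal to v
    dist = np.linalg.norm(d - np.outer(d @ v, v), axis=1)
    i = int(np.argmax(dist))
    return i, y[i]
```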
Gaussian_Wd Gaussian_Wd
Description
Compute 2-Wasserstein distance between two Gaussian distributions
Usage
Gaussian_Wd(m1, m2, S1, S2, S = NULL)
Arguments
m1 mean of first distribution
m2 mean of second distribution
S1 variance of first distribution
S2 variance of second distribution
S (optional) matrix of pre-computed sqrtm(sqrtm(S1) %*% S2 %*% sqrtm(S1))
Value
distance value
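For reference, the closed form being evaluated is W2²(N(m1,S1), N(m2,S2)) = ||m1−m2||² + tr(S1 + S2 − 2(S1^{1/2} S2 S1^{1/2})^{1/2}). A numpy-only Python sketch (returning the squared distance; whether the R function returns the distance or its square is not stated above):

```python
import numpy as np

def _sqrtm_psd(A):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def gaussian_w2(m1, m2, S1, S2):
    """Squared 2-Wasserstein distance between N(m1, S1) and N(m2, S2)."""
    m1 = np.atleast_1d(np.asarray(m1, float))
    m2 = np.atleast_1d(np.asarray(m2, float))
    S1 = np.atleast_2d(np.asarray(S1, float))
    S2 = np.atleast_2d(np.asarray(S2, float))
    r = _sqrtm_psd(S1)
    cross = _sqrtm_psd(r @ S2 @ r)
    return float(((m1 - m2) ** 2).sum() + np.trace(S1 + S2 - 2 * cross))
```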
get_kernel_matrix get_kernel_matrix
Description
Compute kernel/distance matrix between persistence diagrams.
Usage
get_kernel_matrix(
Diag = NULL,
method = c("sWd", "pssk"),
dimensions = NULL,
return.dist = FALSE,
M = NULL,
sigma = NULL,
ncpu = 1
)
Arguments
Diag list of persistence diagrams as n x 3 matrices
method which kernel or distance to compute. One of sWd (for the sliced Wasserstein kernel) or pssk (for the persistence scale-space kernel)
dimensions vector of the dimensions of the topological features to consider, if NULL (default) use all available dimensions
return.dist logical (default: FALSE) for method sWd, whether to return the sliced Wasserstein distance matrix instead of the kernel.
M number of slices for the sliced Wasserstein kernel
sigma kernel bandwidth
ncpu number of parallel threads to use for computation
Value
a matrix
Examples
PS <- list(data.frame(x = c(2.4,-6.9,4.6,-0.7,-3.3,-4.9,-3.5,-3.5,4.2,-7),
y = c(5.7,1.9,4.8,3.4,-3,-2.1,7.2,1.8,6.1,-1.6),
z = c(2.7,-0.1,-0.7,-0.6,0.4,-1.5,-0.6,-0.9,2.2,0.7)),
data.frame(x = c(0,0,3.1,-5.6,-5,-7.4,-0.7,-7.7,-6.7,4,4.2,0.2,5.8,3.9,3.9),
y = c(6.3,-6.1,-3.5,4.6,-4.1,0.3,8.8,-2.3,2.9,3.7,-1.4,-3.9,5.5,-1.2,-6.7),
z = c(-1.5,1.7,-0.4,-1.4,1.8,1.7,-0.9,-1.8,-0.5,1.7,1.3,0.5,-1.4,1.6,-0.1)),
data.frame(x = c(-9.8,-5.2,12.5,2.5,4.5,1.3,-0.2,0.4,9.3,-1.4,0.5,-1.1,-7.7),
y = c(-4.2,1.5,-0.5,12,-3,-7.2,10.9,6.7,-1.3,10,6.7,-6.2,2.9),
z = c(3.4,-3.8,-1.4,1.8,3.5,2.5,2.6,-4.8,-3.8,3.9,4.1,-3.6,-4)))
Dgs <- get_persistence_diagrams(point.sets = PS, maxdimension = 1, maxscale = 5, ncpu = 1)
K <- get_kernel_matrix(Diag = Dgs, method = 'sWd', dimensions = c(0,1), M = 10, sigma = 5)
get_persistence_diagrams
get_persistence_diagrams
Description
Compute persistence diagrams for a list of point sets. By default, compute persistent homology
from the Vietoris-Rips filtration. If use.dtm is TRUE, compute instead the persistent homology of
the sublevel set of the distance to measure evaluated over a grid.
Usage
get_persistence_diagrams(
point.sets = NULL,
maxdimension = NULL,
maxscale = NULL,
use.dtm = FALSE,
m0 = NULL,
grid.by = NULL,
ncpu = 1
)
Arguments
point.sets list of point sets, each as a data frame with columns x,y,z
maxdimension maximum dimension of the homological features to be computed
maxscale limit of the Vietoris-Rips filtration
use.dtm logical (default: FALSE), whether to use the distance to measure function
m0 parameter for the dtm function
grid.by vector of space between points of the grid for the dtm function along each dimension
ncpu number of parallel threads to use for computation
Value
a list of persistence diagrams as n x 3 matrices. Each row is a topological feature and the columns
are dimension, birth and death of the feature.
Examples
PS <- list(data.frame(x = c(2.4,-6.9,4.6,-0.7,-3.3,-4.9,-3.5,-3.5,4.2,-7),
y = c(5.7,1.9,4.8,3.4,-3,-2.1,7.2,1.8,6.1,-1.6),
z = c(2.7,-0.1,-0.7,-0.6,0.4,-1.5,-0.6,-0.9,2.2,0.7)),
data.frame(x = c(0,0,3.1,-5.6,-5,-7.4,-0.7,-7.7,-6.7,4,4.2,0.2,5.8,3.9,3.9),
y = c(6.3,-6.1,-3.5,4.6,-4.1,0.3,8.8,-2.3,2.9,3.7,-1.4,-3.9,5.5,-1.2,-6.7),
z = c(-1.5,1.7,-0.4,-1.4,1.8,1.7,-0.9,-1.8,-0.5,1.7,1.3,0.5,-1.4,1.6,-0.1)))
Diags <- get_persistence_diagrams(point.sets = PS, maxdimension = 1, maxscale = 5, ncpu = 1)
GMM_Wd GMM_Wd
Description
Compute 2-Wasserstein distance between two Gaussian mixture models. See: Delon J., Desolneux A. (2019) A Wasserstein-type distance in the space of Gaussian Mixture Models. hal-02178204v2
Usage
GMM_Wd(m1, m2, S1, S2, w1 = NULL, w2 = NULL, S = NULL)
Arguments
m1 matrix of means of first GMM
m2 matrix of means of second GMM
S1 array of covariance matrices of first GMM such that m1[i,] has covariance matrix S1[,,i]
S2 array of covariance matrices of second GMM such that m2[i,] has covariance matrix S2[,,i]
w1 (optional) vector of mixture weights of first GMM.
w2 (optional) vector of mixture weights of second GMM.
S (optional) array of pre-computed sqrtm(sqrtm(S1[,,i]) %*% S2[,,j] %*% sqrtm(S1[,,i]))
Value
list of distance value d and optimal transport matrix ot
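For uniform-weight mixtures with equal component counts, the vertices of the transport polytope are permutation matrices, so the mixture distance reduces to an assignment problem. A brute-force 1-D Python sketch under that assumption (small K only; 1-D component W2² is (μi−μj)² + (σi−σj)²):

```python
import itertools
import numpy as np

def mw2_1d(m1, s1, m2, s2):
    """Squared mixture-Wasserstein distance between two uniform-weight 1-D
    GMMs with equal component counts: minimise the mean pairwise Gaussian
    W2^2 over permutations (valid for uniform weights)."""
    K = len(m1)
    C = np.array([[(a - b) ** 2 + (u - v) ** 2 for b, v in zip(m2, s2)]
                  for a, u in zip(m1, s1)])
    best = min(sum(C[i, p[i]] for i in range(K))
               for p in itertools.permutations(range(K)))
    return best / K
```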
gradientWd gradientWd
Description
Gradient of the objective function with respect to rotation and translation parameters
Usage
gradientWd(Tr, X, Y, CX, CY, w1 = NULL, w2 = NULL, S = NULL)
Arguments
Tr Transformation vector as translation vector + rotation (angle in 2d, quaternion in 3d).
X matrix of means of first GMM (i.e. reference point set)
Y matrix of means of second GMM (i.e. moving point set)
CX array of covariance matrices of first GMM such that X[i,] has covariance matrix CX[,,i]
CY array of covariance matrices of second GMM such that Y[i,] has covariance matrix CY[,,i]
w1 (optional) vector of mixture weights of first GMM.
w2 (optional) vector of mixture weights of second GMM.
S (optional) array of pre-computed sqrtm(sqrtm(CX[,,i]) %*% CY[,,j] %*% sqrtm(CX[,,i]))
Value
gradient vector
icp icp
Description
Rigid registration of two point sets using the iterative closest point algorithm.
Usage
icp(
X,
Y,
weights = NULL,
iterations = 100,
subsample = NULL,
scale = FALSE,
tol = 0.001
)
Arguments
X reference point set, a N x D matrix
Y point set to transform, a M x D matrix,
weights vector of length nrow(Y) containing weights for each point in Y. Not implemented.
iterations number of iterations to perform (default: 100)
subsample if set, use this randomly selected fraction of the points
scale logical (default: FALSE), whether to use scaling.
tol tolerance for determining convergence
Value
a list of
• Y: transformed point set, a M x D matrix,
• R: rotation matrix,
• t: translation vector,
• s: scaling factor,
• iter: number of iterations performed,
• conv: boolean, whether the algorithm has converged.
Examples
data.file1 <- system.file("test_data", "parasaurolophusA.txt", package = "LOMAR",
mustWork = TRUE)
PS1 <- read.csv(data.file1, sep = '\t', header = FALSE)
data.file2 <- system.file("test_data", "parasaurolophusB.txt", package = "LOMAR",
mustWork = TRUE)
PS2 <- read.csv(data.file2, sep = '\t', header = FALSE)
transformation <- icp(PS1, PS2, iterations = 10, tol = 1e-3)
## Not run:
# Visualize registration outcome
library(rgl)
plot3d(PS1, col = "blue")
points3d(PS2, col = "green")
points3d(transformation[['Y']], col = "magenta")
## End(Not run)
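The core iteration — nearest-neighbour matching followed by a closed-form (Kabsch/SVD) rigid fit — can be sketched in Python; unlike the package function this toy version returns only the transformed point set and uses brute-force neighbour search:

```python
import numpy as np

def icp_sketch(X, Y, iterations=20):
    """Toy rigid ICP aligning moving set Y (M x D) to reference X (N x D)."""
    X = np.asarray(X, float)
    Y = np.asarray(Y, float).copy()
    d = X.shape[1]
    for _ in range(iterations):
        # brute-force nearest neighbour of each moving point in the reference
        nn = X[np.argmin(((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1), axis=1)]
        muY, muX = Y.mean(0), nn.mean(0)
        # closed-form rotation (Kabsch): maximise tr(R H), H = sum y_i x_i^T
        U, _, Vt = np.linalg.svd((Y - muY).T @ (nn - muX))
        D = np.eye(d)
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
        R = Vt.T @ D @ U.T
        Y = (Y - muY) @ R.T + muX
    return Y
```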
idx2rowcol idx2rowcol
Description
Convert indices into a dist object to row, column coordinates of the corresponding distance matrix
Usage
idx2rowcol(idx, n)
Arguments
idx vector of indices
n size of the n x n distance matrix
Value
a matrix with two columns nr and nc
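R's dist object stores the lower triangle column by column, so index k maps back to (row, col) by walking the cumulative column sizes. A Python sketch with 1-based indices, matching the R convention:

```python
def idx2rowcol(idx, n):
    """Map 1-based indices into a dist vector (lower triangle of an n x n
    matrix, stored column by column) back to (row, col) pairs."""
    out = []
    for k in idx:
        j, before = 1, 0
        while before + (n - j) < k:  # column j holds n - j entries
            before += n - j
            j += 1
        out.append((k - before + j, j))
    return out
```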
img2ps img2ps
Description
Read an image into a point set. The points are formed by extracting the coordinates of voxel values
strictly above the given cut-off (default 0).
Usage
img2ps(img = NULL, bkg = 0, crop.size = NULL)
Arguments
img either a 2d or 3d array or a path to a file containing a 2d or 3d image.
bkg Extract points for values strictly above this (default = 0).
crop.size vector (of length 2 or 3) containing the desired reduced size of the images along
each dimension, e.g. c(30,30,30).
Value
a point set as matrix with columns x,y[,z]
Examples
img.file <- system.file("test_data/img", "alien1_3d.tif", package = "LOMAR",
mustWork = TRUE)
point_set <- img2ps(img = img.file, bkg = 0)
jrmpc jrmpc
Description
Joint registration of multiple point sets. See: Evangelidis G.D., Kounades-Bastian D., Horaud R., Psarakis E.Z. A generative model for the joint registration of multiple point sets. In European Conference on Computer Vision, pages 109–122. Springer, 2014.
Usage
jrmpc(
V,
C = NULL,
K = NULL,
g = NULL,
initialPriors = NULL,
updatePriors = TRUE,
maxIter = 100,
fixedVarIter = 0,
tol = 0.01,
initializeBy = NULL,
model.selection = FALSE,
model.selection.threshold = NULL,
rotation.only = FALSE
)
Arguments
V list of point sets as N x D matrices
C (optional) list of arrays of covariance matrices with C[[j]][,,i] the covariance matrix associated with point i of set j.
K (optional) number of components of the GMM, defaults to the average number
of points in a set.
g (optional) proportion of noisy points, defaults to 1/K. If set, priors will be initialized uniformly.
initialPriors (optional) vector of length K of prior probabilities. Defaults to uniform distribution using g. If set, will determine g, so it is an error to specify g together with initialPriors.
updatePriors logical, whether to update priors at each iteration (default: TRUE).
maxIter maximum number of iterations to perform (default: 100).
fixedVarIter number of iterations before starting variance updates
tol tolerance for determining convergence (default: 1e-2).
initializeBy (optional) how to initialize the GMM means. Defaults to distributing the means
on the surface of the sphere enclosing all (centred) sets. Currently supported
values are:
• ’sampling’: sample from the data,
• a K x D matrix of points
model.selection
whether to perform model selection (default: FALSE). If set to TRUE, GMM
components with no support in the data are deleted.
model.selection.threshold
value below which we consider a GMM component has no support, set to 1/K if
not explicitly given
rotation.only if set to TRUE, no translation is performed (default: FALSE)
Value
a list of
• Y: list of transformed point sets as N x d matrices,
• R: list of d x d rotation matrices, one for each point set in V,
• t: list of translation vectors, one for each point set in V,
• M: centres of the GMM,
• S: variances of the GMM.
• a: list of posterior probabilities as N x K matrices
• iter: number of iterations
• conv: error value used to evaluate convergence relative to tol
• z: support scores of the GMM components
Examples
X <- read.csv(system.file("test_data", "parasaurolophusA.txt", package="LOMAR",
mustWork = TRUE), sep = "\t")
Y <- read.csv(system.file("test_data", "parasaurolophusB.txt", package="LOMAR",
mustWork = TRUE), sep = "\t")
Z <- read.csv(system.file("test_data", "parasaurolophusC.txt", package="LOMAR",
mustWork = TRUE), sep = "\t")
PS <- list(X, Y, Z)
C <- list()
for(i in 1:3) {
cv <- diag(0.1, ncol(PS[[i]])) + jitter(0.01, amount = 0.01)
cv <- replicate(nrow(PS[[i]]), cv)
C[[i]] <- cv
}
transformation <- jrmpc(PS, C = C, K = 100, maxIter = 20, tol = 0.01,
model.selection = TRUE)
## Not run:
# Visualize registration outcome
library(rgl)
colours <- c("blue", "green", "magenta")
Yt <- transformation[['Y']]
plot3d(Yt[[1]], col = colours[1])
for(i in 2:length(Yt)) {
points3d(Yt[[i]], col = colours[i])
}
# Visualize GMM centres highlighting those with high variance
GMM <- as.data.frame(cbind(transformation[['M']], transformation[['S']]))
colnames(GMM) <- c("x", "y", "z", "S")
colours <- rep("blue", nrow(GMM))
# Find high variance components
threshold <- quantile(transformation[['S']], 0.75)
high.var.idx <- which(transformation[['S']]>threshold)
colours[high.var.idx] <- "red"
plot3d(GMM[, c("x", "y", "z")], col = colours, type = 's', size = 2, box = FALSE, xlab = '',
ylab = '', zlab = '', xlim = c(-0.15,0.15), ylim = c(-0.15, 0.15),
zlim = c(-0.15, 0.15))
## End(Not run)
local_densities local_densities
Description
Compute local point density at each point of a point set
Usage
local_densities(X, k = NULL)
Arguments
X point set, a N x D matrix
k (optional) number of nearest neighbors used (defaults to all points).
Details
Local density is computed as in <NAME>, <NAME>, <NAME>, <NAME> (2018) An efficient outlier removal
method for scattered point cloud data. PLOS ONE 13(8):e0201280. https://doi.org/10.1371/journal.pone.0201280
Value
vector of density value for each point
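A minimal usage sketch (the choice of k = 10 is an arbitrary illustration, not a recommended setting):

```r
library(LOMAR)

# A small random 3D point set as an N x D matrix
set.seed(42)
X <- matrix(rnorm(300), ncol = 3)

# Local density at each point from its 10 nearest neighbours
d <- local_densities(X, k = 10)

# One density value per point
length(d)  # 100
```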
locprec2cov locprec2cov
Description
Converts localization precision columns to a list of arrays of covariance matrices
Usage
locprec2cov(point.sets, scale = FALSE)
Arguments
point.sets a list of n point sets with locprec columns (locprecz column required for 3D
data)
scale logical, whether to scale the localization precision by the variance of the coor-
dinates
Value
a list of 2x2xn or 3x3xn arrays.
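An illustrative sketch with made-up 2D data (the coordinates and locprec values are invented for illustration; real input would typically come from point_sets_from_locs with keep.locprec = TRUE):

```r
library(LOMAR)

# Two made-up 2D point sets with a localization precision column
set.seed(1)
ps1 <- data.frame(x = rnorm(5), y = rnorm(5), locprec = runif(5, 1, 3))
ps2 <- data.frame(x = rnorm(8), y = rnorm(8), locprec = runif(8, 1, 3))

# One 2 x 2 x n array of per-point covariance matrices per point set
C <- locprec2cov(list(ps1, ps2), scale = FALSE)
```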
locs2ps locs2ps
Description
Cluster localizations into point sets using DBSCAN
Usage
locs2ps(
points,
eps,
minPts,
keep.locprec = TRUE,
keep.channel = TRUE,
cluster.2d = FALSE
)
Arguments
points a point set as a data frame of coordinates with columns x,y,z.
eps DBSCAN parameter, size of the epsilon neighbourhood
minPts DBSCAN parameter, number of minimum points in the eps region
keep.locprec logical (default: TRUE), whether to preserve the localization precision columns
keep.channel logical (default: TRUE), whether to preserve channel information column
cluster.2d logical (default: FALSE), whether to cluster only using x,y (and ignore z)
Value
a list of matrices with columns x,y,z and optionally locprec[z], with names set to the cluster indices.
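A usage sketch building on the package's test data (the eps and minPts values below are arbitrary illustrations, not recommended settings):

```r
library(LOMAR)

data.file <- system.file("test_data", "simulated_NUP107_data.csv",
                         package = "LOMAR", mustWork = TRUE)
locs <- locs_from_csv(file = data.file, locprec.filter = 20)

# Cluster the localizations into point sets with DBSCAN
ps <- locs2ps(locs, eps = 50, minPts = 10)

# Names are the cluster indices
names(ps)
```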
locs_from_csv locs_from_csv
Description
Reads and filters single molecule localization events from a csv file as typically output by the SMAP
software. The main columns of interest are the coordinates (x, y, z), point set membership (site) and
localization precision (locprec and locprecz).
Usage
locs_from_csv(
file = NULL,
roi = NULL,
channels = NULL,
frame.filter = NULL,
llrel.filter = NULL,
locprec.filter = 0,
locprecz.filter = 0
)
Arguments
file a csv file with columns x[nm], y[nm], z[nm] and optionally site[numbers], chan-
nel, locprec[nm] and locprecz[nm], other columns are ignored.
roi region of interest, keep points within the specified volume. Must be a data frame
with columns x,y,z and rows min and max defining a bounding box.
channels vector of integers indicating which channel(s) of a multicolour experiment to
get data from.
frame.filter vector of min and max values, filter out points from frames outside the specified
range.
llrel.filter vector of min and max values, filter out points on log-likelihood (for fitted data).
locprec.filter filter out points with locprec value greater than the specified number. Points
with locprec == 0 are also removed.
locprecz.filter
filter out points with locprecz value greater than the specified number. Points
with locprecz == 0 are also removed.
Value
a data frame with columns x,y,z, optionally site, locprec and locprecz.
Examples
data.file <- system.file("test_data", "simulated_NUP107_data.csv", package = "LOMAR",
mustWork = TRUE)
locs <- locs_from_csv(file = data.file, locprec.filter = 20)
points2img points2img
Description
Convert a data frame of point coordinates into an image. Expected photon count at each voxel is
computed as in: <NAME>, <NAME>, <NAME>, and <NAME>, “Simultaneous multiple-
emitter fitting for single molecule super-resolution imaging,” Biomed. Opt. Express 2(5), 1377–1393
(2011).
Usage
points2img(points, voxel.size, method, channels = NULL, ncpu = 1)
Arguments
points a point set as a data frame of coordinates with columns x,y,z.
voxel.size a numeric vector of length 3 indicating the size of the voxel along x,y and z in
the same unit as the coordinates (e.g. nm)
method how to calculate voxel values. Available methods are:
• ’histogram’: value is the number of points (i.e. emitters) in the voxel
• ’photon’: value is the expected number of photons from the points in the
voxel. Input data frame must have columns locprec, locprecz and phot[on].
channels vector of channels to consider, must be values present in the input data frame
channel column
ncpu number of threads to use to speed up computation (default: 1)
Value
an array of dimensions x,y,z and channels if applicable
Examples
point.set <- data.frame(x = c(-9.8,-5.2,12.5,2.5,4.5,1.3,-0.2,0.4,9.3,-1.4,0.5,-1.1,-7.7),
y = c(-4.2,1.5,-0.5,12,-3,-7.2,10.9,6.7,-1.3,10,6.7,-6.2,2.9),
z = c(3.4,-3.8,-1.4,1.8,3.5,2.5,2.6,-4.8,-3.8,3.9,4.1,-3.6,-4))
img <- points2img(point.set, voxel.size = c(2,2,2), method = 'histogram')
points_from_roi points_from_roi
Description
Extract points within given bounding box. Points are translated so that (0,0,0) correspond to the
bounding box corner defined by roi[’min’,c(’x’,’y’,’z’)]
Usage
points_from_roi(points, roi)
Arguments
points a point set as a data frame of coordinates with columns x,y,z.
roi a data frame with columns x,y,z and rows min and max defining a bounding box
Value
a data frame with same columns as input
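A small self-contained sketch of the expected roi layout:

```r
library(LOMAR)

points <- data.frame(x = c(1, 5, 20), y = c(2, 6, 25), z = c(3, 7, 30))

# Bounding box with rows 'min' and 'max' and columns x, y, z
roi <- data.frame(x = c(0, 10), y = c(0, 10), z = c(0, 10),
                  row.names = c("min", "max"))

# Keeps the two points inside the box, translated relative to roi['min', ]
points_from_roi(points, roi)
```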
point_sets_from_locs point_sets_from_locs
Description
Extracts list of point sets from a data frame of single molecule localization coordinates. By default,
uses point set membership indicated in the site column.
Usage
point_sets_from_locs(
locs = NULL,
channels = NULL,
min.cardinality = NULL,
max.cardinality = NULL,
crop.size = NULL,
keep.locprec = TRUE,
sample.size = NULL,
ignore.site = FALSE,
cluster.points = FALSE,
eps = NULL,
minPts = NULL
)
Arguments
locs a data frame with columns x[nm], y[nm], z[nm] and optionally site[numbers],
locprec[nm] and locprecz[nm], other columns are ignored.
channels vector of integers indicating which channel(s) of a multicolour experiment to
extract point sets from.
min.cardinality
filter out point sets with less than the specified number of points.
max.cardinality
filter out point sets with more than the specified number of points.
crop.size remove points from a set if they are further away than the specified distance
from the center of the set.
keep.locprec logical (default:TRUE). Whether to keep locprec information for each point.
sample.size returns this number of randomly selected point sets. Point sets are selected
after any filtering has been applied.
ignore.site logical (default: FALSE), set to TRUE if point set membership is not present or
needed.
cluster.points logical (default: FALSE), whether to cluster the points using DBSCAN (only if
ignore.site is also TRUE).
eps DBSCAN parameter, size of the epsilon neighbourhood
minPts DBSCAN parameter, number of minimum points in the eps region
Value
a list of matrices with columns x,y,z, optionally locprec and name set to the value of the site column
(if applicable).
Examples
data.file <- system.file("test_data", "simulated_NUP107_data.csv", package = "LOMAR",
mustWork = TRUE)
locs <- locs_from_csv(file = data.file, locprec.filter = 20)
point.sets <- point_sets_from_locs(locs, keep.locprec = TRUE, min.cardinality = 15)
point_sets_from_tiffs point_sets_from_tiffs
Description
Read in single molecule localization events from a series of 3D images in TIFF files where each
image file represents a point set.
Usage
point_sets_from_tiffs(
image_dir = NULL,
pattern = NULL,
image.size = NULL,
sample.size = NULL,
sample.first = FALSE,
min.cardinality = NULL,
max.cardinality = NULL,
crop.size = NULL
)
Arguments
image_dir path to a directory containing the TIFF files.
pattern regular expression, select images whose file path matches the given pattern.
image.size vector of length 3 containing the size of the images along each dimension, e.g.
c(40,40,40).
sample.size if set, selects this number of images at random. A sample size larger than the
available number of samples produces a warning and is ignored.
sample.first if TRUE, samples are selected before applying any filtering. This is more
efficient as it avoids reading all data files.
min.cardinality
if set, filter out all point sets with less than the specified number of points.
max.cardinality
if set, filter out all point sets with more than the specified number of points.
crop.size vector of length 3 containing the desired reduced size of the images along each
dimension, e.g. c(30,30,30).
Value
a list with two elements:
• point.sets: a list of point sets as matrices with columns x,y,z and
• file.names: a vector of paths to the TIFF files from which the point sets were extracted.
Examples
data.dir <- system.file("test_data/img", package = "LOMAR", mustWork = TRUE)
point_sets <- point_sets_from_tiffs(image_dir = data.dir, pattern = "\\.tiff?$",
image.size = c(64, 64, 4), min.cardinality = 10)
ps2ary ps2ary
Description
Convert a list of 3d point sets to a 4d array. Also works for 2d point sets to 3d array conversion.
Usage
ps2ary(point.sets, dims)
Arguments
point.sets a list of point sets.
dims vector of dimensions of the axes (x,y in 2d, x,y,z in 3d).
Value
a 3d or 4d array.
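A hedged sketch, under the assumption that point coordinates are voxel indices falling within dims:

```r
library(LOMAR)

# Two made-up 3D point sets with coordinates inside a 16 x 16 x 16 volume
set.seed(7)
ps <- list(matrix(sample(1:16, 30, replace = TRUE), ncol = 3),
           matrix(sample(1:16, 30, replace = TRUE), ncol = 3))

# One 16 x 16 x 16 volume per point set, stacked into a 4d array
a <- ps2ary(ps, dims = c(16, 16, 16))
```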
pssk pssk
Description
Compute the persistence scale-space kernel on persistence diagrams. Reference: <NAME>,
<NAME>, <NAME>, and <NAME>. A stable multi-scale kernel for topological ma-
chine learning. In Proceedings of the IEEE conference on computer vision and pattern recognition
(CVPR), pages 4741–4748, 2015.
Usage
pssk(Dg1 = NULL, Dg2 = NULL, sigma = NULL, dimensions = NULL)
Arguments
Dg1 a persistence diagram as a n1 x 3 matrix where each row is a topological feature
and the columns are dimension, birth and death of the feature.
Dg2 another persistence diagram as a n2 x 3 matrix
sigma kernel bandwidth
dimensions vector of the dimensions of the topological features to consider, if NULL (de-
fault) use all available dimensions
Value
kernel value
Examples
D1 <- matrix(c(0,0,0,1,1,0,0,0,1.5, 3.5,2,2.5,3, 4, 6), ncol = 3, byrow = FALSE)
D2 <- matrix(c(0,0,1,1,0, 0, 1.2, 2, 1.4, 3.2,4.6,6.5), ncol = 3, byrow = FALSE)
K <- pssk(Dg1 = D1, Dg2 = D2, sigma = 1)
q2dr Get derivative of 3D rotation matrix from quaternion
Description
Get derivative of 3D rotation matrix from quaternion
Usage
q2dr(q)
Arguments
q quaternion
Value
derivative of rotation matrix
q2r Convert quaternion to rotation matrix
http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
Description
Convert quaternion to rotation matrix http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
Usage
q2r(q)
Arguments
q quaternion
Value
rotation matrix
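A quick sanity check, assuming the usual (w, x, y, z) ordering for unit quaternions:

```r
library(LOMAR)

# The identity quaternion should yield the 3 x 3 identity matrix
R <- q2r(c(1, 0, 0, 0))

# Any valid rotation matrix is orthonormal with determinant 1
all.equal(t(R) %*% R, diag(3))
det(R)
```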
restore_coordinates restore_coordinates
Description
Restore coordinates from mean 0 and standard deviation 1 to their original distribution
Usage
restore_coordinates(X, mu, sigma)
Arguments
X standardized point set as N x D matrix
mu 1 x D vector of means
sigma standard deviation
Value
N x D matrix of unstandardized coordinates
rotx rotx
Description
Create a rotation matrix representing a rotation of theta radians about the x-axis
Usage
rotx(theta)
Arguments
theta angle in radians
Value
a 3x3 rotation matrix
roty roty
Description
Create a rotation matrix representing a rotation of theta radians about the y-axis
Usage
roty(theta)
Arguments
theta angle in radians
Value
a 3x3 rotation matrix
rotz rotz
Description
Create a rotation matrix representing a rotation of theta radians about the z-axis
Usage
rotz(theta)
Arguments
theta angle in radians
Value
a 3x3 rotation matrix
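The three helpers compose by matrix multiplication; the checks below hold regardless of sign convention:

```r
library(LOMAR)

# Compose rotations about the z, y and x axes
R <- rotz(pi / 4) %*% roty(pi / 6) %*% rotx(pi / 3)

# A proper rotation matrix is orthonormal with determinant 1
all.equal(t(R) %*% R, diag(3))  # TRUE
det(R)                          # 1
```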
sliced_Wd sliced_Wd
Description
Compute sliced Wasserstein distance or kernel. Reference: <NAME>, <NAME>, and
<NAME>. Sliced Wasserstein kernel for persistence diagrams. In Proceedings of the 34th In-
ternational Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Re-
search, pages 664–673, 2017.
Usage
sliced_Wd(Dg1, Dg2, M = 10, sigma = 1, dimensions = NULL, return.dist = FALSE)
Arguments
Dg1 a persistence diagram as a n1 x 3 matrix where each row is a topological feature
and the columns are dimension, birth and death of the feature.
Dg2 another persistence diagram as a n2 x 3 matrix
M number of slices (default: 10)
sigma kernel bandwidth (default: 1)
dimensions vector of the dimensions of the topological features to consider, if NULL (de-
fault) use all available dimensions
return.dist logical (default: FALSE). Whether to return the kernel or distance value.
Value
kernel or distance value
Examples
D1 <- matrix(c(0,0,0,1,1,0,0,0,1.5, 3.5,2,2.5,3, 4, 6), ncol = 3, byrow = FALSE)
D2 <- matrix(c(0,0,1,1,0, 0, 1.2, 2, 1.4, 3.2,4.6,6.5), ncol = 3, byrow = FALSE)
K <- sliced_Wd(Dg1 = D1, Dg2 = D2, M = 10, sigma = 1, return.dist = TRUE)
standardize_coordinates
standardize_coordinates
Description
Transform coordinates to have mean 0 and standard deviation 1
Usage
standardize_coordinates(X)
Arguments
X point set as N x D matrix
Value
a list of X: standardized matrix, mu: vector of means, sigma: standard deviation
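standardize_coordinates and restore_coordinates are inverses of each other, which gives a simple round-trip check:

```r
library(LOMAR)

set.seed(3)
X <- matrix(rnorm(60, mean = 5, sd = 2), ncol = 3)

std <- standardize_coordinates(X)

# Undo the standardization: recovers the original coordinates
X2 <- restore_coordinates(std$X, std$mu, std$sigma)
all.equal(X, X2)  # TRUE
```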
tr tr
Description
Compute the trace of a matrix
Usage
tr(x)
Arguments
x matrix
Value
trace of the matrix
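For example:

```r
library(LOMAR)

# The trace is the sum of the diagonal elements
tr(diag(3))             # 3
tr(matrix(1:4, 2, 2))   # 1 + 4 = 5
```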
wgmmreg wgmmreg
Description
Rigid registration of two point sets by minimizing the Wasserstein distance between GMMs
Usage
wgmmreg(
X,
Y,
CX,
CY,
wx = NULL,
wy = NULL,
maxIter = 200,
subsample = NULL,
tol = 1e-08
)
Arguments
X reference point set, a N x D matrix
Y point set to transform, a M x D matrix,
CX array of covariance matrices for each point in X
CY array of covariance matrices for each point in Y
wx (optional) vector of mixture weights for X.
wy (optional) vector of mixture weights for Y.
maxIter maximum number of iterations to perform (default: 200)
subsample if set, use this randomly selected fraction of the points
tol tolerance for determining convergence (default: 1e-8)
Value
a list of
• Y: transformed point set,
• R: rotation matrix,
• t: translation vector,
• c: final value of the cost function,
• converged: logical, whether the algorithm converged.
Examples
data.file1 <- system.file("test_data", "parasaurolophusA.txt", package = "LOMAR",
mustWork = TRUE)
PS1 <- read.csv(data.file1, sep = '\t', header = FALSE)
data.file2 <- system.file("test_data", "parasaurolophusB.txt", package = "LOMAR",
mustWork = TRUE)
C1 <- diag(0.1, ncol(PS1)) + jitter(0.01, amount = 0.01)
C1 <- replicate(nrow(PS1),C1)
PS2 <- read.csv(data.file2, sep = '\t', header = FALSE)
C2 <- diag(0.1, ncol(PS2)) + jitter(0.01, amount = 0.01)
C2 <- replicate(nrow(PS2),C2)
transformation <- wgmmreg(PS1, PS2, C1, C2, subsample = 0.1, maxIter = 30, tol = 1e-4)
## Not run:
# Visualize registration outcome
library(rgl)
plot3d(PS1, col = "blue")
points3d(PS2, col = "green")
points3d(transformation[['Y']], col = "magenta")
## End(Not run)
useful | hex | Erlang | API Reference
===
[Modules](#modules)
---
[Useful](Useful.html)
A Library of [`Useful`](Useful.html#content) functions for building `Elixir` Apps.
Useful
===
A Library of [`Useful`](Useful.html#content) functions for building `Elixir` Apps.
[Summary](#summary)
===
[Functions](#functions)
---
[atomize\_map\_keys(value)](#atomize_map_keys/1)
[`atomize_map_keys/1`](#atomize_map_keys/1) converts a [`Map`](https://hexdocs.pm/elixir/Map.html) with different keys to a map with just atom keys. Works recursively for nested maps.
Inspired by stackoverflow.com/questions/31990134
[empty\_dir\_contents(dir)](#empty_dir_contents/1)
[`empty_dir_contents/1`](#empty_dir_contents/1) recursively deletes all files and directories inside a dir, but not the dir itself.
[flatten\_map(map)](#flatten_map/1)
[`flatten_map/1`](#flatten_map/1) flattens a [`Map`](https://hexdocs.pm/elixir/Map.html) of any depth/nesting for easier processing.
Deeply nested maps are denoted by "\_\_" (double underscore) e.g:
`%{name: Alex, detail: %{age: 17}}` becomes `%{name: Alex, detail__age: 17}`
this makes it easy to see what the data structure was before flattening.
Map keys are converted to Atom for simpler access and consistency.
Inspired by: <https://stackoverflow.com/questions/39401947/flatten-nested-map>
[get\_in\_default(map, keys, default \\ nil)](#get_in_default/3)
[`get_in_default/3`](#get_in_default/3) Proxies [`Kernel.get_in/2`](https://hexdocs.pm/elixir/Kernel.html#get_in/2)
but allows setting a `default` value as the 3rd argument.
[list\_tuples\_to\_unique\_keys(parts)](#list_tuples_to_unique_keys/1)
[`list_tuples_to_unique_keys/1`](#list_tuples_to_unique_keys/1) turns a list of tuples with the same key into a list of tuples with unique keys.
Useful when dealing with "multipart" forms that upload multiple files.
[remove\_item\_from\_list(list, item)](#remove_item_from_list/2)
[`remove_item_from_list/2`](#remove_item_from_list/2) removes a given `item` from a `list` in any position.
[stringify\_map(map)](#stringify_map/1)
[`stringify_map/1`](#stringify_map/1) converts a [`Map`](https://hexdocs.pm/elixir/Map.html) of any depth/nesting into a string.
Deeply nested maps are denoted by "\_\_" (double underscore). See flatten\_map/1 for more details.
Alphabetizes the keys for consistency. See: github.com/dwyl/useful/issues/56
[stringify\_tuple(arg)](#stringify_tuple/1)
[`stringify_tuple/1`](#stringify_tuple/1) stringifies a [`Tuple`](https://hexdocs.pm/elixir/Tuple.html) with arbitrary values.
Handy when you want to print out a tuple during debugging.
[typeof(x)](#typeof/1)
[`typeof/1`](#typeof/1) returns the type of a variable.
Inspired by stackoverflow.com/questions/28377135/check-typeof-variable-in-elixir
github.com/peterhellberg/xip.name | go | Go | README
[¶](#section-readme)
---
### xip.name
> The service was shut down on 2021-05-17 after Google marked the entire domain as “Social engineering content” since
> (working as intended) you could link to for example <http://1.1.1.1.xip.name/> and getting redirected to <http://1.1.1.1/>
[](https://travis-ci.org/peterhellberg/xip.name)
[](https://godoc.org/github.com/peterhellberg/xip.name)
[](https://github.com/peterhellberg/xip.name/raw/master/LICENSE)
A simple wildcard DNS inspired by xip.io (which seems to have been shut down now)
```
10.0.0.1.xip.name resolves to 10.0.0.1
www.10.0.0.2.xip.name resolves to 10.0.0.2
foo.10.0.0.3.xip.name resolves to 10.0.0.3
bar.baz.10.0.0.4.xip.name resolves to 10.0.0.4
```
#### How does it work?
xip.name runs a custom Domain Name Server which extracts any IP address found in the requested domain name and sends it back in the response.
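The extraction step can be sketched as follows. This is an illustrative sketch of the idea, not the server's actual code; `extractIP` and `ipPattern` are hypothetical names:

```go
package main

import (
	"fmt"
	"net"
	"regexp"
)

// ipPattern matches a dotted-quad embedded anywhere in a hostname.
var ipPattern = regexp.MustCompile(`\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}`)

// extractIP pulls the first IPv4-looking substring out of a queried name
// and validates it; it returns nil when the name contains no address.
func extractIP(name string) net.IP {
	return net.ParseIP(ipPattern.FindString(name))
}

func main() {
	for _, q := range []string{
		"10.0.0.1.xip.name.",
		"bar.baz.10.0.0.4.xip.name.",
		"no-ip-here.xip.name.",
	} {
		fmt.Println(q, "->", extractIP(q))
	}
}
```

The real server then places the extracted address in the A record of the DNS response.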
#### Credits
xip.name is built on top of [Miek](http://miek.nl)’s lovely [dns package](https://github.com/miekg/dns) for Go.
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
xip.name is a small name server which sends back any IP address found in the provided hostname.
When queried for type A, it sends back the parsed IPv4 address.
In the additional section the client host:port and transport are shown.
Basic use pattern:
```
dig @xip.name foo.10.0.0.82.xip.name A
; <<>> DiG 9.8.3-P1 <<>> @xip.name foo.10.0.0.82.xip.name A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13574
;; flags: qr rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;foo.10.0.0.82.xip.name. IN A
;; ANSWER SECTION:
foo.10.0.0.82.xip.name. 0 IN A 10.0.0.82
;; ADDITIONAL SECTION:
xip.name. 0 IN TXT "Client: 188.126.74.76:52575 (udp)"
;; Query time: 27 msec
;; SERVER: 188.166.43.179#53(188.166.43.179)
;; WHEN: Wed Dec 31 02:55:51 2014
;; MSG SIZE rcvd: 128
```
Initially based on the reflect example found at <https://github.com/miekg/exdns>
github.com/fraunhofer-aisec/cpg | go | Go | README
[¶](#section-readme)
---
### Code Property Graph
[](https://github.com/Fraunhofer-AISEC/cpg/actions)
[](https://sonarcloud.io/dashboard?id=Fraunhofer-AISEC_cpg) [](https://sonarcloud.io/dashboard?id=Fraunhofer-AISEC_cpg) [](https://sonarcloud.io/dashboard?id=Fraunhofer-AISEC_cpg) [](https://jitpack.io/#Fraunhofer-AISEC/cpg)
A simple library to extract a *code property graph* out of source code. It has support for multiple passes that can extend the analysis after the graph is constructed. It currently supports C/C++ (C17), Java (Java 13) and has experimental support for Golang, Python and TypeScript. Furthermore, it has support for the [LLVM IR](http://llvm.org/docs/LangRef.html) and thus, theoretically support for all languages that compile using LLVM.
#### What is this?
A code property graph (CPG) is a representation of source code in the form of a labelled directed multi-graph. Think of it as a directed graph where each node and edge is assigned a (possibly empty) set of key-value pairs (*properties*). This representation is supported by a range of graph databases such as Neptune, Cosmos, Neo4j, Titan, and Apache Tinkergraph and can be used to store source code of a program in a searchable data structure. Thus, the code property graph makes it possible to use existing graph query languages such as Cypher, NQL, SQL, or Gremlin in order to either manually navigate through interesting parts of the source code or to automatically find "interesting" patterns.
This library uses [Eclipse CDT](https://www.eclipse.org/cdt/) for parsing C/C++ source code [JavaParser](https://javaparser.org/) for parsing Java. In contrast to compiler AST generators, both are "forgiving" parsers that can cope with incomplete or even semantically incorrect source code. That makes it possible to analyze source code even without being able to compile it (due to missing dependencies or minor syntax errors). Furthermore, it uses [LLVM](https://llvm.org) through the [javacpp](https://github.com/bytedeco/javacpp) project to parse LLVM IR. Note that the LLVM IR parser is *not* forgiving, i.e., the LLVM IR code needs to be at least considered valid by LLVM. The necessary native libraries are shipped by the javacpp project for most platforms.
#### Specifications
In order to improve some formal aspects of our library, we created several specifications of our core concepts. Currently, the following specifications exist:
* [Dataflow Graph](https://fraunhofer-aisec.github.io/cpg/CPG/specs/dfg/)
* [Evaluation Order Graph](https://fraunhofer-aisec.github.io/cpg/CPG/specs/eog/)
* [Graph Model in neo4j](https://fraunhofer-aisec.github.io/cpg/CPG/specs/graph/)
* [Language and Language Frontend](https://fraunhofer-aisec.github.io/cpg/CPG/impl/language/)
We aim to provide more specifications over time.
#### Usage
To build the project from source, you have to generate a `gradle.properties` file locally.
This file also enables and disables the supported programming languages.
We provide a sample file [here](https://github.com/fraunhofer-aisec/cpg/blob/v7.1.2/gradle.properties.example) - simply copy it to `gradle.properties` in the directory of the cpg-project.
Instead of manually generating or editing the `gradle.properties` file, you can also use the `configure_frontends.sh` script, which edits the properties setting the supported programming languages for you.
##### For Visualization Purposes
In order to get familiar with the graph itself, you can use the subproject [cpg-neo4j](https://github.com/Fraunhofer-AISEC/cpg/tree/master/cpg-neo4j). It uses this library to generate the CPG for a set of user-provided code files. The graph is then persisted to a [Neo4j](https://neo4j.com/) graph database. The advantage this has for the user, is that Neo4j's visualization software [Neo4j Browser](https://neo4j.com/developer/neo4j-browser/) can be used to graphically look at the CPG nodes and edges, instead of their Java representations.
##### As Library
The most recent version is being published to Maven central and can be used as a simple dependency, either using Maven or Gradle. Since Eclipse CDT is not published on maven central, it is necessary to add a repository with a custom layout to find the released CDT files. For example, using Gradle's Kotlin syntax:
```
repositories {
ivy {
setUrl("https://download.eclipse.org/tools/cdt/releases/11.0/cdt-11.0.0/plugins")
metadataSources {
artifact()
}
patternLayout {
artifact("/[organisation].[module]_[revision].[ext]")
}
}
}
dependencies {
var cpgVersion = "5.1.0"
// if you want to include all published cpg modules
implementation("de.fraunhofer.aisec", "cpg", cpgVersion)
// if you only want to use some of the cpg modules
// use the 'cpg-core' module
// and then add the needed extra modules, such as Go and Python
implementation("de.fraunhofer.aisec", "cpg-core", cpgVersion)
implementation("de.fraunhofer.aisec", "cpg-language-go", cpgVersion)
implementation("de.fraunhofer.aisec", "cpg-language-python", cpgVersion)
}
```
Beware, that the `cpg` module includes all optional features and might potentially be HUGE (especially because of the LLVM support). If you do not need LLVM, we suggest just using the `cpg-core` module with the needed extra modules like `cpg-language-go`. In the future we are working on extracting more optional modules into separate modules.
###### Development Builds
A published artifact of every commit can be requested through [JitPack](https://jitpack.io/#Fraunhofer-AISEC/cpg). This is especially useful, if your external project makes use of a specific feature that is not yet merged in yet or not published as a version yet. Please follow the instructions on the JitPack page. Please be aware, that similar to release builds, the CDT repository needs to be added as well (see above).
##### On Command Line
The library can be used on the command line using the `cpg-console` subproject. Please refer to the [README.md](https://github.com/fraunhofer-aisec/cpg/blob/v7.1.2/cpg-console/README.md) of the `cpg-console` as well as our small [tutorial](https://github.com/fraunhofer-aisec/cpg/blob/v7.1.2/tutorial.md) for further details.
##### Configuration
The behavior of the library can be configured in several ways. Most of this is done through the `TranslationConfiguration`
and the `InferenceConfiguration`.
###### TranslationConfiguration
The `TranslationConfiguration` configures various aspects of the translation. E.g., it determines which languages/language frontends and passes will be used, which information should be inferred, which files will be included, among others. The configuration is set through a builder pattern.
###### InferenceConfiguration
The class `InferenceConfiguration` can be used to affect the behavior or the passes if they identify missing nodes.
Currently, there are three flags which can be enabled:
* `guessCastExpression` enables guessing if a CPP expression is a cast or a call expression if it is not clear.
* `inferRecords` enables the inference of missing record declarations (i.e., classes and structs)
* `inferDfgForUnresolvedSymbols` adds DFG edges to method calls represent all potential data flows if the called function is not present in the source code under analysis.
Only `inferDfgForUnresolvedSymbols` is turned on by default.
The configuration can be made through a builder pattern and is set in the `TranslationConfiguration` as follows:
```
val inferenceConfig = InferenceConfiguration
.builder()
.guessCastExpression(true)
.inferRecords(true)
.inferDfgForUnresolvedSymbols(true)
.build()
val translationConfig = TranslationConfiguration
.builder()
.inferenceConfiguration(inferenceConfig)
.build()
```
#### Development Setup
##### Experimental Languages
Some languages, such as Golang are experimental and depend on other native libraries. Therefore, they are not included as gradle submodules by default.
To include them as submodules simply toggle them on in your local `gradle.properties` file by setting the value of the properties to `true` e.g., (`enableGoFrontend=true`).
We provide a sample file [here](https://github.com/fraunhofer-aisec/cpg/blob/v7.1.2/gradle.properties.example).
Instead of manually editing the `gradle.properties` file, you can also use the `configure_frontends.sh` script, which edits the properties for you.
###### Golang
In the case of Golang, the necessary native code can be found in the `src/main/golang` folder of the `cpg-language-go` submodule. Gradle should automatically find JNI headers and stores the finished library in the `src/main/golang` folder. This currently only works for Linux and macOS. In order to use it in an external project, the resulting library needs to be placed somewhere in `java.library.path`.
###### Python
You need to install [jep](https://github.com/ninia/jep/). This can either be system-wide or in a virtual environment. Your jep version has to match the version used by the CPG (see [version catalog](https://github.com/fraunhofer-aisec/cpg/blob/v7.1.2/gradle/libs.versions.toml)).
Currently, only Python 3.{9,10,11,12} is supported.
System Wide: Follow the instructions at <https://github.com/ninia/jep/wiki/Getting-Started#installing-jep>.
Virtual Env:
* `python3 -m venv ~/.virtualenvs/cpg`
* `source ~/.virtualenvs/cpg/bin/activate`
* `pip3 install jep`
Through the `JepSingleton`, the CPG library will look for well known paths on Linux and OS X. `JepSingleton` will prefer a virtualenv with the name `cpg`, this can be adjusted with the environment variable `CPG_PYTHON_VIRTUALENV`.
###### TypeScript
For parsing TypeScript, the necessary NodeJS-based code can be found in the `src/main/nodejs` directory of the `cpg-language-typescript` submodule. Gradle should build the script automatically, provided NodeJS (>=16) is installed. The bundles script will be placed inside the jar's resources and should work out of the box.
##### Code Style
We use [Google Java Style](https://github.com/google/google-java-format) as a formatting. Please install the appropriate plugin for your IDE, such as the [google-java-format IntelliJ plugin](https://plugins.jetbrains.com/plugin/8527-google-java-format) or [google-java-format Eclipse plugin](https://github.com/google/google-java-format/releases/download/google-java-format-1.6/google-java-format-eclipse-plugin_1.6.0.jar).
##### Integration into IntelliJ
Straightforward, however three things are recommended
* Enable gradle "auto-import"
* Enable google-java-format
* Hook gradle spotlessApply into "before build" (might be obsolete with IDEA 2019.1)
##### Git Hooks
You can use the hook in `style/pre-commit` to check for formatting errors:
```
cp style/pre-commit .git/hooks
```
#### Contributors
The following authors have contributed to this project (in alphabetical order):
* [fwendland](https://github.com/fwendland)
* [JulianSchuette](https://github.com/JulianSchuette)
* [konradweiss](https://github.com/konradweiss)
* [KuechA](https://github.com/KuechA)
* [Masrepus](https://github.com/Masrepus)
* [maximiliankaul](https://github.com/maximiliankaul)
* [maximilian-galanis](https://github.com/maximilian-galanis)
* [obraunsdorf](https://github.com/obraunsdorf)
* [oxisto](https://github.com/oxisto)
* [peckto](https://github.com/peckto)
* [titze](https://github.com/titze)
* [vfsrfs](https://github.com/vfsrfs)
#### Contributing
We are currently discussing the implementation of a Contributor License Agreement (CLA). Unfortunately,
we cannot merge external pull requests until this issue is resolved.
#### Further reading
A quick write-up of our CPG has been published on arXiv:
[1] <NAME>, <NAME>. A Language-Independent Analysis Platform for Source Code. <https://arxiv.org/abs/2203.08424>

A preliminary version of this cpg has been used to analyze ARM binaries of iOS apps:

[2] <NAME>, <NAME>. *liOS: Lifting iOS Apps for Fun and Profit.* Proceedings of the ESORICS International Workshop on Secure Internet of Things (SIoT), Luxembourg, 2019. <https://arxiv.org/abs/2003.12901>

An initial publication on the concept of using code property graphs for static analysis:

[3] Yamaguchi et al. - Modeling and Discovering Vulnerabilities with Code Property Graphs. <https://www.sec.cs.tu-bs.de/pubs/2014-ieeesp.pdf>

[4] is an unrelated, yet similar project by the authors of the above publication, which is used by the open source software Joern [5] for analysing C/C++ code. While [4] is a specification and implementation of the data structure, this project includes various *Language frontends* (currently C/C++ and Java, with Python to come) and allows creating custom graphs by configuring *Passes* which extend the graph as necessary for a specific analysis:

[4] <https://github.com/ShiftLeftSecurity/codepropertygraph>

[5] <https://github.com/ShiftLeftSecurity/joern/>

Additional extensions of the CPG into the field of Cloud security:

[6] <NAME>, <NAME>, <NAME> and <NAME>. Cloud Property Graph: Connecting Cloud Security Assessments with Static Code Analysis. IEEE CLOUD 2021. <https://doi.org/10.1109/CLOUD53861.2021.00014>
FitBenchmarking 0.1.dev1 documentation
Welcome to FitBenchmarking’s documentation
===
FitBenchmarking[](#fitbenchmarking)
---
FitBenchmarking is an open source tool for comparing different minimizers/fitting frameworks. FitBenchmarking is cross platform and we support Windows, Linux and Mac OS. For questions, feature requests or any other inquiries, please open an issue on GitHub, or send us an e-mail at [<EMAIL>](mailto:<EMAIL>).
* **Installation Instructions:** <https://fitbenchmarking.readthedocs.io/en/v1.0.0-rc1/users/install_instructions/index.html>
* **User Documentation & Example Usage:** <https://fitbenchmarking.readthedocs.io/en/v1.0.0-rc1/users/index.html>
* **Community Guidelines:** <https://fitbenchmarking.readthedocs.io/en/v1.0.0-rc1/contributors/guidelines.html>
* **Automated Tests:** Run via GitHub Actions, <https://github.com/fitbenchmarking/fitbenchmarking/actions>, and tests are documented at <https://fitbenchmarking.readthedocs.io/en/v1.0.0-rc1/users/tests.html>

The package is the result of a collaboration between STFC’s Scientific Computing Department, the ISIS Neutron and Muon Facility, and the Diamond Light Source. We also would like to acknowledge support from:
* EU SINE2020 WP-10, which received funding from the European Union’s Horizon2020 research and innovation programme under grant agreement No 654000.
* EPSRC Grant EP/M025179/1 Least Squares: Fit for the Future.
* The Ada Lovelace Centre (ALC). ALC is an integrated, cross-disciplinary data intensive science centre, for better exploitation of research carried out at our large scale National Facilities including the Diamond Light Source (DLS), the ISIS Neutron and Muon Facility, the Central Laser Facility (CLF) and the Culham Centre for Fusion Energy (CCFE).
### Table Of Contents[](#table-of-contents)
#### FitBenchmarking Concept Documentation[](#fitbenchmarking-concept-documentation)
Here we outline why we built the fitbenchmarking software, and how the software benchmarks minimizers with the goal of highlighting the best tool for different types of data.
##### Why is FitBenchmarking important?[](#why-is-fitbenchmarking-important)
Fitting a mathematical model to data is a fundamental task across all scientific disciplines. (At least) three groups of people have an interest in fitting software:
* **Scientists**, who want to know what is the best algorithm for fitting their model to data they might encounter, on their specific hardware;
* **Scientific software developers**, who want to know what is the state-of-the-art in fitting algorithms and implementations,
what they should recommend as their default solver, and if they should implement a new method in their software; and
* **Mathematicians** and **numerical software developers**, who want to understand the types of problems on which current algorithms do not perform well,
and to have a route to expose newly developed methods to users.
Representatives of each of these communities have got together to build FitBenchmarking.
We hope this tool will help foster fruitful interactions and collaborations across the disciplines.
###### Example workflow[](#example-workflow)
The black crosses on the plot below are data obtained from an experiment at the VESUVIO beamline at ISIS Neutron and Muon source:
VESUVIO experiment data[](#id1)
The scientist needs to interpret this data, and will typically use a data analysis package to help with this. Such packages are written by specialist scientific software developers, who are experts in analysing the kind of data produced by a given experiment;
examples include [Mantid](https://mantidproject.org/),
[SasView](https://www.sasview.org), and [Horace](https://horace.isis.rl.ac.uk).
These packages include mathematical models, which depend on parameters,
that can describe the data.
We need to find values for the parameters in these models which best fit the data – for more background, see this [Wikipedia article](https://en.wikipedia.org/wiki/Goodness_of_fit).
The usual way this is done is by finding parameters that minimize the
(weighted) squares of the error in the data, or \(\chi^2\) value.
This is equivalent to formulating a nonlinear least-squares problem; specifically, given \(n\) data points
\((x_i, y_i)\) (the crosses in the figure above), together with estimates of the errors on the values of \(y_i\), \(\sigma_i\), we solve
\[{\boldsymbol{\beta}}^* = \arg \min_{{\boldsymbol{\beta}}} \underbrace{\sum_i \left( \frac{y_i - f({\boldsymbol{\beta}};x_i)}{\sigma_i} \right)^2}_{\chi^2({\boldsymbol{\beta}})},\label{eq:chi2}\]
where \(f({\boldsymbol{\beta}};x)\) is the model we’re trying to fit, and \(\boldsymbol{\beta}\) are the parameters we’re trying to find.
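The formulation above can be sketched in a few lines of Python. This is an illustrative example using SciPy's `least_squares`; the exponential model, data, and error estimates are invented for illustration (they are not the VESUVIO data):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from an illustrative exponential-decay model.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
beta_true = (2.5, 0.7)
sigma = 0.05 * np.ones_like(x)                  # estimated errors on y_i
y = beta_true[0] * np.exp(-beta_true[1] * x) + rng.normal(0.0, sigma)

def model(beta, x):
    return beta[0] * np.exp(-beta[1] * x)

def weighted_residuals(beta):
    # r_i = (y_i - f(beta; x_i)) / sigma_i; chi^2 is the sum of r_i**2
    return (y - model(beta, x)) / sigma

beta0 = np.array([1.0, 1.0])                    # starting guess beta_0
result = least_squares(weighted_residuals, beta0)
chi2 = np.sum(result.fun ** 2)
print(result.x, chi2)
```

FitBenchmarking automates exactly this kind of fit across many minimizers and problem sets, recording the accuracy and runtime of each.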
Usually the scientist will supply a starting guess,
\({\boldsymbol{\beta}}_0\) (the pink curve in the graph above),
which describes where they think the solution might be.
She then has to *choose which algorithm to use to fit the curve*
from the selection available in the analysis software.
Different algorithms may be more or less suited to a problem, depending on factors such as the architecture of the machine, the availability of first and second derivatives, the amount of data, the type of model used, etc.
Below we show the data overlaid by a blue curve, which is a model fitted using the implementation of the Levenberg-Marquardt algorithm from the GNU Scientific Library (`lmsder`).
The algorithm claims to have found a local minimum with a Chi-squared error of 0.4771 in 1.9 seconds.
GSL’s `lmsder` (Levenberg-Marquardt) algorithm on the data[](#id2)
We also solved the nonlinear least squares problem using GSL’s implementation of a Nelder-Mead simplex algorithm (`nmsimplex2`), which again claimed to solve the problem, this time in a faster 1.5 seconds. However, this time the Chi-squared error was 0.8505, and we plot the curve obtained in green below. The previous curve is in dotted-blue, for comparison.
GSL’s `nmsimplex2` (Nelder-Mead Simplex) algorithm on the data[](#id3)
By eye it is clear that the solution given by `lmsder` is better.
As the volume of data increases, and we do more and more data analysis algorithmically, it is increasingly important that we have the best algorithm without needing to check it by eye.
FitBenchmarking will help the **scientist** make an informed choice by comparing runtime and accuracy of all available minimizers, on their specific hardware, on problems from their science area, which will ensure they are using the most appropriate minimizer.
FitBenchmarking will help the **scientific software developer** ensure that the most robust and quickest algorithms for the type of data analysis they support are available in their software.
FitBenchmarking will help **mathematicians** see what the state of the art is, and what kinds of data are problematic. It will give them access to real data, and will give a route for novel methods to quickly make it into production.
A workflow as described above plays a crucial role in the processing and analysis of data at large research facilities in tasks as diverse as instrument calibration, refinement of structures, and data analysis methods specific to different scientific techniques. FitBenchmarking will ensure that, across all areas that utilise least-squares fitting, scientists can be confident they are using the best tool for the job.
We discuss the specific FitBenchmarking paradigm in the Section [How does FitBenchmarking work?](index.html#how)
##### How does FitBenchmarking work?[](#how-does-fitbenchmarking-work)
FitBenchmarking takes data and models from real world applications and data analysis packages.
It fits the data to the models by casting them as a nonlinear least-squares problem.
We fit the data using a range of data fitting and nonlinear optimization software, and present comparisons on the accuracy and timings.
###### The Benchmarking Paradigm[](#the-benchmarking-paradigm)
FitBenchmarking can compare against any of the supported minimizers listed in
[Minimizer Options](index.html#minimizer-option). We’ve also made it straightforward to add new software by following the instructions in [Adding Fitting Software](index.html#controllers) – the software just needs to be callable from Python.
Once you have chosen which minimizers you want to compare for a given problem,
running FitBenchmarking will give you a comparison to indicate the minimizer that performs best.
There are a number of options that you can pick to customize what your tests are comparing, or how they are run. A full list of these options, and how to select them, is given in the section [FitBenchmarking Options](index.html#options).
FitBenchmarking creates tables, as given in the section [FitBenchmarking Output](index.html#output),
which show a comparison between the different minimizers available.
An example of a table is:
This is the result of FitBenchmarking for a selection of software/minimizers and different problem definition types supported in FitBenchmarking.
Both the raw chi squared values, and the values normalised with respect to the best minimizer per problem, are given.
The problem names link to html pages that display plots of the data and the fit that was performed, together with initial and final values of the parameters. Here is an example of the final plot fit:
####### Performance Profile[](#performance-profile)
With each test FitBenchmarking also produces a Dolan-Moré performance profile:
Fits are taken from all benchmarks, so if FitBenchmarking is run with
`n` problems and `m` cost functions, the resulting profile plots will have
`n*m` steps on the y-axis.
The solvers appearing in the top left corner may be considered the best performing on this test set.
See [Dolan and Moré (2001)](https://link.springer.com/article/10.1007/s101070100263)
for more information.
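The construction of a Dolan-Moré profile can be sketched as follows, using a hypothetical matrix of solver runtimes (the numbers are invented; FitBenchmarking builds the real matrix from its benchmark runs):

```python
import numpy as np

# Hypothetical runtimes: rows = problems, columns = solvers (np.inf marks a failure).
T = np.array([[1.0, 2.0, 4.0],
              [3.0, 1.5, np.inf],
              [2.0, 2.0, 2.0]])

# Performance ratio: each solver's time divided by the best time on that problem.
ratios = T / T.min(axis=1, keepdims=True)

def profile(ratios, tau):
    """Fraction of problems each solver solves within a factor tau of the best."""
    return (ratios <= tau).mean(axis=0)

print(profile(ratios, 1.0))  # fraction of problems on which each solver is (joint) fastest
print(profile(ratios, 2.0))
```

Plotting `profile(ratios, tau)` against `tau` for each solver gives the performance-profile curves shown in FitBenchmarking's output.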
#### FitBenchmarking User Documentation[](#fitbenchmarking-user-documentation)
In these pages we describe how to install, run, and set options to use the FitBenchmarking software.
##### Installation[](#installation)
Fitbenchmarking will install all packages that can be installed through pip.
This includes the minimizers from SciPy, bumps, DFO-GN/LS, Minuit, and also the SASModels package.
To enable Fitbenchmarking with the other supported software,
you must install them with the external software instructions.
###### Installing FitBenchmarking and default fitting packages[](#installing-fitbenchmarking-and-default-fitting-packages)
We recommend using a virtual environment for running/installing Fitbenchmarking.
The easiest way to install FitBenchmarking is by using the Python package manager,
[pip](https://pip.pypa.io/en/stable/).
####### Installing via `pip`[](#installing-via-pip)
FitBenchmarking can be installed via the command line by entering:
```
python -m pip install fitbenchmarking[bumps,DFO,gradient_free,minuit,SAS,numdifftools,lmfit,nlopt]
```
This will install the latest stable version of FitBenchmarking.
For all available versions please visit the FitBenchmarking
[PyPI project](https://pypi.org/project/fitbenchmarking/).
FitBenchmarking can also use additional software that cannot be installed using pip; please see [Installing External Software](index.html#external-instructions) for details.
Note
This install will include additional optional packages –
see [Extra dependencies](#extra-dependencies).
Any of the dependencies in the square brackets can be omitted, if required,
and that package will not be available for Benchmarking, or will use the version of the package already on your system, if appropriate.
####### Installing from source[](#installing-from-source)
You may instead wish to install from source, e.g., to get the very latest version of the code that is still in development.
1. Download this repository or clone it using
[git](https://git-scm.com/):
`git clone https://github.com/fitbenchmarking/fitbenchmarking.git`
2. Open up a terminal (command prompt) and go into the
`fitbenchmarking` directory.
3. Once you are in the right directory, we recommend that you type
```
python -m pip install .[bumps,DFO,gradient_free,minuit,SAS,numdifftools,lmfit,nlopt]
```
4. Additional software that cannot be installed via pip can also be used with FitBenchmarking. Follow the instructions at
[Installing External Software](index.html#external-instructions).
####### Extra dependencies[](#extra-dependencies)
In addition to the external packages described at [Installing External Software](index.html#external-instructions),
some optional dependencies can be installed directly by FitBenchmarking.
These are installed by issuing the commands
```
python -m pip install fitbenchmarking['option-1','option-2',...]
```
or
```
python -m pip install .['option-1','option-2',...]
```
where valid strings `option-x` are:
* `bumps` – installs the [Bumps](https://bumps.readthedocs.io) fitting package.
* `DFO` – installs the [DFO-LS](http://people.maths.ox.ac.uk/robertsl/dfols/userguide.html) and [DFO-GN](http://people.maths.ox.ac.uk/robertsl/dfogn/userguide.html) fitting packages.
* `gofit` – installs the [GOFit](https://github.com/ralna/GOFit) fitting package.
* `gradient_free` – installs the [Gradient-Free-Optimizers](https://github.com/SimonBlanke/Gradient-Free-Optimizers) fitting package
* `levmar` – installs the [levmar](http://users.ics.forth.gr/~lourakis/levmar/) fitting package (suitable for Python up to 3.8, see [Levmar](index.html#levmar-install)). Note that the interface we use also requires BLAS and LAPACK to be installed on the system, and calls to this minimizer will fail if these libraries are not present.
* `mantid` – installs the [h5py](https://pypi.org/project/h5py/) and [pyyaml](https://pypi.org/project/PyYAML/) modules.
* `matlab` – installs the [dill](https://pypi.org/project/dill/) module required to run matlab controllers in fitbenchmarking
* `minuit` – installs the [Minuit](http://seal.web.cern.ch/seal/snapshot/work-packages/mathlibs/minuit/) fitting package.
* `SAS` – installs the [Sasmodels](https://github.com/SasView/sasmodels) fitting package and the [tinycc](https://pypi.org/project/tinycc/) module.
* `numdifftools` – installs the [numdifftools](https://numdifftools.readthedocs.io/en/latest/index.html) numerical differentiation package.
* `nlopt` – installs the [NLopt](https://github.com/DanielBok/nlopt-python#installation) fitting package.
* `lmfit` – installs the [LMFIT](https://lmfit.github.io/lmfit-py/installation.html) and [emcee](https://emcee.readthedocs.io/en/stable/user/install/) fitting packages.
###### Installing External Software[](#installing-external-software)
Fitbenchmarking will install all packages that are available through pip.
To enable Fitbenchmarking with the other supported software,
they need to be installed and available on your machine. We give pointers outlining how to do this below, and you can find install scripts for Ubuntu 18.04 in the directory /build/<software>/
####### Ceres Solver[](#ceres-solver)
Ceres Solver is used as a fitting software in FitBenchmarking, and is called via the PyCeres interface.
Install instructions can be found on the [PyCeres](https://github.com/Edwinem/ceres_python_bindings#recommended-build-alongside-ceres) Github page and
[Ceres Solver documentation](http://ceres-solver.org/installation.html)
Please note that the `PYCERES_LOCATION` environment variable must be set.
####### CUTEst[](#cutest)
CUTEst is used to parse SIF files in FitBenchmarking, and is called via the PyCUTEst interface.
Currently this is only supported for Mac and Linux, and can be installed by following the instructions outlined on the [pycutest documentation](https://jfowkes.github.io/pycutest/_build/html/install.html)
Please note that the `PYCUTEST_CACHE` environment variable must be set, and it must be in the `PYTHONPATH`.
####### GSL[](#gsl)
GSL is used as a fitting software in FitBenchmarking, and is called via the pyGSL interface.
Install instructions can be found at the [pyGSL docs](http://pygsl.sourceforge.net/).
This package is also installable via pip, provided GSL is available on your system;
see our example build script in build/gsl.
Note: pyGSL may not be installable with the latest versions of pip. We have found that 20.0.2 works for our tests.
####### Horace[](#horace)
Horace can be installed by following the instructions [on the Horace website](https://pace-neutrons.github.io/Horace/3.6.0/Download_and_setup.html).
In addition, MATLAB and the MATLAB engine must be installed following the
[instructions given below](#matlab-install).
####### Levmar[](#levmar)
Levmar is available on pip, however the latest release will only work up to Python 3.8.
In order to use Levmar with newer versions of python, you are required to clone the repository and build it locally. This is fast and has been tested on Ubuntu 20.04.
Instructions for Linux are below; for other operating systems the process will be the same.

```
git clone [email protected]:bjodah/levmar.git
cd levmar
pip install .
cd ..
rm -rf levmar
```
####### Mantid[](#mantid)
Mantid is used both as fitting software, and to parse data files.
Instructions on how to install Mantid for a range of systems are available at <https://download.mantidproject.org/>.
####### MATLAB[](#matlab)
MATLAB is available to use as fitting software in FitBenchmarking, and is called via the MATLAB Engine API for Python.
To use this fitting software, both MATLAB and the MATLAB engine must be installed. Installation instructions for MATLAB are available at
<https://uk.mathworks.com/help/install/ug/install-products-with-internet-connection.html>,
and instructions for installing and setting up the MATLAB engine are here: <https://uk.mathworks.com/help/matlab/matlab_external/install-the-matlab-engine-for-python.html>.
Furthermore, MATLAB requires additional Python packages to be installed. You can find instructions on how to install these packages [here](index.html#extra-dependencies).
####### RALFit[](#ralfit)
RALFit is available to use as fitting software.
Instructions on how to build the python interface are at <https://ralfit.readthedocs.io/projects/Python/en/latest/install.html>

####### Theseus[](#theseus)
Theseus is used as a fitting software in FitBenchmarking, and is called via the theseus-ai Python module, which requires PyTorch.
Install instructions can be found on the [Theseus Github page](https://github.com/facebookresearch/theseus#getting-started/).
##### Running FitBenchmarking[](#running-fitbenchmarking)
Once installed, issuing the command
```
fitbenchmarking
```
will run the NIST average difficulty problem set on SciPy minimizers.
###### Running alternative problems[](#running-alternative-problems)
Other problems written in a [supported file format](index.html#problem-def)
can be analyzed with FitBenchmarking by passing the path using the `-p` or `--problem-sets` argument.
Example problems can be downloaded from
[Benchmark problems](index.html#benchmarkproblems), and they can also be found in the
`fitbenchmarking/examples` directory of the code.
For example, to run the NIST low difficulty set from the base directory of the source, type into the terminal:
```
fitbenchmarking -p examples/benchmark_problems/NIST/low_difficulty
```
###### Changing the options[](#changing-the-options)
An options file can also be passed with the `-o` or `--options-file` argument.
For example, the template file can be run by issuing the command
```
fitbenchmarking -o examples/options_template.ini \
-p examples/benchmark_problems/NIST/low_difficulty
```
Details about how the options file must be formatted are given in [FitBenchmarking Options](index.html#options).
###### Changing options via the command line[](#changing-options-via-the-command-line)
It is possible to change the following options via the command line rather than from an `.ini` file or from the default options.
They can be changed using the arguments in the table below.
**For example, to change the results directory:**
The default directory where the results are saved can be changed using the `-r`
or `--results-dir` argument. The [results directory option](index.html#results-directory-option)
can also be changed in the options file.
```
fitbenchmarking -r new_results/
```
The default results directory is `fitbenchmarking_results`.
**Multiple options**
For an option for which you wish to make several choices e.g. `table_type`, simply use a space to separate your choices:
```
fitbenchmarking -t acc runtime
```
If you wish to change several different options, use a space to separate the arguments:
```
fitbenchmarking -t acc -l WARNING
```
**Help**
For more information on changing options via the command line, you can use the `-h`
or `--help` argument.
```
fitbenchmarking -h
```
##### Optimization Algorithms[](#optimization-algorithms)
The different minimizers used in Fitbenchmarking implement various numerical optimization algorithms. These are categorised by the algorithm type list, with options detailed below, along with the advantages and disadvantages of each algorithm.
###### Derivative Free[](#derivative-free)
Derivative Free methods do not compute the gradient of a function and so are often used to minimize problems with nondifferentiable functions. Some derivative free methods will attempt to approximate the gradient using a finite difference approach; another class of methods constructs a linear or quadratic model of the objective function and uses a trust region approach to find the next iterate. Another widely used derivative free method is the Nelder-Mead simplex method. [[Nocedal]](index.html#nocedal)
To select all minimizers in fitbenchmarking that use derivative free methods, use the algorithm type `deriv_free`.
####### Simplex (`simplex`)[](#simplex-simplex)
Nelder-Mead is a simplex based algorithm, with a simplex \(S\) in \(\mathbb{R}^n\) being defined as the convex hull of \(n+1\) vertices \(\{x_1, ..., x_{n+1}\}\).
In an iteration of the algorithm, the idea is to remove the vertex with the worst function value. It is then replaced with another point with a better value. An iteration consists of the following steps:
1. **Ordering** the vertices of \(S\) so that \(f(x_1) \leq f(x_2) \leq ... \leq f(x_{n+1})\)
2. Calculating the **centroid**, \(\bar{x}\) of the best \(n\) points \(\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i\)
3. Carry out a **transformation** to compute the new simplex. Try to replace only the worst vertex \(x_{n+1}\) with a better point, which then becomes the new vertex. If this fails, the simplex is shrunk towards the best vertex \(x_1\) and \(n\) new vertices are computed.
The algorithm terminates when the simplex \(S\) is sufficiently small. [[Singer]](index.html#singer)
**Advantages**:

* Method only requires 1 or 2 function evaluations per iteration.
* Gives significant improvements in the first few iterations - quick to produce satisfactory results.

**Disadvantages**:

* Stagnation can occur at non-optimal points, with large numbers of iterations leading to negligible improvement even when nowhere near a minimum.
* If numerical computation of a function derivative can be trusted, then other algorithms are more robust.
[[Singer]](index.html#singer)
[Nocedal](#id1)
<NAME>, <NAME> (2006), Numerical Optimization
Singer([1](#id2),[2](#id3))
<NAME>, <NAME> (2009) Nelder-Mead algorithm. Scholarpedia, 4(7):2928.
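The iteration described above is what, for example, SciPy's `Nelder-Mead` method implements. A minimal illustrative run on the Rosenbrock test function (not a FitBenchmarking problem) looks like:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: a standard 2-D test problem with minimum at (1, 1).
def rosenbrock(z):
    return (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2

result = minimize(rosenbrock, x0=[-1.2, 1.0], method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})
print(result.x)  # close to [1, 1]
```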
###### Line Search Methods[](#line-search-methods)
In line search methods, each iteration is given by \(x_{k+1} = x_k + \alpha_k p_k\), where \(p_k\) is the search direction and \(\alpha_k\) is the step length.
The search direction is often of the form \(p_k = -B_k^{-1} \nabla f_k\) where \(B_k\) is a symmetric and non-singular matrix. The form of \(p_k\) is dependent on algorithm choice.
The ideal step length would be \(\min_{\alpha>0} f(x_k + \alpha p_k)\), but this is generally too expensive to calculate. Instead, an inexact line search condition such as the Wolfe Conditions can be used:
\[\begin{split}f(x_k + \alpha p_k) \leq f(x_k) + c_1 \alpha \nabla f_k^T p_k \\
\nabla f(x_k + \alpha_k p_k)^T p_k \geq c_2 \nabla f_k^T p_k\end{split}\]
With \(0<c_1<c_2<1\). Here, the first condition ensures that \(\alpha_k\) gives a sufficient decrease in \(f\), whilst the second condition rules out unacceptably short steps. [[Nocedal]](index.html#nocedal)
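Checking the two Wolfe conditions at a trial step length can be sketched as follows; the quadratic objective, point, and step here are illustrative:

```python
import numpy as np

def wolfe_conditions(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Return (sufficient_decrease, curvature) for step length alpha along p."""
    fx, gx = f(x), grad(x)
    x_new = x + alpha * p
    sufficient_decrease = f(x_new) <= fx + c1 * alpha * (gx @ p)
    curvature = grad(x_new) @ p >= c2 * (gx @ p)
    return sufficient_decrease, curvature

# Simple quadratic f(x) = 0.5 x^T x with the steepest-descent direction.
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x = np.array([4.0, -2.0])
p = -grad(x)
print(wolfe_conditions(f, grad, x, p, alpha=0.5))
```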
####### Steepest Descent (`steepest_descent`)[](#steepest-descent-steepest-descent)
Simple method where search direction \(p_k\) is set to be \(-\nabla f_k\), i.e. the direction along which \(f\) decreases most rapidly.
**Advantages**:

* Low storage requirements
* Easy to compute

**Disadvantages**:

* Slow convergence for nonlinear problems
[[Nocedal]](index.html#nocedal)
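An illustrative fixed-step sketch of steepest descent on a convex quadratic (a practical implementation would choose \(\alpha_k\) by line search):

```python
import numpy as np

# Steepest descent with a fixed step on the quadratic f(x) = 0.5 x^T A x,
# whose gradient is A x and whose minimizer is the origin.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
grad = lambda x: A @ x

x = np.array([1.0, 1.0])
for _ in range(200):
    x = x - 0.1 * grad(x)   # p_k = -grad f_k, fixed step length
print(x)  # approaches the minimizer at the origin
```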
####### Conjugate Gradient (`conjugate_gradient`)[](#conjugate-gradient-conjugate-gradient)
Conjugate Gradient methods have a faster convergence rate than Steepest Descent but avoid the high computational cost of methods where the inverse Hessian is calculated.
Given an iterate \(x_0\), evaluate \(f_0 = f(x_0), \nabla f_0 = \nabla f(x_0)\).
Set \(p_0 \leftarrow - \nabla f_0, k \leftarrow 0\)
Then while \(\nabla f_k \neq 0\):
Carry out a line search to compute the next iterate, then evaluate \(\nabla f_{k+1}\) and use this to determine the subsequent conjugate direction \(p_{k+1} = - \nabla f(x_{k+1}) + \beta_k p_k\)
Different variations of the Conjugate Gradient algorithm use different formulas for \(\beta_k\), for example:
Fletcher-Reeves: \(\beta_{k+1} = \frac{\nabla f_{k+1}^T \nabla f_{k+1}}{\nabla f_k^T \nabla f_k}\)
Polak-Ribiere: \(\beta_{k+1} = \frac{ \nabla f_{k+1}^T ( \nabla f_{k+1} - \nabla f_k)}{\|\nabla f_k\|^2}\)
[[Nocedal]](index.html#nocedal) [[Poczos]](index.html#poczos)
**Advantages**:

* Considered to be one of the best general purpose methods.
* Faster convergence rate compared to Steepest Descent, and only requires evaluation of the objective function and its gradient - no matrix operations.

**Disadvantages**:

* For the Fletcher-Reeves method it can be shown that if the method generates a bad direction and step, then the next direction and step are also likely to be bad. However, this is not the case with the Polak-Ribiere method.
* Generally, the Polak-Ribiere method is more efficient than the Fletcher-Reeves method, but it has the disadvantage of requiring one more vector of storage.
[[Nocedal]](index.html#nocedal)
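The Fletcher-Reeves variant can be sketched on a quadratic objective, where the exact step length along \(p_k\) is available in closed form:

```python
import numpy as np

# Fletcher-Reeves conjugate gradient on the quadratic f(x) = 0.5 x^T A x - b^T x,
# whose gradient is A x - b (illustrative sketch; exact line search for a quadratic).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
g = A @ x - b                 # gradient at x_0
p = -g                        # initial direction: steepest descent
for _ in range(2):            # CG minimizes an n-dim quadratic in at most n steps
    alpha = (g @ g) / (p @ A @ p)        # exact minimizer along p
    x = x + alpha * p
    g_new = A @ x - b
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves beta
    p = -g_new + beta * p
    g = g_new
print(x)  # solves A x = b
```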
####### BFGS (`bfgs`)[](#bfgs-bfgs)
Most popular quasi-Newton method, which uses an approximate Hessian rather than the true Hessian which is used in a Newton line search method.
Starting with an initial Hessian approximation \(H_0\) and starting point \(x_0\):
While \(\| \nabla f_k \| > \epsilon\):
Compute the search direction \(p_k = -H_k \nabla f_k\)
Then find next iterate \(x_{k+1}\) by performing a line search.
Next, define \(s_k = x_{k+1}-x_k\) and \(y_k = \nabla f_{k+1} - \nabla f_k\), then compute
\[H_{k+1} = (I - \rho_k s_k y_k^T)H_k(I - \rho_k y_k s_k^T) + \rho_k s_k s_k^T\]
with \(\rho_k = \frac{1}{y_k^T s_k}\)
**Advantages**:

* Superlinear rate of convergence
* Has self-correcting properties - if there is a bad estimate for \(H_k\), then it will tend to correct itself within a few iterations.
* No need to compute the Jacobian or Hessian.

**Disadvantages**:

* Newton’s method has quadratic convergence but this is lost with BFGS.
[[Nocedal]](index.html#nocedal)
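The inverse-Hessian update formula above can be sketched directly; by construction it enforces the secant condition \(H_{k+1} y_k = s_k\). The vectors here are illustrative:

```python
import numpy as np

# One BFGS inverse-Hessian update H_{k+1} from (s_k, y_k), as in the formula above.
def bfgs_update(H, s, y):
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

H = np.eye(2)               # initial inverse-Hessian approximation H_0
s = np.array([1.0, 0.5])    # s_k = x_{k+1} - x_k
y = np.array([2.0, 2.0])    # y_k = grad f_{k+1} - grad f_k
H_new = bfgs_update(H, s, y)
print(H_new @ y)  # equals s: the update enforces the secant condition
```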
####### Gauss Newton (`gauss_newton`)[](#gauss-newton-gauss-newton)
Modified Newton’s method with line search. Instead of solving standard Newton equations
\[\nabla^2 f(x_k)p = -\nabla f(x_k),\]
solve the system
\[J_k^T J_k p_k^{GN} = - J_k^T r_k\]
(where \(J_k\) is the Jacobian) to obtain the search direction \(p_k^{GN}\). The next iterate is then set as \(x_{k+1} = x_k + p_k^{GN}\).
Here, the approximation of the Hessian \(\nabla^2 f_k \approx J_k^T J_k\) has been made, which helps to save on computation time as second derivatives are not calculated.
**Advantages**:

* Calculation of second derivatives is not required.
* If residuals or their second order partial derivatives are small, then \(J_k^T J_k\) is a close approximation to \(\nabla^2 f_k\) and convergence of Gauss-Newton is fast.
* The search direction \(p_k^{GN}\) is always a descent direction as long as \(J_k\) has full rank and the gradient \(\nabla f_k\) is nonzero.

**Disadvantages**:

* Without a good initial guess, or if the matrix \(J_k^T J_k\) is ill-conditioned, the Gauss-Newton algorithm is very slow to converge to a solution.
* If relative residuals are large, then large amounts of information will be lost.
* \(J_k\) must be full rank.
[[Nocedal]](index.html#nocedal) [[Floater]](index.html#floater)
Nocedal([1](#id1),[2](#id2),[3](#id3),[4](#id5),[5](#id6),[6](#id7))
<NAME>, <NAME> (2006), Numerical Optimization
[Poczos](#id4)
<NAME>, <NAME> (2012), Lecture 10: Optimization, School of Computer Science, Carnegie Mellon University
[Floater](#id8)
<NAME> (2018), Lecture 13: Non-linear least squares and the Gauss-Newton method, University of Oslo
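The Gauss-Newton iteration can be sketched on an illustrative one-parameter exponential model (the data is invented and noise-free, so the residuals vanish at the solution):

```python
import numpy as np

# Gauss-Newton for fitting y ≈ exp(beta * x) to data generated with beta = 0.5.
x_data = np.array([0.0, 1.0, 2.0, 3.0])
y_data = np.exp(0.5 * x_data)

beta = np.array([0.1])                          # starting guess
for _ in range(20):
    r = np.exp(beta * x_data) - y_data          # residuals r_k
    J = (x_data * np.exp(beta * x_data))[:, None]   # Jacobian of r w.r.t. beta
    p = np.linalg.solve(J.T @ J, -J.T @ r)      # solve J^T J p = -J^T r
    beta = beta + p
print(beta)  # converges to 0.5 on this zero-residual problem
```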
###### Trust Region[](#trust-region)
Trust region approach involves constructing a model function \(m_k\) that approximates the function \(f\) in the region, \(\Delta\), near the current point \(x_k\).
The model \(m_k\) is often a quadratic obtained by a Taylor Series expansion of the function around \(x_k\).
\[m_k(p) = f_k + \nabla f(x_k)^T p + \frac{1}{2} p^T B_k p\]
where \(B_k\) is an approximation of the Hessian.
The subproblem to be solved at each iteration in order to find the step length is \(\min_p m_k(p)\), subject to \(\|p\| \leq \Delta_k\). [[Nocedal]](index.html#nocedal)
To select all minimizers in fitbenchmarking that use a trust region approach, use the algorithm type `trust_region`.
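A cheap approximate solution of the subproblem is the Cauchy point: the minimizer of \(m_k\) along \(-\nabla f_k\) within the trust region. A sketch, with illustrative inputs:

```python
import numpy as np

# Cauchy point for the subproblem min_p m_k(p) s.t. ||p|| <= delta,
# with m_k(p) = f_k + g^T p + 0.5 p^T B p (illustrative sketch).
def cauchy_point(g, B, delta):
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg))
    return -tau * (delta / np.linalg.norm(g)) * g

g = np.array([1.0, 0.0])    # gradient at x_k
B = np.eye(2)               # Hessian approximation
p = cauchy_point(g, B, delta=0.5)
print(p)  # a step of length at most delta along the steepest-descent direction
```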
####### Levenberg-Marquardt (`levenberg-Marquardt`)[](#levenberg-marquardt-levenberg-marquardt)
Most widely used optimization algorithm, which uses the same Hessian approximation as Gauss-Newton but uses a trust region strategy instead of line search. As the Hessian approximation is the same as Gauss-Newton, convergence rate is similar.
For Levenberg-Marquardt, the model function \(m_k\), is chosen to be
\[m_k(p) = \frac{1}{2} \|r_k\|^2 + p^T J_k^T r_k + \frac{1}{2} p^T J_k^T J_k p\]
So, for a spherical trust region, the subproblem to be solved at each iteration is \(\min_p \frac{1}{2} \|J_k p + r_k\|^2\), subject to \(\|p\| \leq \Delta_k\).
Levenberg-Marquardt uses a combination of gradient descent and Gauss-Newton method. When the solution \(p^{GN}\) lies inside of the trust region \(\Delta\), then \(p^{GN}\) also solves the sub-problem. Otherwise, the current iteration is far from the optimal value and so the search direction is determined using steepest descent, which performs better than Gauss-Newton when far from the minimum.
**Advantages**:
* Robust (more so than Gauss-Newton).
* Avoids the Gauss-Newton weakness that the Jacobian must be full rank.
* Fast to converge.
* A good initial guess is not required.
**Disadvantages**:
* Like Gauss-Newton, not well suited to large-residual problems.
* Can be slow to converge if a problem has many parameters.
[[Nocedal]](index.html#nocedal) [[Ranganathan]](index.html#ranganathan)
Nocedal([1](#id2),[2](#id3))
<NAME>, <NAME> (2006), Numerical Optimization
[Ranganathan](#id4)
<NAME> (2004), The Levenberg-Marquardt Algorithm, University of California, Santa Barbara
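As an illustration of the idea (a minimal sketch, not FitBenchmarking's or any listed package's implementation), a damped Gauss-Newton loop in the Levenberg-Marquardt style can be written as follows; the toy exponential model and data are invented for the example:

```python
import numpy as np

def levenberg_marquardt(r, J, p0, lam=1e-3, tol=1e-10, max_iter=100):
    """Minimal Levenberg-Marquardt sketch (damped Gauss-Newton).

    r(p) returns the residual vector and J(p) its Jacobian.
    The damping parameter lam plays the role of the trust-region
    radius: it grows when a step is rejected and shrinks when a
    step reduces the cost.
    """
    p = np.asarray(p0, dtype=float)
    cost = 0.5 * np.sum(r(p) ** 2)
    for _ in range(max_iter):
        Jk, rk = J(p), r(p)
        # Solve the damped normal equations (J^T J + lam I) step = -J^T r.
        A = Jk.T @ Jk + lam * np.eye(p.size)
        step = np.linalg.solve(A, -Jk.T @ rk)
        new_cost = 0.5 * np.sum(r(p + step) ** 2)
        if new_cost < cost:            # step accepted: relax the damping
            p, cost, lam = p + step, new_cost, lam / 3
        else:                          # step rejected: increase the damping
            lam *= 3
        if np.linalg.norm(step) < tol:
            break
    return p

# Toy problem: fit y = a * exp(b * x) to noise-free data with a=2, b=1.5.
x = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * x)
residual = lambda p: p[0] * np.exp(p[1] * x) - y
jacobian = lambda p: np.column_stack(
    [np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)]
)
p_fit = levenberg_marquardt(residual, jacobian, [1.0, 1.0])
```

The accept/reject rule is the trust-region flavour of the method: a failed step does not move the iterate, it only shrinks the region in which the quadratic model is trusted.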
###### Algorithm Types of Available Minimizers[](#algorithm-types-of-available-minimizers)
Least Squares (`ls`):
```
{'bumps': ['lm-bumps', 'scipy-leastsq'],
'ceres': ['Levenberg_Marquardt',
'Dogleg',
'BFGS',
'LBFGS',
'steepest_descent',
'Fletcher_Reeves',
'Polak_Ribiere',
'Hestenes_Stiefel'],
'dfo': ['dfogn', 'dfols'],
'gofit': ['alternating', 'multistart', 'regularisation'],
'gradient_free': [],
'gsl': ['lmsder', 'lmder'],
'horace': ['lm-lsqr'],
'levmar': ['levmar'],
'lmfit': ['least_squares', 'leastsq'],
'mantid': ['Levenberg-Marquardt',
'Levenberg-MarquardtMD',
'Trust Region',
'FABADA'],
'matlab': [],
'matlab_curve': ['Trust-Region', 'Levenberg-Marquardt'],
'matlab_opt': ['levenberg-marquardt', 'trust-region-reflective'],
'matlab_stats': ['Levenberg-Marquardt'],
'minuit': [],
'nlopt': [],
'ralfit': ['gn',
'hybrid',
'newton',
'newton-tensor',
'gn_reg',
'hybrid_reg',
'newton_reg',
'newton-tensor_reg'],
'scipy': [None],
'scipy_go': [None],
'scipy_ls': ['lm-scipy', 'trf', 'dogbox'],
'theseus': ['Levenberg_Marquardt', 'Gauss-Newton']}
```
Deriv-Free (`deriv_free`):
```
{'bumps': ['amoeba', 'de', 'dream'],
'ceres': [],
'dfo': ['dfogn', 'dfols'],
'gofit': [],
'gradient_free': ['HillClimbingOptimizer',
'RepulsingHillClimbingOptimizer',
'SimulatedAnnealingOptimizer',
'RandomSearchOptimizer',
'RandomRestartHillClimbingOptimizer',
'RandomAnnealingOptimizer',
'ParallelTemperingOptimizer',
'ParticleSwarmOptimizer',
'EvolutionStrategyOptimizer',
'BayesianOptimizer',
'TreeStructuredParzenEstimators',
'DecisionTreeOptimizer'],
'gsl': ['nmsimplex', 'nmsimplex2'],
'horace': ['lm-lsqr'],
'levmar': [],
'lmfit': ['powell', 'cobyla', 'emcee', 'nelder', 'differential_evolution'],
'mantid': ['Simplex', 'FABADA'],
'matlab': ['Nelder-Mead Simplex'],
'matlab_curve': [],
'matlab_opt': [],
'matlab_stats': [],
'minuit': ['simplex'],
'nlopt': ['LN_BOBYQA', 'LN_NEWUOA', 'LN_NEWUOA_BOUND', 'LN_PRAXIS'],
'ralfit': [],
'scipy': ['Nelder-Mead', 'Powell', 'COBYLA'],
'scipy_go': ['differential_evolution'],
'scipy_ls': [None],
'theseus': []}
```
General (`general`):
```
{'bumps': ['amoeba', 'newton', 'de', 'dream'],
'ceres': [],
'dfo': [],
'gofit': [],
'gradient_free': ['HillClimbingOptimizer',
'RepulsingHillClimbingOptimizer',
'SimulatedAnnealingOptimizer',
'RandomSearchOptimizer',
'RandomRestartHillClimbingOptimizer',
'RandomAnnealingOptimizer',
'ParallelTemperingOptimizer',
'ParticleSwarmOptimizer',
'EvolutionStrategyOptimizer',
'BayesianOptimizer',
'TreeStructuredParzenEstimators',
'DecisionTreeOptimizer'],
'gsl': ['nmsimplex',
'nmsimplex2',
'conjugate_pr',
'conjugate_fr',
'vector_bfgs',
'vector_bfgs2',
'steepest_descent'],
'horace': [],
'levmar': [],
'lmfit': ['nelder',
'powell',
'cg',
'bfgs',
'newton',
'lbfgs',
'tnc',
'slsqp',
'differential_evolution',
'shgo',
'dual_annealing'],
'mantid': ['BFGS',
'Conjugate gradient (Fletcher-Reeves imp.)',
'Conjugate gradient (Polak-Ribiere imp.)',
'Damped GaussNewton',
'Simplex',
'SteepestDescent'],
'matlab': ['Nelder-Mead Simplex'],
'matlab_curve': [],
'matlab_opt': [],
'matlab_stats': [],
'minuit': ['migrad'],
'nlopt': ['LD_SLSQP', 'LD_VAR2', 'LD_VAR1', 'AUGLAG', 'AUGLAG_EQ'],
'ralfit': [],
'scipy': ['Nelder-Mead',
'Powell',
'CG',
'BFGS',
'Newton-CG',
'L-BFGS-B',
'TNC',
'SLSQP'],
'scipy_go': ['differential_evolution', 'shgo', 'dual_annealing'],
'scipy_ls': [None],
'theseus': []}
```
Simplex (`simplex`):
```
{'bumps': ['amoeba'],
'ceres': [],
'dfo': [],
'gofit': [],
'gradient_free': [],
'gsl': ['nmsimplex', 'nmsimplex2'],
'horace': [],
'levmar': [],
'lmfit': ['nelder'],
'mantid': ['Simplex'],
'matlab': ['Nelder-Mead Simplex'],
'matlab_curve': [],
'matlab_opt': [],
'matlab_stats': [],
'minuit': ['simplex'],
'nlopt': ['LN_NELDERMEAD', 'LN_SBPLX'],
'ralfit': [],
'scipy': ['Nelder-Mead'],
'scipy_go': [],
'scipy_ls': [],
'theseus': []}
```
Trust Region (`trust_region`):
```
{'bumps': ['lm-bumps', 'scipy-leastsq'],
'ceres': ['Levenberg_Marquardt', 'Dogleg'],
'dfo': ['dfols', 'dfogn'],
'gofit': [],
'gradient_free': [],
'gsl': ['lmder', 'lmsder'],
'horace': [],
'levmar': ['levmar'],
'lmfit': ['least_squares',
'trust-ncg',
'trust-exact',
'trust-krylov',
'trust-constr',
'dogleg'],
'mantid': ['Trust Region', 'Levenberg-Marquardt', 'Levenberg-MarquardtMD'],
'matlab': [],
'matlab_curve': ['Trust-Region', 'Levenberg-Marquardt'],
'matlab_opt': ['levenberg-marquardt', 'trust-region-reflective'],
'matlab_stats': ['Levenberg-Marquardt'],
'minuit': [],
'nlopt': ['LN_COBYLA', 'LD_CCSAQ', 'LD_MMA'],
'ralfit': ['gn', 'hybrid', 'newton', 'newton-tensor'],
'scipy': ['trust-ncg',
'trust-exact',
'trust-krylov',
'trust-constr',
'dogleg'],
'scipy_go': [],
'scipy_ls': ['lm-scipy', 'trf', 'dogbox'],
'theseus': []}
```
Levenberg-Marquardt (`levenberg-marquardt`):
```
{'bumps': ['lm-bumps', 'scipy-leastsq'],
'ceres': [],
'dfo': [],
'gofit': ['regularisation'],
'gradient_free': [],
'gsl': ['lmder', 'lmsder'],
'horace': ['lm-lsqr'],
'levmar': ['levmar'],
'lmfit': ['leastsq'],
'mantid': ['Levenberg-Marquardt', 'Levenberg-MarquardtMD'],
'matlab': [],
'matlab_curve': ['Levenberg-Marquardt'],
'matlab_opt': ['levenberg-marquardt'],
'matlab_stats': ['Levenberg-Marquardt'],
'minuit': [],
'nlopt': [],
'ralfit': ['gn', 'gn_reg'],
'scipy': [],
'scipy_go': [],
'scipy_ls': ['lm-scipy'],
'theseus': ['Levenberg_Marquardt']}
```
Gauss Newton (`gauss_newton`):
```
{'bumps': [],
'ceres': [],
'dfo': ['dfogn'],
'gofit': ['regularisation'],
'gradient_free': [],
'gsl': [],
'horace': [],
'levmar': [],
'lmfit': ['newton', 'tnc'],
'mantid': ['Damped GaussNewton'],
'matlab': [],
'matlab_curve': [],
'matlab_opt': [],
'matlab_stats': [],
'minuit': [],
'nlopt': ['LD_TNEWTON_PRECOND_RESTART',
'LD_TNEWTON_PRECOND',
'LD_TNEWTON_RESTART',
'LD_TNEWTON'],
'ralfit': ['gn', 'gn_reg'],
'scipy': [],
'scipy_go': [],
'scipy_ls': [],
'theseus': ['Gauss-Newton']}
```
BFGS (`bfgs`):
```
{'bumps': ['newton'],
'ceres': ['BFGS', 'LBFGS'],
'dfo': [],
'gofit': [],
'gradient_free': [],
'gsl': ['vector_bfgs', 'vector_bfgs2'],
'horace': [],
'levmar': [],
'lmfit': ['lbfgsb', 'bfgs'],
'mantid': ['BFGS'],
'matlab': [],
'matlab_curve': [],
'matlab_opt': [],
'matlab_stats': [],
'minuit': [],
'nlopt': ['LD_LBFGS'],
'ralfit': [],
'scipy': ['BFGS', 'L-BFGS-B'],
'scipy_go': [],
'scipy_ls': [],
'theseus': []}
```
Conjugate Gradient (`conjugate_gradient`):
```
{'bumps': [],
'ceres': ['Fletcher_Reeves', 'Polak_Ribiere', 'Hestenes_Stiefel'],
'dfo': [],
'gofit': [],
'gradient_free': [],
'gsl': ['conjugate_fr', 'conjugate_pr'],
'horace': [],
'levmar': [],
'lmfit': ['cg', 'newton', 'powell'],
'mantid': ['Conjugate gradient (Fletcher-Reeves imp.)',
'Conjugate gradient (Polak-Ribiere imp.)'],
'matlab': [],
'matlab_curve': [],
'matlab_opt': [],
'matlab_stats': [],
'minuit': [],
'nlopt': ['LN_COBYLA'],
'ralfit': [],
'scipy': ['CG', 'Newton-CG', 'Powell'],
'scipy_go': [],
'scipy_ls': [],
'theseus': []}
```
Steepest Descent (`steepest_descent`):
```
{'bumps': [],
'ceres': ['steepest_descent'],
'dfo': [],
'gofit': [],
'gradient_free': [],
'gsl': ['steepest_descent'],
'horace': [],
'levmar': [],
'lmfit': [],
'mantid': ['SteepestDescent'],
'matlab': [],
'matlab_curve': [],
'matlab_opt': [],
'matlab_stats': [],
'minuit': [],
'nlopt': [],
'ralfit': [],
'scipy': [],
'scipy_go': [],
'scipy_ls': [],
'theseus': []}
```
Global Optimization (`global_optimization`):
```
{'bumps': ['de', 'dream'],
'ceres': [],
'dfo': [],
'gofit': ['alternating', 'multistart'],
'gradient_free': ['HillClimbingOptimizer',
'RepulsingHillClimbingOptimizer',
'SimulatedAnnealingOptimizer',
'RandomSearchOptimizer',
'RandomRestartHillClimbingOptimizer',
'RandomAnnealingOptimizer',
'ParallelTemperingOptimizer',
'ParticleSwarmOptimizer',
'EvolutionStrategyOptimizer',
'BayesianOptimizer',
'TreeStructuredParzenEstimators',
'DecisionTreeOptimizer'],
'gsl': [],
'horace': [],
'levmar': [],
'lmfit': ['differential_evolution', 'ampgo', 'shgo', 'dual_annealing'],
'mantid': ['FABADA'],
'matlab': [],
'matlab_curve': [],
'matlab_opt': [],
'matlab_stats': [],
'minuit': [],
'nlopt': ['GN_DIRECT',
'GN_DIRECT_L',
'GN_DIRECT_L_RAND',
'GNL_DIRECT_NOSCAL',
'GN_DIRECT_L_NOSCAL',
'GN_DIRECT_L_RAND_NOSCAL',
'GN_ORIG_DIRECT',
'GN_ORIG_DIRECT_L',
'GN_CRS2_LM',
'G_MLSL_LDS',
'G_MLSL',
'GD_STOGO',
'GD_STOGO_RAND',
'GN_AGS',
'GN_ISRES'],
'ralfit': [],
'scipy': [],
'scipy_go': ['differential_evolution', 'shgo', 'dual_annealing'],
'scipy_ls': [],
'theseus': []}
```
##### Cost functions[](#cost-functions)
FitBenchmarking supports multiple cost functions. These can be set via the `cost_func_type` option in [Fitting Options](index.html#fitting-option).
FitBenchmarking is designed to work with problems that have the form
\[\min_p F(r(x,y,p)).\]
The function \(F(\cdot)\) is known as the cost function,
while the function \(r(x,y,p)\) is known as the *residual* of the cost function.
The residual is generally zero when the fit is perfect.
Both of these quantities together define a cost function in FitBenchmarking.
The cost functions that are currently supported are:
* Non-linear least squares cost function
> *class* fitbenchmarking.cost_func.nlls_cost_func.NLLSCostFunc(*problem*)
> This defines the non-linear least squares cost function where, given a set
> of \(n\) data points \((x_i,y_i)\), associated errors \(e_i\),
> and a model function \(f(x,p)\), we find the optimal parameters in the
> least-squares sense by solving:
> \[\min_p \sum_{i=1}^n \left(y_i - f(x_i, p)\right)^2\]
> where \(p\) is a vector of length \(m\), and we start from a
> given initial guess for the optimal parameters. More information on
> non-linear least squares cost functions can be found
> [here](https://en.wikipedia.org/wiki/Non-linear_least_squares).
>
* Weighted non-linear least squares cost function
> *class* fitbenchmarking.cost_func.weighted_nlls_cost_func.WeightedNLLSCostFunc(*problem*)
> This defines the weighted non-linear least squares cost function where,
> given a set of \(n\) data points \((x_i,y_i)\), associated errors
> \(e_i\), and a model function \(f(x,p)\), we find the optimal
> parameters in the least-squares sense by solving:
> \[\min_p \sum_{i=1}^n
> \left(\frac{y_i - f(x_i, p)}{e_i}\right)^2\]
> where \(p\) is a vector of length \(m\), and we start from a
> given initial guess for the optimal parameters. More information on
> non-linear least squares cost functions can be found
> [here](https://en.wikipedia.org/wiki/Non-linear_least_squares).
>
* Hellinger non-linear least squares cost function
> *class* fitbenchmarking.cost_func.hellinger_nlls_cost_func.HellingerNLLSCostFunc(*problem*)
> This defines the Hellinger non-linear least squares cost function where,
> given a set of \(n\) data points \((x_i,y_i)\), associated errors
> \(e_i\), and a model function \(f(x,p)\), we find the optimal
> parameters in the Hellinger least-squares sense by solving:
> \[\min_p \sum_{i=1}^n
> \left(\sqrt{y_i} - \sqrt{f(x_i, p)}\right)^2\]
> where \(p\) is a vector of length \(m\), and we start from a
> given initial guess for the optimal parameters. More information on
> non-linear least squares cost functions can be found
> [here](https://en.wikipedia.org/wiki/Non-linear_least_squares) and for
> the Hellinger distance measure see
> [here](https://en.wikipedia.org/wiki/Hellinger_distance).
>
* Poisson deviance cost function
> *class* fitbenchmarking.cost_func.poisson_cost_func.PoissonCostFunc(*problem*)
> This defines the Poisson deviance cost-function where,
> given the set of \(n\) data points \((x_i, y_i)\),
> and a model function \(f(x,p)\), we find the optimal parameters in the
> Poisson deviance sense by solving:
> \[\min_p \sum_{i=1}^n
> \left( y_i \left(\log{y_i} - \log{f(x_i, p)} \right)
> - \left( y_i - f(x_i, p) \right) \right)\]
> where \(p\) is a vector of length \(m\), and we start from a given
> initial guess for the optimal parameters.
> This cost function is intended for positive \(y\) values.
> It is not a least-squares problem, and as such will not work
> with least-squares minimizers. Please use `algorithm_type` to select
> general solvers.
> See options docs ([Fitting Options](index.html#fitting-option)) for information on how to do this.
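The four cost functions above can be evaluated directly on toy data. In this sketch the model \(f\) and all numbers are invented for illustration; each line mirrors the corresponding formula:

```python
import numpy as np

# Toy data: observations y at points x, with errors e, and a model f(x, p).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([4.0, 6.0, 9.0, 14.0])
e = np.sqrt(y)                          # Poisson-style errors
f = lambda x, p: p[0] + p[1] * x**2     # hypothetical model
p = np.array([4.0, 1.1])                # hypothetical parameters

r = y - f(x, p)
nlls = np.sum(r**2)                                   # NLLSCostFunc
weighted_nlls = np.sum((r / e)**2)                    # WeightedNLLSCostFunc
hellinger = np.sum((np.sqrt(y) - np.sqrt(f(x, p)))**2)  # HellingerNLLSCostFunc
poisson_dev = np.sum(                                 # PoissonCostFunc
    y * (np.log(y) - np.log(f(x, p))) - (y - f(x, p))
)
```

Note the Hellinger and Poisson variants require positive data and model values because of the square roots and logarithms.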
##### FitBenchmarking Output[](#fitbenchmarking-output)
FitBenchmarking produces tables and reports called support pages as outputs.
The links below give descriptions of these outputs.
###### Tables[](#tables)
####### Comparison Table[](#comparison-table)
*class* fitbenchmarking.results_processing.compare_table.CompareTable(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)
The combined results show the accuracy on the first line of each cell and the runtime on the second.
####### Accuracy Table[](#accuracy-table)
*class* fitbenchmarking.results_processing.acc_table.AccTable(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)
The accuracy results are calculated by evaluating the cost function with the fitted parameters.
####### Runtime Table[](#runtime-table)
*class* fitbenchmarking.results_processing.runtime_table.RuntimeTable(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)
The timing results are calculated as an average over `num_runs` runs using the
[timeit](https://docs.python.org/3/library/timeit.html) module in Python. `num_runs` is set in [FitBenchmarking Options](index.html#options).
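A rough sketch of this averaging, with a hypothetical stand-in workload in place of the real minimizer call:

```python
import timeit

def run_fit():
    # Hypothetical stand-in for a single call to a minimizer.
    return sum(i * i for i in range(1000))

num_runs = 5  # mirrors the num_runs option
avg_runtime = timeit.timeit(run_fit, number=num_runs) / num_runs
```

`timeit.timeit(..., number=n)` returns the total time for `n` calls, so dividing by `num_runs` gives the per-run average.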
####### Local Minimizer Table[](#local-minimizer-table)
*class* fitbenchmarking.results_processing.local_min_table.LocalMinTable(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)
The local min results show a `True` or `False` value together with
\(\frac{|| J^T r||}{||r||}\). The `True` or `False` indicates whether the software finds a minimum with respect to at least one of the following criteria:
* \(||r|| \leq\) RES_TOL,
* \(|| J^T r|| \leq\) GRAD_TOL,
* \(\frac{|| J^T r||}{||r||} \leq\) GRAD_TOL,
where \(J\) and \(r\) are the Jacobian and residual of
\(f(x, p)\), respectively. The tolerances can be found in the results object.
####### Emissions Table[](#emissions-table)
*class* fitbenchmarking.results_processing.emissions_table.EmissionsTable(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)
The emissions (kg CO2eq) results are calculated from an average (over num_runs) using the
[codecarbon](https://mlco2.github.io/codecarbon/index.html) module.
num_runs is set in [FitBenchmarking Options](index.html#options).
Configuration for codecarbon is set in `.codecarbon.config`.
Please note that for tracking CPU power usage on Windows or Mac,
`Intel Power Gadget` should also be installed. For more information,
see the Methodology section of the [codecarbon docs](https://mlco2.github.io/codecarbon/methodology.html#cpu).
####### Display modes[](#display-modes)
The tables for `accuracy`, `runtime`, `emissions`, and `compare` have three display modes:
```
{ 'abs': 'Absolute values are displayed in the table.',
'both': 'Absolute and relative values are displayed in the table '
'in the format ``abs (rel)``',
'rel': 'Relative values are displayed in the table.'}
```
This can be set in the option file using the
[Comparison Mode](index.html#comparisonoption) option.
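For example, in `both` mode a cell combines the absolute value with the value relative to the best minimizer in the row; the minimizer names and numbers below are invented for illustration:

```python
# Hypothetical accuracy results for three minimizers on one problem.
acc = {"lm": 1.2e-3, "trf": 4.8e-3, "dogbox": 2.4e-3}
best = min(acc.values())

# 'both' display mode renders each cell as ``abs (rel)``.
cells = {name: f"{value:.1e} ({value / best:.3g})"
         for name, value in acc.items()}
```

So the best minimizer in a row always shows a relative value of 1.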
The [Local Minimizer Table](index.html#local-min) is formatted differently, and doesn’t use this convention.
####### Performance profile[](#performance-profile)
Below the table there is a [Performance Profile](index.html#performance-profile).
###### Support Pages[](#support-pages)
In each of the tables, a fitting_report for an individual result can be accessed by clicking on the associated table cell. Clicking the problem name at the start of a row will open a [Problem Summary Page](index.html#problem-summary-page) for the problem as a whole.
####### Fitting Report[](#fitting-report)
The fitting report pages can be used to see more information about the problem and a given fit.
Each page represents a single fitting combination and is split into two sections.
######## Problem Outline[](#problem-outline)
First is the initial problem. Here you will see information about the function being fit and the set of initial parameters used for the fitting.
If plots are enabled (see [Make plots (make_plots)](index.html#makeplots)), you will also see a scatter plot of the data to fit with a line of the initial fit given to the minimizer.
######## Fitting Results[](#fitting-results)
The second section focusses on the results of the fitting. Here you will find the minimizer name and the final parameters for the fit found by the minimizer.
A plot of the fit is also shown with an overlaid best fit from whichever minimizer was found to produce the smallest error.
####### Problem Summary Page[](#problem-summary-page)
The problem summary page can be used to give an overview of the problem and solutions obtained.
######## Problem Outline[](#problem-outline)
First is the initial problem. Here you will see information about the function being fit and the set of initial parameters used for the fitting.
If plots are enabled (see [Make plots (make_plots)](index.html#makeplots)), you will also see a scatter plot of the data to fit with a line of the initial fit given to the minimizer.
######## Comparison[](#comparison)
The main plot on the page shows a comparison of all fits at once.
This can be used to compare how cost functions perform for a problem across all minimizers.
This uses colours to identify the cost function for each fit and shows all fits on a single graph. The best minimizer for each cost function is more pronounced on the plot.
This should not be used to identify the best individual fit, but can be a good indication of whether cost functions are biased to certain datapoints in the input.
######## Best Plots[](#best-plots)
The page ends with an expandable section for each cost function tested, which gives the parameter values and plot of the best fit obtained for that cost function.
##### Benchmark problems[](#benchmark-problems)
To help choose between the different minimizers, we have made some curated problems available to use with FitBenchmarking. It is also straightforward to add custom data sets to the benchmark, if that is more appropriate; see
[Problem Definition Files](index.html#problem-def) for specifics of how to add additional problems in a supported file format.
Downloads
**You can download a folder containing all examples here:**
[`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/examples.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/examples.tar.gz)
Individual problem sets are also available to download below.
We supply some standard nonlinear least-squares test problems in the form of the [NIST nonlinear regression set](https://www.itl.nist.gov/div898/strd/nls/nls_main.shtml)
and the relevant problems from the [CUTEst problem set](https://github.com/ralna/CUTEst/wiki),
together with some real-world data sets that have been extracted from [Mantid](https://www.mantidproject.org) and
[SASView](https://www.sasview.org) usage examples and system tests.
We’ve made it possible to extend this list by following the steps in
[Adding Fitting Problem Definition Types](index.html#parsers).
Each of the test problems contain:
* a data set consisting of points \((x_i, y_i)\) (with optional errors on \(y_i\), \(\sigma_i\));
* a definition of the fitting function, \(f({\boldsymbol{\beta}};x)\); and
* (at least) one set of initial values for the function parameters \({\boldsymbol{\beta}}_0\).
If a problem doesn’t have observational errors (e.g., the NIST problem set), then FitBenchmarking can approximate errors by taking \(\sigma_i = \sqrt{y_i}\).
Alternatively, there is an option to disregard errors and solve the unweighted nonlinear least-squares problem, setting
\(\sigma_i = 1.0\) irrespective of what has been passed in with the problem data.
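The two fallbacks for missing observational errors amount to the following (counts invented for illustration):

```python
import numpy as np

y = np.array([4.0, 9.0, 16.0])  # data with no supplied errors

sigma_poisson = np.sqrt(y)      # approximate errors as sqrt(y)
sigma_unit = np.ones_like(y)    # or disregard errors: sigma_i = 1.0
```

With unit errors the weighted least-squares cost reduces to the unweighted one.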
As we work with scientists in other areas, we will extend the problem suite to encompass new categories. The FitBenchmarking framework has been designed to make it easy to integrate new problem sets, and any additional data added to the framework can be tested with any and all of the available fitting methods.
Currently FitBenchmarking ships with data from the following sources:
###### CrystalField Data (Mantid)[](#crystalfield-data-mantid)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/CrystalField.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/CrystalField.tar.gz)
This folder (also found in examples/benchmark_problems/CrystalField) contains a test set for inelastic neutron scattering measurements of transitions between crystal field energy levels.
This problem has 8 parameters, and fits around 200 data points.
Warning
The external package Mantid must be installed to run this data set. See [Installing External Software](index.html#external-instructions) for details.
###### CUTEst (NIST files)[](#cutest-nist-files)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/CUTEst.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/CUTEst.tar.gz)
This folder (also found in examples/benchmark_problems/CUTEst) contains several problems from the [CUTEst](https://github.com/ralna/CUTEst)
continuous optimization testing environment which have been converted to the NIST format.
These problems all have 8 unknown parameters, and fit around 15 data points with the exception of `VESUVIOLS` which fits around 1000.
###### Data Assimilation[](#data-assimilation)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/Data_Assimilation.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/Data_Assimilation.tar.gz)
This folder (also found in examples/benchmark_problems/Data_Assimilation) contains two examples using the data assimilation problem definition in fitbenchmarking.
These examples follow the method set out in
[this paper](https://www.researchgate.net/publication/324956488_Data_assimilation_approach_to_analysing_systems_of_ordinary_differential_equations).
These data files are synthetic and have been generated as an initial test of the minimizers. We plan to extend this with time series data which is more representative of the expectations for data assimilation in future updates.
These problems have either 2 or 3 unknown parameters, and fit either 100 or 1000 data points for `Simplified ANAC` and `Lorentz` problems respectively.
###### Powder Diffraction Data (SIF files)[](#powder-diffraction-data-sif-files)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/DIAMOND_SIF.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/DIAMOND_SIF.tar.gz)
These problems (also found in the folder examples/benchmark_problems/DIAMOND_SIF)
contain data from powder diffraction experiments. The data supplied comes from the [I14 Hard X-Ray Nanoprobe](https://www.diamond.ac.uk/Instruments/Imaging-and-Microscopy/I14.html) beamline at the Diamond Light Source, and has been supplied in the SIF format used by [CUTEst](https://github.com/ralna/CUTEst).
These problems have either 66 or 99 unknown parameters, and fit around 5,000 data points.
Warning
The external packages CUTEst and pycutest must be installed to run this data set. See [Installing External Software](index.html#external-instructions) for details.
###### MultiFit Data (Mantid)[](#multifit-data-mantid)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/MultiFit.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/MultiFit.tar.gz)
These problems (also found in the folder examples/benchmark_problems/MultiFit)
contain data for testing the MultiFit functionality of Mantid. This contains a simple data set, on which two fits are done, and a calibration dataset from the [MuSR](https://www.isis.stfc.ac.uk/Pages/musr.aspx)
spectrometer at ISIS, on which there are four fits available.
See [The MultiFit documentation](index.html#multifit) for more details.
Basic Multifit has 3 unknown parameters, and fits 40 data points.
MUSR62260 has 18 unknown parameters, and fits around 8000 data points.
Warning
The external package Mantid must be installed to run this data set. See [Installing External Software](index.html#external-instructions) for details.
This will also only work using the Mantid Minimizers.
###### Muon Data (Mantid)[](#muon-data-mantid)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/Muon.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/Muon.tar.gz)
These problems (also found in the folder examples/benchmark_problems/Muon)
contain data from Muon spectrometers. The data supplied comes from the [HiFi](https://www.isis.stfc.ac.uk/Pages/hifi.aspx) and
[EMU](https://www.isis.stfc.ac.uk/Pages/EMU.aspx) instruments at STFC’s ISIS Neutron and Muon source, and has been supplied in the format that [Mantid](https://mantidproject.org/) uses to process the data.
These problems have between 5 and 13 unknown parameters, and fit around 1,000 data points.
Warning
The external package Mantid must be installed to run this data set. See [Installing External Software](index.html#external-instructions) for details.
###### Neutron Data (Mantid)[](#neutron-data-mantid)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/Neutron.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/Neutron.tar.gz)
These problems (also found in the folder examples/benchmark_problems/Neutron)
contain data from Neutron scattering experiments. The data supplied comes from the [Engin-X](https://www.isis.stfc.ac.uk/Pages/Engin-X.aspx),
[GEM](https://www.isis.stfc.ac.uk/Pages/gem.aspx),
[eVS](https://www.isis.stfc.ac.uk/Pages/Vesuvio.aspx), and
[WISH](https://www.isis.stfc.ac.uk/Pages/wish.aspx) instruments at STFC’s ISIS Neutron and Muon source, and has been supplied in the format that [Mantid](https://mantidproject.org/) uses to process the data.
The size of these problems differ massively.
The Engin-X calibration problems find 7 unknown parameters, and fit to 56-67 data points.
The Engin-X vanadium problems find 4 unknown parameters, and fit to around 14,168 data points.
The eVS problems find 8 unknown parameters, and fit to 1,025 data points.
The GEM problem finds 105 unknown parameters, and fits to 1,314 data points.
The WISH problems find 5 unknown parameters, and fit to 512 data points.
Warning
The external package Mantid must be installed to run this data set. See [Installing External Software](index.html#external-instructions) for details.
###### NIST[](#nist)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/NIST.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/NIST.tar.gz)
These problems (also found in the folder examples/benchmark_problems/NIST) contain data from the [NIST Nonlinear Regression](https://www.itl.nist.gov/div898/strd/nls/nls_main.shtml) test set.
These problems are split into low, average and high difficulty.
They have between 2 and 9 unknown parameters, and fit between 6 and 250 data points.
###### Poisson Data[](#poisson-data)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/Poisson.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/Poisson.tar.gz)
These problems (also found in the folder examples/benchmark_problems/Poisson) contain both simulated and real data measuring particle counts. The real data is ISIS muon data, and the simulated datasets have been made to represent counts using models provided by both Mantid and Bumps.
These problems have between 4 and 6 unknown parameters, and around 350, 800,
and 2000 data points for simulated bumps, HIFI_160973, and simulated mantid respectively.
Warning
The external package Mantid must be installed to run this data set. See [Installing External Software](index.html#external-instructions) for details.
###### Small Angle Scattering (SASView)[](#small-angle-scattering-sasview)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/SAS_modelling.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/SAS_modelling.tar.gz)
These problems (also found in the folder examples/benchmark_problems/SAS_modelling/1D) are two data sets from small angle scattering experiments.
These are from fitting data to a
[cylinder](https://www.sasview.org/docs/user/models/cylinder.html),
and have been supplied in the format that [SASView](https://www.sasview.org)
uses to process the data.
These have 6 unknown parameters, and fit to either 20 or 54 data points.
Warning
The external package `sasmodels` must be installed to run this data set. See [Installing External Software](index.html#external-instructions) for details.
###### CUTEst (SIF files)[](#cutest-sif-files)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/SIF.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/SIF.tar.gz)
This directory (also found in the folder examples/benchmark_problems/SIF) contains
[SIF files](https://github.com/ralna/SIFDecode)
encoding least squares problems from the [CUTEst](https://github.com/ralna/CUTEst)
continuous optimization testing environment.
These are from a wide range of applications. They have between 2 and 9 unknown parameters, and for the most part fit between 6 and 250 data points, although the VESUVIO examples (from the [VESUVIO](https://www.isis.stfc.ac.uk/Pages/Vesuvio.aspx)
instrument at ISIS) have 1,025 data points (with 8 unknown parameters).
Warning
The external packages CUTEst and pycutest must be installed to run this data set. See [Installing External Software](index.html#external-instructions) for details.
###### SIF_GO[](#sif-go)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/SIF_GO.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/SIF_GO.tar.gz)
This directory (also found in the folder examples/benchmark_problems/SIF_GO) contains
[SIF files](https://github.com/ralna/SIFDecode)
encoding least squares problems from the [CUTEst](https://github.com/ralna/CUTEst)
continuous optimization testing environment.
All of these problems have been modified, with finite bounds added for all parameters,
making the problems appropriate for testing global optimization solvers. The bounds that have been added to each problem are the same as those used in SciPy’s
[global optimization benchmark functions](https://github.com/scipy/scipy/tree/master/benchmarks/benchmarks/go_benchmark_functions).
These problems have between 3 and 7 unknown parameters, and fit between 9 and 37 data points.
Warning
The external packages CUTEst and pycutest must be installed to run this data set. See [Installing External Software](index.html#external-instructions) for details.
###### Simple tests[](#simple-tests)
**Download** [`.zip`](https://numerical.rl.ac.uk/fitbenchmarking/simple_tests.zip)
or [`.tar.gz`](https://numerical.rl.ac.uk/fitbenchmarking/simple_tests.tar.gz)
This folder (also found in examples/benchmark_problems/simple_tests) contains a number of simple tests with known, and easy to obtain,
answers. We recommend that this is used to test any new minimizers that are added, and also that any new parsers reimplement these data sets and models (if possible).
These problems have 3 or 4 unknown parameters, and around 100 data points.
##### FitBenchmarking Options[](#fitbenchmarking-options)
The default behaviour of FitBenchmarking can be changed by supplying an options file. The default values of these options,
and how to override them, are given in the pages below.
###### Fitting Options[](#fitting-options)
Options that control the benchmarking process are set here.
####### Software (`software`)[](#software-software)
Software is used to select the fitting software to benchmark; this should be a newline-separated list. Available options are:
* `bumps` (default software)
* `ceres` (external software – see [Installing External Software](index.html#external-instructions))
* `CUTEst` (external software – see [Installing External Software](index.html#external-instructions))
* `dfo` (external software – see [Extra dependencies](index.html#extra-dependencies))
* `gofit` (external software – see [Extra dependencies](index.html#extra-dependencies))
* `gradient_free` (external software – see [Extra dependencies](index.html#extra-dependencies))
* `gsl` (external software – see [Installing External Software](index.html#external-instructions))
* `horace` (external software – see [Installing External Software](index.html#external-instructions))
* `levmar` (external software – see [Extra dependencies](index.html#extra-dependencies))
* `lmfit` (external software – see [Extra dependencies](index.html#extra-dependencies))
* `mantid` (external software – see [Installing External Software](index.html#external-instructions))
* `matlab` (external software – see [Installing External Software](index.html#external-instructions))
* `matlab_curve` (external software – see [Installing External Software](index.html#external-instructions))
* `matlab_opt` (external software – see [Installing External Software](index.html#external-instructions))
* `matlab_stats` (external software – see [Installing External Software](index.html#external-instructions))
* `minuit` (external software – see [Extra dependencies](index.html#extra-dependencies))
* `nlopt` (external software – see [Extra dependencies](index.html#extra-dependencies))
* `ralfit` (external software – see [Installing External Software](index.html#external-instructions))
* `scipy` (default software)
* `scipy_ls` (default software)
* `scipy_go`
* `theseus` (external software – see [Installing External Software](index.html#external-instructions))
Default software options are `scipy` and `scipy_ls`
```
[FITTING]
software: bumps
dfo
minuit
scipy
scipy_ls
scipy_go
```
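As a minimal sketch of how such a newline-separated value can be read, Python's standard `configparser` module (which handles INI-style files of this shape) joins indented continuation lines with newlines; the actual FitBenchmarking parsing code may differ, and the file content below is purely illustrative.

```python
import configparser

# A hypothetical options file with a newline-separated software list;
# continuation lines must be indented for configparser to join them.
ini_text = """
[FITTING]
software: bumps
    scipy
    scipy_ls
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

# The raw value is "bumps\nscipy\nscipy_ls"; split and strip it.
software = [s.strip() for s in config.get("FITTING", "software").split("\n") if s.strip()]
print(software)  # ['bumps', 'scipy', 'scipy_ls']
```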
Warning
Software must be listed here to be run.
Any minimizers set in [Minimizer Options](index.html#minimizer-option) will not be run if the software is not also present in this list.
####### Number of minimizer runs (`num_runs`)[](#number-of-minimizer-runs-num-runs)
Sets the number of runs to average each fit over.
Default is `5`
```
[FITTING]
num_runs: 5
```
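The idea behind averaging over several runs can be sketched as follows; this is an illustrative timing loop under the stated assumption of a hypothetical `fit` callable, not FitBenchmarking's internal code.

```python
import time

def average_runtime(fit, num_runs=5):
    # Time the same fit num_runs times and return the mean runtime,
    # mirroring (in spirit) how num_runs smooths out timing noise.
    total = 0.0
    for _ in range(num_runs):
        start = time.perf_counter()
        fit()
        total += time.perf_counter() - start
    return total / num_runs

# Toy stand-in for a fit: sum a range of integers.
mean_t = average_runtime(lambda: sum(range(100000)), num_runs=5)
print(mean_t > 0.0)  # True
```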
####### Algorithm type (`algorithm_type`)[](#algorithm-type-algorithm-type)
This is used to select what type of algorithm is used within a specific software.
For a full list of available minimizers for each algorithm type, see [Algorithm Types of Available Minimizers](index.html#minimizer-types).
The options are:
* `all` - all minimizers
* `ls` - least-squares fitting algorithms
* `deriv_free` - derivative-free algorithms (these are algorithms that cannot use information about derivatives – e.g., the `Simplex` method in `Mantid`),
see [Derivative Free](index.html#deriv-free).
* `general` - minimizers which solve a generic min f(x)
* `simplex` - derivative free simplex based algorithms e.g. Nelder-Mead, see [Simplex](index.html#simplex)
* `trust_region` - algorithms which employ a trust region approach, see [Trust Region](index.html#trust-region)
* `levenberg-marquardt` - minimizers that use the Levenberg Marquardt algorithm, see [Levenberg-Marquardt](index.html#levenberg-marquardt).
* `gauss_newton` - minimizers that use the Gauss Newton algorithm, see [Gauss-Newton](index.html#gauss-newton)
* `bfgs` - minimizers that use the BFGS algorithm, see [BFGS](index.html#bfgs)
* `conjugate_gradient` - Conjugate Gradient algorithms, see [Conjugate Gradient](index.html#conjugate-gradient)
* `steepest_descent` - Steepest Descent algorithms, see [Steepest Descent](index.html#steepest-descent)
* `global_optimization` - Global Optimization algorithms
Default is `all`
```
[FITTING]
algorithm_type: all
```
Warning
Choosing an option other than `all` may deselect certain minimizers set in the options file
####### Jacobian method (`jac_method`)[](#jacobian-method-jac-method)
This sets the Jacobian used.
Choosing multiple options via a newline-separated list will result in all combinations being benchmarked.
Current Jacobian methods are:
* `analytic` - uses the analytic Jacobian extracted from the fitting problem.
* `scipy` - uses [SciPy’s finite difference Jacobian approximations](index.html#scipy-jac).
* `default` - uses the default derivative approximation implemented in the minimizer.
* `numdifftools` - uses the python package [numdifftools](index.html#numdifftools-jac).
Default is `default`
```
[FITTING]
jac_method: scipy
```
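For intuition, a finite-difference Jacobian of the kind selected by the `scipy` option can be sketched with a simple forward-difference ('2-point') scheme; the function and data below are hypothetical, and SciPy's own implementation differs in detail (adaptive steps, other schemes).

```python
import numpy as np

def fd_jacobian(residual, params, h=1e-6):
    # Forward-difference approximation to the Jacobian of a residual
    # vector: one column per parameter.
    params = np.asarray(params, dtype=float)
    r0 = residual(params)
    jac = np.empty((r0.size, params.size))
    for j in range(params.size):
        step = np.zeros_like(params)
        step[j] = h
        jac[:, j] = (residual(params + step) - r0) / h
    return jac

# Residuals of a straight-line model y = m*x + c against made-up data.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])
residual = lambda p: y - (p[0] * x + p[1])
print(fd_jacobian(residual, [2.0, 1.0]))  # columns approx. [-x, -1]
```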
Warning
Currently analytic Jacobians are only available for problems that use the cutest and NIST parsers.
####### Hessian method (`hes_method`)[](#hessian-method-hes-method)
This sets the Hessian used.
Choosing multiple options via a newline-separated list will result in all combinations being benchmarked.
Current Hessian methods are:
* `default` - Hessian information is not passed to minimizers
* `analytic` - uses the analytic Hessian extracted from the fitting problem.
* `scipy` - uses [SciPy’s finite difference approximations](index.html#scipy-hes).
* `numdifftools` - uses the python package [numdifftools](index.html#numdifftools-hes).
Default is `default`
```
[FITTING]
hes_method: default
```
Warning
Currently analytic Hessians are only available for problems that use the cutest and NIST parsers.
####### Cost function (`cost_func_type`)[](#cost-function-cost-func-type)
This sets the cost functions to be used for the given data.
Choosing multiple options via a newline-separated list will result in all combinations being benchmarked.
Currently supported cost functions are:
* `nlls` - This sets the cost function to be non-weighted non-linear least squares, [`NLLSCostFunc`](index.html#fitbenchmarking.cost_func.nlls_cost_func.NLLSCostFunc).
* `weighted_nlls` - This sets the cost function to be weighted non-linear least squares, [`WeightedNLLSCostFunc`](index.html#fitbenchmarking.cost_func.weighted_nlls_cost_func.WeightedNLLSCostFunc).
* `hellinger_nlls` - This sets the cost function to be the Hellinger cost function, [`HellingerNLLSCostFunc`](index.html#fitbenchmarking.cost_func.hellinger_nlls_cost_func.HellingerNLLSCostFunc).
* `poisson` - This sets the cost function to be the Poisson Deviation cost function, [`PoissonCostFunc`](index.html#fitbenchmarking.cost_func.poisson_cost_func.PoissonCostFunc).
Default is `weighted_nlls`
```
[FITTING]
cost_func_type: weighted_nlls
```
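The default `weighted_nlls` cost is the sum of squared, error-weighted residuals. A minimal sketch of that quantity, using a hypothetical straight-line model and made-up data (FitBenchmarking's own implementation lives in `WeightedNLLSCostFunc`):

```python
import numpy as np

def weighted_nlls(params, model, x, y, e):
    # Sum of squared residuals, each weighted by its error bar e.
    r = (y - model(x, *params)) / e
    return float(np.sum(r**2))

# Hypothetical model y = m*x + c and data it fits exactly.
model = lambda x, m, c: m * x + c
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])
e = np.array([0.5, 0.5, 0.5])
print(weighted_nlls([2.0, 1.0], model, x, y, e))  # exact fit -> 0.0
```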
####### Maximum Runtime (`max_runtime`)[](#maximum-runtime-max-runtime)
This sets the maximum runtime a minimizer has to solve one benchmark problem `num_runs` times, where `num_runs` is another option a user can set. If the minimizer is still running after the maximum time has elapsed, the result is skipped and FitBenchmarking moves on to the next minimizer / benchmark dataset combination. The main purpose of this option is to reach the result tables more quickly by limiting the runtime.
`max_runtime` is set by specifying a number in units of seconds. Please note that, depending on the platform, the time specified with `max_runtime` may not match exactly the absolute runtimes reported in the tables, so you may have to experiment with this option to get the cutoff you want.
Default is 600 seconds
```
[FITTING]
max_runtime: 600
```
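The skipping behaviour can be sketched as a simple budget check around the repeated runs; `fit`, the return values, and the structure here are all illustrative, not FitBenchmarking's internals.

```python
import time

def run_with_cutoff(fit, num_runs, max_runtime):
    # Repeat a fit num_runs times, but give up (returning None) once
    # the total elapsed time exceeds max_runtime seconds.
    start = time.perf_counter()
    results = []
    for _ in range(num_runs):
        if time.perf_counter() - start > max_runtime:
            return None  # over budget: this result is skipped
        results.append(fit())
    return results

print(run_with_cutoff(lambda: 1.0, num_runs=3, max_runtime=600))  # [1.0, 1.0, 1.0]
```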
###### Minimizer Options[](#minimizer-options)
This section is used to declare the minimizers to use for each fitting software. If a fitting software has been selected in [Fitting Options](index.html#fitting-option)
then a default set of minimizers for that solver will be run unless alternative minimizer options have been set. All minimizers for a software are included on the default list of minimizers unless otherwise stated.
Warning
Options set in this section will only have an effect if the related software is also set in [Fitting Options](index.html#fitting-option) (either explicitly, or as a default option).
####### Bumps (`bumps`)[](#bumps-bumps)
[Bumps](https://bumps.readthedocs.io) is a set of data fitting (and Bayesian uncertainty analysis) routines.
It came out of the University of Maryland and NIST as part of the DANSE
(*Distributed Data Analysis of Neutron Scattering Experiments*) project.
FitBenchmarking currently supports the Bumps minimizers:
* [Nelder-Mead Simplex](https://bumps.readthedocs.io/en/latest/guide/optimizer.html#nelder-mead-simplex) (`amoeba`)
* [Levenberg-Marquardt](https://bumps.readthedocs.io/en/latest/guide/optimizer.html#fit-lm) (`lm-bumps`) This is mpfit, a translation of MINPACK to Python.
* [Quasi-Newton BFGS](https://bumps.readthedocs.io/en/latest/guide/optimizer.html#quasi-newton-bfgs) (`newton`)
* [Differential Evolution](https://bumps.readthedocs.io/en/latest/guide/optimizer.html#differential-evolution) (`de`)
* [scipy’s leastsq](https://bumps.readthedocs.io/en/latest/guide/optimizer.html#fit-lm) (`scipy-leastsq`) This calls scipy’s Levenberg-Marquardt method. Note that this was the default method for lm prior to Bumps v0.8.2.
* [DiffeRential Evolution Adaptive Metropolis](https://bumps.readthedocs.io/en/latest/guide/optimizer.html#dream) (`dream`)
**Licence** The main licence file for Bumps is [here](https://github.com/bumps/bumps/blob/master/LICENSE.txt). Individual files have their own copyright and licence
– if you plan to incorporate this in your own software you should first check that the licences used are compatible.
**Links** [GitHub - bumps](https://github.com/bumps/bumps)
The Bumps minimizers are set as follows:
```
[MINIMIZERS]
bumps: amoeba
lm-bumps
newton
de
scipy-leastsq
dream
```
Warning
The additional dependency Bumps must be installed for this to be available;
See [Extra dependencies](index.html#extra-dependencies).
Note
de is not included in the default list of minimizers for bumps. To run this solver, you must explicitly set the minimizer as seen above.
####### Ceres Solver (`ceres`)[](#ceres-solver-ceres)
[Ceres Solver](http://ceres-solver.org/) is an open source C++ library for modeling and solving large, complicated optimization problems.
It can be used to solve non-linear least-squares problems with bound constraints and general unconstrained optimization problems.
FitBenchmarking currently supports the Ceres Solver minimizers:
* [Levenberg-Marquardt](http://ceres-solver.org/nnls_solving.html#levenberg-marquardt) (`Levenberg-Marquardt`)
* [Dogleg](http://ceres-solver.org/nnls_solving.html#dogleg) (`Dogleg`)
* [Steepest Descent](http://ceres-solver.org/nnls_solving.html#line-search-methods) (`steepest_descent`)
* [BFGS algorithm](http://ceres-solver.org/nnls_solving.html#line-search-methods) (`BFGS`)
* [LBFGS algorithm](http://ceres-solver.org/nnls_solving.html#line-search-methods) (`LBFGS`)
* [Fletcher-Reeves Non Linear Conjugate-Gradient](http://ceres-solver.org/nnls_solving.html#line-search-methods) (`Fletcher_Reeves`)
* [Polak-Ribiere Non Linear Conjugate-Gradient](http://ceres-solver.org/nnls_solving.html#line-search-methods) (`Polak_Ribiere`)
* [Hestenes-Stiefel Non Linear Conjugate-Gradient](http://ceres-solver.org/nnls_solving.html#line-search-methods) (`Hestenes_Stiefel`)
**Licence** Ceres Solver is available under the new BSD licence – details can be found [here](http://ceres-solver.org/license.html)
**Links** [Ceres Solver](http://ceres-solver.org/) [PyCeres - Ceres Python Bindings](https://github.com/Edwinem/ceres_python_bindings)
The Ceres Solver minimizers are set as follows:
```
[MINIMIZERS]
ceres: Levenberg_Marquardt
Dogleg
BFGS
LBFGS
steepest_descent
Fletcher_Reeves
Polak_Ribiere
Hestenes_Stiefel
```
Warning
The additional dependency Ceres Solver must be installed for this to be available;
See [Extra dependencies](index.html#extra-dependencies).
Note
PyCeres currently only works with Ceres Solver version 2.0.0
####### DFO (`dfo`)[](#dfo-dfo)
There are two Derivative-Free Optimization packages, [DFO-LS](http://people.maths.ox.ac.uk/robertsl/dfols/userguide.html) and
[DFO-GN](http://people.maths.ox.ac.uk/robertsl/dfogn/userguide.html).
They are derivative-free optimization solvers that were developed by <NAME> at the University of Oxford, in conjunction with NAG. They are particularly well suited to solving noisy problems.
FitBenchmarking currently supports the DFO minimizers:
* [Derivative-Free Optimizer for Least Squares](http://people.maths.ox.ac.uk/robertsl/dfols/userguide.html) (`dfols`)
* [Derivative-Free Gauss-Newton Solver](http://people.maths.ox.ac.uk/robertsl/dfogn/userguide.html) (`dfogn`)
**Licence** Both [DFO-GN](https://github.com/numericalalgorithmsgroup/dfogn/blob/master/LICENSE.txt) and [DFO-LS](https://github.com/numericalalgorithmsgroup/dfols/blob/master/LICENSE.txt) are available under the GPL-3 licence. A proprietary licence is also available from [NAG](https://www.nag.com/content/worldwide-contact-information) .
**Links** [GitHub - DFO-GN](https://github.com/numericalalgorithmsgroup/dfogn) [GitHub - DFO-LS](https://github.com/numericalalgorithmsgroup/dfols)
The DFO minimizers are set as follows:
```
[MINIMIZERS]
dfo: dfols
dfogn
```
Warning
Additional dependencies DFO-GN and DFO-LS must be installed for these to be available;
See [Extra dependencies](index.html#extra-dependencies).
####### GOFit (`gofit`)[](#gofit-gofit)
[GOFit](https://github.com/ralna/GOFit) is a package of C++ algorithms with Python interfaces designed for the global optimization of parameters in curve fitting, i.e. for nonlinear least-squares problems arising from curve fitting. It is also included with Mantid since release 6.5.
FitBenchmarking currently supports the GOFit minimizers:
* Multistart Global Minimizer (`multistart`)
* Alternating Multistart Global Minimizer (`alternating`)
* Quadratic Regularisation Local Minimizer (`regularisation`)
**Links** [Documentation](https://ralna.github.io/GOFit/)
**Licence** GOFit is available under a [3-clause BSD Licence](https://github.com/ralna/GOFit/blob/master/LICENSE)
The GOFit minimizers are set as follows:
```
[MINIMIZERS]
gofit: multistart
alternating
regularisation
```
Note
The alternating minimizer currently only supports Crystal Field problems.
Warning
The additional dependency GOFit must be installed to use these minimizers. See [Extra dependencies](index.html#extra-dependencies).
####### Gradient-Free-Optimizers (`gradient_free`)[](#gradient-free-optimizers-gradient-free)
[Gradient-Free-Optimizers](https://github.com/SimonBlanke/Gradient-Free-Optimizers) are a collection of gradient-free methods capable of solving various optimization problems. Please note that Gradient-Free-Optimizers must be run with problems that have finite bounds on all parameters.
* Hill Climbing (`HillClimbingOptimizer`)
* Repulsing Hill Climbing (`RepulsingHillClimbingOptimizer`)
* Simulated Annealing (`SimulatedAnnealingOptimizer`)
* Random Search (`RandomSearchOptimizer`)
* Random Restart Hill Climbing (`RandomRestartHillClimbingOptimizer`)
* Random Annealing (`RandomAnnealingOptimizer`)
* Parallel Tempering (`ParallelTemperingOptimizer`)
* Particle Swarm (`ParticleSwarmOptimizer`)
* Evolution Strategy (`EvolutionStrategyOptimizer`)
* Bayesian (`BayesianOptimizer`)
* Tree Structured Parzen Estimators (`TreeStructuredParzenEstimators`)
* Decision Tree (`DecisionTreeOptimizer`)
**Licence** The Gradient-Free-Optimizers package is available under an [MIT Licence](https://github.com/SimonBlanke/Gradient-Free-Optimizers/blob/master/LICENSE) .
The gradient_free minimizers are set as follows:
```
[MINIMIZERS]
gradient_free: HillClimbingOptimizer
RepulsingHillClimbingOptimizer
SimulatedAnnealingOptimizer
RandomSearchOptimizer
RandomRestartHillClimbingOptimizer
RandomAnnealingOptimizer
ParallelTemperingOptimizer
ParticleSwarmOptimizer
EvolutionStrategyOptimizer
BayesianOptimizer
TreeStructuredParzenEstimators
DecisionTreeOptimizer
```
Warning
The additional dependency Gradient-Free-Optimizers must be installed for this to be available;
See [Extra dependencies](index.html#extra-dependencies).
Note
BayesianOptimizer, TreeStructuredParzenEstimators and DecisionTreeOptimizer may be slow running and so are not run by default when gradient_free software is selected. To run these minimizers you must explicitly set them as seen above.
####### GSL (`gsl`)[](#gsl-gsl)
The [GNU Scientific Library](https://www.gnu.org/software/gsl/) is a numerical library that provides a wide range of mathematical routines. We call GSL using the [pyGSL Python interface](https://sourceforge.net/projects/pygsl/).
The GSL routines have a number of parameters that need to be chosen, often without default suggestions.
We have taken the values as used by Mantid.
We provide implementations for the following packages in the [multiminimize](https://www.gnu.org/software/gsl/doc/html/multimin.html) and [multifit](https://www.gnu.org/software/gsl/doc/html/nls.html) sections of the library:
* [Levenberg-Marquardt (unscaled)](http://pygsl.sourceforge.net/api/pygsl.html#pygsl.multifit__nlin.lmder) (`lmder`)
* [Levenberg-Marquardt (scaled)](http://pygsl.sourceforge.net/api/pygsl.html#pygsl.multifit_nlin.lmsder) (`lmsder`)
* [Nelder-Mead Simplex Algorithm](http://pygsl.sourceforge.net/api/pygsl.html#pygsl.multiminimize.nmsimplex) (`nmsimplex`)
* [Nelder-Mead Simplex Algorithm (version 2)](http://pygsl.sourceforge.net/api/pygsl.html#pygsl.multiminimize.nmsimplex2) (`nmsimplex2`)
* [Polak-Ribiere Conjugate Gradient Algorithm](http://pygsl.sourceforge.net/api/pygsl.html#pygsl.multiminimize.conjugate_pr) (`conjugate_pr`)
* [Fletcher-Reeves Conjugate-Gradient](http://pygsl.sourceforge.net/api/pygsl.html#pygsl.multiminimize.conjugate_fr) (`conjugate_fr`)
* [The vector quasi-Newton BFGS method](http://pygsl.sourceforge.net/api/pygsl.html#pygsl.multiminimize.vector_bfgs) (`vector_bfgs`)
* [The vector quasi-Newton BFGS method (version 2)](http://pygsl.sourceforge.net/api/pygsl.html#pygsl.multiminimize.vector_bfgs2) (`vector_bfgs2`)
* [Steepest Descent](http://pygsl.sourceforge.net/api/pygsl.html#pygsl.multiminimize.steepest_descent) (`steepest_descent`)
**Links** [SourceForge PyGSL](http://pygsl.sourceforge.net/)
**Licence** The GNU Scientific Library is available under the [GPL-3 licence](https://www.gnu.org/licenses/gpl-3.0.html) .
The GSL minimizers are set as follows:
```
[MINIMIZERS]
gsl: lmsder
lmder
nmsimplex
nmsimplex2
conjugate_pr
conjugate_fr
vector_bfgs
vector_bfgs2
steepest_descent
```
Warning
The external packages GSL and pygsl must be installed to use these minimizers.
####### Horace (`horace`)[](#horace-horace)
[Horace](https://pace-neutrons.github.io/Horace/) is described as *a suite of programs for the visualisation and analysis of data from time-of-flight neutron inelastic scattering spectrometers.* We currently support:
* Levenberg-Marquardt (`lm-lsqr`)
**Licence** Matlab, a [proprietary product](https://www.mathworks.com/pricing-licensing.html),
must be installed to use Horace within FitBenchmarking.
Horace itself is made available under the [GPL-3 licence](https://www.gnu.org/licenses/gpl-3.0.html).
```
[MINIMIZERS]
horace: lm-lsqr
```
Note
If you have a non-standard installation of Horace (e.g. on IDAaaS), please set the HORACE_LOCATION environment variable.
Warning
The Horace Toolbox and MATLAB must be installed for this to be available; see [Installing External Software](index.html#external-instructions).
####### Mantid (`mantid`)[](#mantid-mantid)
[Mantid](https://www.mantidproject.org) is a framework created to manipulate and analyze neutron scattering and muon spectroscopy data.
It has support for a number of minimizers, most of which are from GSL.
* [BFGS](https://docs.mantidproject.org/nightly/fitting/fitminimizers/BFGS.html) (`BFGS`)
* [Conjugate gradient (Fletcher-Reeves)](https://docs.mantidproject.org/nightly/fitting/fitminimizers/FletcherReeves.html) (`Conjugate gradient (Fletcher-Reeves imp.)`)
* [Conjugate gradient (Polak-Ribiere)](https://docs.mantidproject.org/nightly/fitting/fitminimizers/PolakRibiere.html) (`Conjugate gradient (Polak-Ribiere imp.)`)
* [Damped GaussNewton](https://docs.mantidproject.org/nightly/fitting/fitminimizers/DampedGaussNewton.html) (`Damped GaussNewton`)
* [FABADA](https://docs.mantidproject.org/nightly/concepts/FABADA.html) (`FABADA`)
* [Levenberg-Marquardt algorithm](https://docs.mantidproject.org/nightly/fitting/fitminimizers/LevenbergMarquardt.html) (`Levenberg-Marquardt`)
* [Levenberg-Marquardt MD](https://docs.mantidproject.org/nightly/fitting/fitminimizers/LevenbergMarquardtMD.html) (`Levenberg-MarquardtMD`) - An implementation of Levenberg-Marquardt intended for MD workspaces, where work is divided into chunks to achieve a greater efficiency for a large number of data points.
* [Simplex](https://docs.mantidproject.org/nightly/fitting/fitminimizers/Simplex.html) (`Simplex`)
* [SteepestDescent](https://docs.mantidproject.org/nightly/fitting/fitminimizers/GradientDescent.html) (`SteepestDescent`)
* [Trust Region](https://docs.mantidproject.org/nightly/fitting/fitminimizers/TrustRegion.html) (`Trust Region`) - An implementation of one of the algorithms available in RALFit.
> **Links** [GitHub - Mantid](https://github.com/mantidproject/mantid) [Mantid’s Fitting Docs](https://docs.mantidproject.org/nightly/algorithms/Fit-v1.html)
**Licence** Mantid is available under the [GPL-3 licence](https://github.com/mantidproject/mantid/blob/master/LICENSE.txt) .
The Mantid minimizers are set as follows:
```
[MINIMIZERS]
mantid: BFGS
Conjugate gradient (Fletcher-Reeves imp.)
Conjugate gradient (Polak-Ribiere imp.)
Damped GaussNewton
FABADA
Levenberg-Marquardt
Levenberg-MarquardtMD
Simplex
SteepestDescent
Trust Region
```
Warning
The external package Mantid must be installed to use these minimizers.
####### Levmar (`levmar`)[](#levmar-levmar)
The [levmar](http://users.ics.forth.gr/~lourakis/levmar/) package implements the Levenberg-Marquardt method for nonlinear least-squares problems.
We interface via the Python bindings [available on PyPI](https://pypi.org/project/levmar/).
* Levenberg-Marquardt with supplied Jacobian (`levmar`) - the Levenberg-Marquardt method
**Licence** Levmar is available under the [GPL-3 licence](http://www.gnu.org/copyleft/gpl.html) . A paid licence for proprietary commercial use is [available from the author](http://users.ics.forth.gr/~lourakis/levmar/faq.html#Q37) .
The levmar minimizer is set as follows:
```
[MINIMIZERS]
levmar: levmar
```
Warning
The additional dependency levmar must be installed for this to be available;
See [Extra dependencies](index.html#extra-dependencies). This package also requires the BLAS and LAPACK libraries to be present on the system.
####### LMFIT (`lmfit`)[](#lmfit-lmfit)
The [lmfit](https://lmfit.github.io/lmfit-py/index.html) package provides simple tools to help you build complex fitting models for non-linear least-squares problems and apply these models to real data. Lmfit provides a high-level interface to non-linear optimization and curve fitting problems for Python. It builds on and extends many of the optimization methods of
[scipy.optimize](https://docs.scipy.org/doc/scipy/reference/optimize.html).
* Levenberg-Marquardt (`leastsq`)
* Least-Squares minimization, using Trust Region Reflective method (`least_squares`)
* Differential evolution (`differential_evolution`)
* Adaptive Memory Programming for Global Optimization (`ampgo`)
* Nelder-Mead (`nelder`)
* L-BFGS-B (`lbfgsb`)
* Powell (`powell`)
* Conjugate-Gradient (`cg`)
* Newton-CG (`newton`)
* Cobyla (`cobyla`)
* BFGS (`bfgs`)
* Truncated Newton (`tnc`)
* Newton-CG trust-region (`trust-ncg`)
* Nearly exact trust-region (`trust-exact`)
* Newton GLTR trust-region (`trust-krylov`)
* Trust-region for constrained optimization (`trust-constr`)
* Dog-leg trust-region (`dogleg`)
* Sequential Linear Squares Programming (`slsqp`)
* [Maximum likelihood via Monte-Carlo Markov Chain](https://emcee.readthedocs.io/en/stable/) (`emcee`)
* Simplicial Homology Global Optimization (`shgo`)
* Dual Annealing optimization (`dual_annealing`)
**Licence** LMFIT is available under the new BSD-3 licence – details can be found [here](https://lmfit.github.io/lmfit-py/installation.html#copyright-licensing-and-re-distribution)
The lmfit minimizer is set as follows:
```
[MINIMIZERS]
lmfit: differential_evolution
powell
cobyla
slsqp
emcee
nelder
least_squares
trust-ncg
trust-exact
trust-krylov
trust-constr
dogleg
leastsq
newton
tnc
lbfgsb
bfgs
cg
ampgo
shgo
dual_annealing
```
Note
The shgo solver is particularly slow running and should generally be avoided. As a result, this solver is not run by default when lmfit software is selected. In order to run this minimizer, you must explicitly set it as above.
Warning
emcee uses a Markov Chain Monte Carlo package and assumes that the prior is Uniform. This may not perform well for certain fitting problems.
####### Matlab (`matlab`)[](#matlab-matlab)
We call the [fminsearch](https://uk.mathworks.com/help/matlab/ref/fminsearch.html)
function from [MATLAB](https://uk.mathworks.com/products/matlab.html), using the MATLAB Engine API for Python.
* Nelder-Mead Simplex (`Nelder-Mead Simplex`)
**Licence** Matlab is a [proprietary product](https://www.mathworks.com/pricing-licensing.html) .
The matlab minimizer is set as follows:
```
[MINIMIZERS]
matlab: Nelder-Mead Simplex
```
Warning
MATLAB must be installed for this to be available; See [Installing External Software](index.html#external-instructions).
####### Matlab Curve Fitting Toolbox (`matlab_curve`)[](#matlab-curve-fitting-toolbox-matlab-curve)
We call the [fit](https://uk.mathworks.com/help/curvefit/fit.html)
function from the [MATLAB Curve Fitting Toolbox](https://uk.mathworks.com/help/curvefit/index.html),
using the MATLAB Engine API for Python.
* Levenberg-Marquardt (`Levenberg-Marquardt`)
* Trust-Region (`Trust-Region`)
**Licence** Matlab and the Curve Fitting Toolbox are both [proprietary products](https://www.mathworks.com/pricing-licensing.html) .
The matlab_curve minimizers are set as follows:
```
[MINIMIZERS]
matlab_curve: Levenberg-Marquardt
Trust-Region
```
Warning
MATLAB Curve Fitting Toolbox must be installed for this to be available; See [Installing External Software](index.html#external-instructions).
####### Matlab Optimization Toolbox (`matlab_opt`)[](#matlab-optimization-toolbox-matlab-opt)
We call the [lsqcurvefit](https://uk.mathworks.com/help/optim/ug/lsqcurvefit.html)
function from the [MATLAB Optimization Toolbox](https://uk.mathworks.com/products/optimization.html),
using the MATLAB Engine API for Python.
* Levenberg-Marquardt (`levenberg-marquardt`)
* Trust-Region-Reflective (`trust-region-reflective`)
**Licence** Matlab and the Optimization Toolbox are both [proprietary products](https://www.mathworks.com/pricing-licensing.html) .
The matlab_opt minimizers are set as follows:
```
[MINIMIZERS]
matlab_opt: levenberg-marquardt
trust-region-reflective
```
Warning
MATLAB Optimization Toolbox must be installed for this to be available; See [Installing External Software](index.html#external-instructions).
####### Matlab Statistics Toolbox (`matlab_stats`)[](#matlab-statistics-toolbox-matlab-stats)
We call the [nlinfit](https://uk.mathworks.com/help/stats/nlinfit.html)
function from the [MATLAB Statistics Toolbox](https://uk.mathworks.com/products/statistics.html),
using the MATLAB Engine API for Python.
* Levenberg-Marquardt (`Levenberg-Marquardt`)
**Licence** Matlab and the Statistics Toolbox are both [proprietary products](https://www.mathworks.com/pricing-licensing.html) .
The matlab_stats minimizer is set as follows:
```
[MINIMIZERS]
matlab_stats: Levenberg-Marquardt
```
Warning
MATLAB Statistics Toolbox must be installed for this to be available; See [Installing External Software](index.html#external-instructions).
####### Minuit (`minuit`)[](#minuit-minuit)
CERN developed the [Minuit 2](https://root.cern.ch/doc/master/Minuit2Page.html) package to find the minimum value of a multi-parameter function, and also to compute the uncertainties.
We interface via the python interface [iminuit](https://iminuit.readthedocs.io) with support for the 2.x series.
* [Minuit’s MIGRAD](https://iminuit.readthedocs.io/en/stable/reference.html#iminuit.Minuit.migrad) (`migrad`)
* [Minuit’s SIMPLEX](https://iminuit.readthedocs.io/en/stable/reference.html#iminuit.Minuit.simplex) (`simplex`)
**Links** [Github - iminuit](https://github.com/scikit-hep/iminuit)
**Licence** iminuit is released under the [MIT licence](https://github.com/scikit-hep/iminuit/blob/develop/LICENSE), while Minuit 2 is [LGPL v2](https://github.com/root-project/root/blob/master/LICENSE) .
The Minuit minimizers are set as follows:
```
[MINIMIZERS]
minuit: migrad
simplex
```
Warning
The additional dependency Minuit must be installed for this to be available;
See [Extra dependencies](index.html#extra-dependencies).
####### NLopt (`nlopt`)[](#nlopt-nlopt)
[NLopt](https://nlopt.readthedocs.io/en/latest/) is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online, as well as original implementations of various other algorithms.
* Bound Optimization BY Quadratic Approximation (`LN_BOBYQA`)
* NEW Unconstrained Optimization Algorithm (`LN_NEWUOA`)
* NEW Unconstrained Optimization Algorithm, bound algorithm (`LN_NEWUOA_BOUND`)
* Principal-axis method (`LN_PRAXIS`)
* Sequential Least Squares Programming (`LD_SLSQP`)
* Shifted limited-memory variable-metric (rank 1) (`LD_VAR1`)
* Shifted limited-memory variable-metric (rank 2) (`LD_VAR2`)
* Augmented Lagrangian local (`AUGLAG`)
* Augmented Lagrangian local (equality constraints) (`AUGLAG_EQ`)
* Nelder-Mead Simplex (`LN_NELDERMEAD`)
* Subplex (`LN_SBPLX`)
* Constrained Optimization BY Linear Approximations (`LN_COBYLA`)
* Conservative convex separable approximation (`LD_CCSAQ`)
* Method of Moving Asymptotes (`LD_MMA`)
* Newton (`LD_TNEWTON`)
* Newton preconditioned by the low-storage BFGS algorithm (`LD_TNEWTON_PRECOND`)
* Newton with steepest-descent restarting (`LD_TNEWTON_RESTART`)
* Newton preconditioned by the low-storage BFGS algorithm with steepest-descent restarting (`LD_TNEWTON_PRECOND_RESTART`)
* Low-storage BFGS (`LD_LBFGS`)
* DIviding RECTangles (`GN_DIRECT`)
* DIviding RECTangles (locally biased) (`GN_DIRECT_L`)
* DIviding RECTangles (locally biased which uses some randomization) (`GN_DIRECT_L_RAND`)
* DIviding RECTangles (unscaled variant) (`GNL_DIRECT_NOSCAL`)
* DIviding RECTangles (locally biased and unscaled variant) (`GN_DIRECT_L_NOSCAL`)
* DIviding RECTangles (locally biased, unscaled variant which uses some randomization) (`GN_DIRECT_L_RAND_NOSCAL`)
* DIviding RECTangles (based on the original Fortran code by Gablonsky et al. (1998-2001)) (`GN_ORIG_DIRECT`)
* DIviding RECTangles (based on the original Fortran code by Gablonsky et al. (1998-2001) and locally biased ) (`GN_ORIG_DIRECT_L`)
* Controlled Random Search (`GN_CRS2_LM`)
* Multi-Level Single-Linkage, low-discrepancy sequence (`G_MLSL_LDS`)
* Multi-Level Single-Linkage (`G_MLSL`)
* Stochastic Global Optimization (`GD_STOGO`)
* Stochastic Global Optimization, randomized variant (`GD_STOGO_RAND`)
* AGS (`GN_AGS`)
* Improved Stochastic Ranking Evolution Strategy (`GN_ISRES`)
The Nlopt minimizers are set as follows:
```
[MINIMIZERS]
nlopt: LN_BOBYQA
LN_NEWUOA
LN_NEWUOA_BOUND
LN_PRAXIS
LD_SLSQP
LD_VAR2
LD_VAR1
AUGLAG
AUGLAG_EQ
LN_NELDERMEAD
LN_SBPLX
LN_COBYLA
LD_CCSAQ
LD_MMA
LD_TNEWTON_PRECOND_RESTART
LD_TNEWTON_PRECOND
LD_TNEWTON_RESTART
LD_TNEWTON
LD_LBFGS
GN_DIRECT
GN_DIRECT_L
GN_DIRECT_L_RAND
GNL_DIRECT_NOSCAL
GN_DIRECT_L_NOSCAL
GN_DIRECT_L_RAND_NOSCAL
GN_ORIG_DIRECT
GN_ORIG_DIRECT_L
GN_CRS2_LM
G_MLSL_LDS
G_MLSL
GD_STOGO
GD_STOGO_RAND
GN_AGS
GN_ISRES
```
Note
The global optimization solvers are not run by default when nlopt software is selected. In order to run these minimizers, you must explicitly set them as above.
Note
The following four minimizers require a local optimizer in order to run; this has been set to `LD_LBFGS`.
```
[MINIMIZERS]
nlopt: AUGLAG
AUGLAG_EQ
G_MLSL_LDS
G_MLSL
```
####### RALFit (`ralfit`)[](#ralfit-ralfit)
[RALFit](https://ralfit.readthedocs.io/projects/Fortran/en/latest/)
is a nonlinear least-squares solver, the development of which was funded by the EPSRC grant Least-Squares: Fit for the Future. RALFit is designed to be able to take advantage of higher order derivatives, although only first order derivatives are currently utilized in FitBenchmarking.
* Gauss-Newton, trust region method (`gn`)
* Hybrid Newton/Gauss-Newton, trust region method (`hybrid`)
* Newton, trust region method (`newton`)
* Newton-tensor, trust region method (`newton-tensor`)
* Gauss-Newton, regularization (`gn_reg`)
* Hybrid Newton/Gauss-Newton, regularization (`hybrid_reg`)
* Newton, regularization (`newton_reg`)
* Newton-tensor, regularization (`newton-tensor_reg`)
Note that the Newton-tensor methods take significantly longer than the other options to run (but may give a better solution in some cases). For this reason, they are not included in the default minimizers for RALFit, but must be turned on in the options file.
**Links** [Github - RALFit](https://github.com/ralna/ralfit/). RALFit’s Documentation on: [Gauss-Newton/Hybrid models](https://ralfit.readthedocs.io/projects/Fortran/en/latest/method.html#the-models), [the trust region method](https://ralfit.readthedocs.io/projects/Fortran/en/latest/method.html#the-trust-region-method) and [The regularization method](https://ralfit.readthedocs.io/projects/C/en/latest/method.html#regularization)
**Licence** RALFit is available under a [3-clause BSD Licence](https://github.com/ralna/RALFit/blob/master/LICENCE)
The RALFit minimizers are set as follows:
```
[MINIMIZERS]
ralfit: gn
gn_reg
hybrid
hybrid_reg
```
Warning
The external package RALFit must be installed to use these minimizers.
####### SciPy (`scipy`)[](#scipy-scipy)
[SciPy](https://www.scipy.org) is the standard python package for mathematical software. In particular, we use the [minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html)
solver for general minimization problems from the optimization chapter of SciPy’s library. By default, we only run the algorithms that do not require Hessian information as inputs; the Hessian-enabled solvers must be switched on explicitly (see the note below).
* [Nelder-Mead algorithm](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-neldermead.html) (`Nelder-Mead`)
* [Powell algorithm](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-powell.html) (`Powell`)
* [Conjugate gradient algorithm](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-cg.html) (`CG`)
* [BFGS algorithm](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-bfgs.html) (`BFGS`)
* [Newton-CG algorithm](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-newtoncg.html) (`Newton-CG`)
* [L-BFGS-B algorithm](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html) (`L-BFGS-B`)
* [Truncated Newton (TNC) algorithm](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-tnc.html) (`TNC`)
* [Sequential Least SQuares Programming](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-slsqp.html) (`SLSQP`)
* [Constrained Optimization BY Linear Approximations](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-cobyla.html) (`COBYLA`)
* [Newton-CG trust-region](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-trustncg.html) (`trust-ncg`)
* [Nearly exact trust-region](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-trustexact.html) (`trust-exact`)
* [Newton Generalized Lanczos Trust Region](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-trustkrylov.html) (`trust-krylov`)
* [Trust-region for constrained optimization](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-trustconstr.html) (`trust-constr`)
* [Dog-leg trust-region](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-dogleg.html) (`dogleg`)
**Links** [Github - SciPy minimize](https://github.com/scipy/scipy/blob/master/scipy/optimize/_minimize.py)
**Licence** Scipy is available under a [3-clause BSD Licence](https://github.com/scipy/scipy/blob/master/LICENSE.txt). Individual packages may have their own (compatible) licences, as listed [here](https://github.com/scipy/scipy/blob/master/LICENSES_bundled.txt).
The SciPy minimizers are set as follows:
```
[MINIMIZERS]
scipy: Nelder-Mead
Powell
CG
BFGS
Newton-CG
L-BFGS-B
TNC
SLSQP
COBYLA
trust-ncg
trust-exact
trust-krylov
trust-constr
dogleg
```
Note
The Hessian enabled solvers are not run by default when scipy software is selected. In order to run these minimizers, you must explicitly set them as above.
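For context, these option strings correspond to the `method` argument of `scipy.optimize.minimize`. A minimal sketch, using the Rosenbrock function as a stand-in objective (not part of FitBenchmarking itself):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: a classic test objective with its minimum at (1, 1).
def rosen(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# "Nelder-Mead" is the same string used in the [MINIMIZERS] section.
result = minimize(rosen, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
print(result.x)  # converges to approximately [1.0, 1.0]
```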
####### SciPy LS (`scipy_ls`)[](#scipy-ls-scipy-ls)
[SciPy](https://www.scipy.org) is the standard python package for mathematical software. In particular, we use the [least_squares](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html#scipy.optimize.least_squares)
solver for Least-Squares minimization problems from the optimization chapter of SciPy’s library.
* Levenberg-Marquardt with supplied Jacobian (`lm-scipy`) - a wrapper around MINPACK
* The Trust Region Reflective algorithm (`trf`)
* A dogleg algorithm with rectangular trust regions (`dogbox`)
**Links** [Github - SciPy least_squares](https://github.com/scipy/scipy/blob/master/scipy/optimize/_lsq/least_squares.py)
**Licence** Scipy is available under a [3-clause BSD Licence](https://github.com/scipy/scipy/blob/master/LICENSE.txt). Individual packages may have their own (compatible) licences, as listed [here](https://github.com/scipy/scipy/blob/master/LICENSES_bundled.txt).
The SciPy least squares minimizers are set as follows:
```
[MINIMIZERS]
scipy_ls: lm-scipy
trf
dogbox
```
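These names are passed through to the `method` argument of `scipy.optimize.least_squares`. A minimal sketch fitting a two-parameter exponential to noise-free synthetic data (the model and values are illustrative, not a FitBenchmarking problem):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data generated from y = 2.5 * exp(1.3 * x).
xdata = np.linspace(0.0, 1.0, 20)
ydata = 2.5 * np.exp(1.3 * xdata)

def residuals(p):
    # Residual vector r(p) = model(p) - data, as least_squares expects.
    return p[0] * np.exp(p[1] * xdata) - ydata

# "trf" is the same string used in the [MINIMIZERS] section above.
result = least_squares(residuals, x0=[1.0, 1.0], method="trf")
print(result.x)  # recovers approximately [2.5, 1.3]
```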
####### SciPy GO (`scipy_go`)[](#scipy-go-scipy-go)
[SciPy](https://www.scipy.org) is the standard python package for mathematical software. In particular, we use the [Global Optimization](https://docs.scipy.org/doc/scipy/reference/optimize.html#global-optimization)
solvers for global optimization problems from the optimization chapter of SciPy’s library.
* [Differential Evolution (derivative-free)](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution) (`differential_evolution`)
* [Simplicial Homology Global Optimization (SHGO)](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.html#scipy.optimize.shgo) (`shgo`)
* [Dual Annealing](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_annealing.html#scipy.optimize.dual_annealing) (`dual_annealing`)
**Links** [Github - SciPy optimization](https://github.com/scipy/scipy/blob/master/scipy/optimize/)
**Licence** Scipy is available under a [3-clause BSD Licence](https://github.com/scipy/scipy/blob/master/LICENSE.txt). Individual packages may have their own (compatible) licences, as listed [here](https://github.com/scipy/scipy/blob/master/LICENSES_bundled.txt).
The SciPy global optimization minimizers are set as follows:
```
[MINIMIZERS]
scipy_go: differential_evolution
shgo
dual_annealing
```
Note
The shgo solver is particularly slow running and should generally be avoided. As a result, this solver is not run by default when scipy_go software is selected. In order to run this minimizer, you must explicitly set it as above.
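These map onto the corresponding SciPy global optimizers, which take a `bounds` argument rather than a starting point. A minimal sketch with a hypothetical bowl-shaped objective:

```python
from scipy.optimize import differential_evolution

# Simple quadratic objective with its minimum at (3, -1).
def objective(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

# seed fixes the stochastic search for reproducibility.
result = differential_evolution(objective, bounds=[(-5, 5), (-5, 5)], seed=0)
print(result.x)  # approximately [3.0, -1.0]
```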
####### Theseus (`theseus`)[](#theseus-theseus)
[Theseus](https://sites.google.com/view/theseus-ai/) is an efficient application-agnostic library for building custom nonlinear optimization layers in PyTorch to support constructing various problems in robotics and vision as end-to-end differentiable architectures.
* Levenberg-Marquardt (`Levenberg_Marquardt`)
* Gauss Newton (`Gauss-Newton`)
**Links** [Paper- Theseus optimization](https://arxiv.org/pdf/2207.09442.pdf/)
**Licence** Theseus is available under a [MIT licence](https://github.com/facebookresearch/theseus/blob/main/LICENSE).
The theseus minimizers are set as follows:
```
[MINIMIZERS]
theseus: Levenberg_Marquardt
Gauss-Newton
```
Note
We strongly recommend installing Theseus in a venv or conda environment with Python 3.7–3.9.
###### Jacobian Options[](#jacobian-options)
The Jacobian section allows you to control which methods for computing Jacobians the software uses.
####### Analytic (`analytic`)[](#analytic-analytic)
Analytic Jacobians can only be used for specific [Problem Definition Files](index.html#problem-def). Currently the supported formats are cutest and NIST. The only option is:
* `default` - use the analytic derivative provided by a supported format.
Default is `default`
```
[JACOBIAN]
analytic: default
```
####### SciPy (`scipy`)[](#scipy-scipy)
Calculates the Jacobian numerically using SciPy; this uses `scipy.optimize._numdiff.approx_derivative`. The supported options are:
* `2-point` - use the first order accuracy forward or backward difference.
* `3-point` - use central difference in interior points and the second order accuracy forward or backward difference near the boundary.
* `cs` - use a complex-step finite difference scheme. This assumes that the user function is real-valued and can be analytically continued to the complex plane. Otherwise, produces bogus results.
Default is `2-point`
**Licence** SciPy is available under a [3-clause BSD Licence](https://github.com/scipy/scipy/blob/master/LICENSE.txt). Individual packages may have their own (compatible) licences, as listed [here](https://github.com/scipy/scipy/blob/master/LICENSES_bundled.txt).
```
[JACOBIAN]
scipy: 2-point
```
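The three schemes trade cost against accuracy. A self-contained sketch of each, using a plain scalar function rather than FitBenchmarking's internals:

```python
import numpy as np

def f(x):
    return np.sin(x)

x0, h = 1.0, 1e-6
exact = np.cos(x0)  # analytic derivative, for comparison

# 2-point: one extra function evaluation, first-order accurate.
two_point = (f(x0 + h) - f(x0)) / h

# 3-point: two extra evaluations, second-order accurate.
three_point = (f(x0 + h) - f(x0 - h)) / (2.0 * h)

# cs: complex-step, accurate to machine precision, but requires the
# function to be real-valued and analytically continuable to the complex plane.
cs = f(x0 + 1e-200j).imag / 1e-200

print(abs(two_point - exact), abs(three_point - exact), abs(cs - exact))
```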
####### Solver Default Jacobian (`default`)[](#solver-default-jacobian-default)
This uses the approximation of the Jacobian that is used by default in the minimizer,
and will vary between solvers. If the minimizer requires the user to pass a Jacobian,
a warning will be printed to the screen and the [SciPy (scipy)](#scipy-jac) 2-point approximation will be used. The only option is:
* `default` - use the default derivative approximation provided by the software.
Default is `default`
```
[JACOBIAN]
default: default
```
####### Numdifftools (`numdifftools`)[](#numdifftools-numdifftools)
Calculates the Jacobian using the python package `numdifftools`.
We allow the user to change the method used, but other options
(e.g., the step size generator and the order of the approximation) are left at their defaults.
The supported options are:
* `central` - central differencing. Almost as accurate as complex, but with no restriction on the type of function.
* `forward` - forward differencing.
* `backward` - backward differencing.
* `complex` - based on the complex-step derivative method of [Lyness and Moler](http://epubs.siam.org/doi/abs/10.1137/0704019). Usually the most accurate, provided the function is analytic.
* `multicomplex` - extends complex method using multicomplex numbers. (see, e.g., [Lantoine, Russell, Dargent (2012)](https://dl.acm.org/doi/10.1145/2168773.2168774)).
Default is `central`.
**Licence** `numdifftools` is available under a [3-clause BSD Licence](https://github.com/pbrod/numdifftools/blob/master/LICENSE.txt).
```
[JACOBIAN]
numdifftools: central
```
###### Hessian Options[](#hessian-options)
The Hessian section allows you to control which methods for computing Hessians the software uses.
####### Analytic (`analytic`)[](#analytic-analytic)
Analytic Hessians can only be used for specific [Problem Definition Files](index.html#problem-def). Currently the supported formats are cutest and NIST. The only option is:
* `default` - use the analytic derivative provided by a supported format.
Default is `default`
```
[HESSIAN]
analytic: default
```
####### SciPy (`scipy`)[](#scipy-scipy)
Calculates the Hessian from the Jacobian using finite differencing in SciPy; this uses `scipy.optimize._numdiff.approx_derivative`. The supported options are:
* `2-point` - use the first order accuracy forward or backward difference.
* `3-point` - use central difference in interior points and the second order accuracy forward or backward difference near the boundary.
* `cs` - use a complex-step finite difference scheme. This assumes that the user function is real-valued and can be analytically continued to the complex plane. Otherwise, produces bogus results.
Default is `2-point`
**Licence** SciPy is available under a [3-clause BSD Licence](https://github.com/scipy/scipy/blob/master/LICENSE.txt). Individual packages may have their own (compatible) licences, as listed [here](https://github.com/scipy/scipy/blob/master/LICENSES_bundled.txt).
```
[HESSIAN]
scipy: 2-point
```
####### Default Hessian (`default`)[](#default-hessian-default)
Hessian information is not passed to minimizers. The only option is:
* `default` - don’t pass Hessian information to minimizers.
Default is `default`
```
[HESSIAN]
default: default
```
####### Numdifftools (`numdifftools`)[](#numdifftools-numdifftools)
Calculates the Hessian from the Jacobian using the python package `numdifftools`.
We allow the user to change the method used, but other options
(e.g., the step size generator and the order of the approximation) are set to the defaults.
The supported options are:
* `central` - central differencing. Almost as accurate as complex, but with no restriction on the type of function.
* `forward` - forward differencing.
* `backward` - backward differencing.
* `complex` - based on the complex-step derivative method of [Lyness and Moler](http://epubs.siam.org/doi/abs/10.1137/0704019). Usually the most accurate, provided the function is analytic.
* `multicomplex` - extends complex method using multicomplex numbers. (see, e.g., [Lantoine, Russell, Dargent (2012)](https://dl.acm.org/doi/10.1145/2168773.2168774)).
Default is `central`.
**Licence** `numdifftools` is available under a [3-clause BSD Licence](https://github.com/pbrod/numdifftools/blob/master/LICENSE.txt).
```
[HESSIAN]
numdifftools: central
```
###### Output Options[](#output-options)
The output section contains options to control how results are outputted and presented.
####### Results directory (`results_dir`)[](#results-directory-results-dir)
This is used to select where the output should be saved. If the
[results directory command line argument](index.html#change-results-directory)
is provided, this option is overridden.
Default is `fitbenchmarking_results`
```
[OUTPUT]
results_dir: fitbenchmarking_results
```
####### Make plots (`make_plots`)[](#make-plots-make-plots)
This allows the user to decide whether or not to create plots during runtime.
Toggling this to False will be much faster on large data sets.
Default is `True` (`yes`/`no` can also be used)
```
[OUTPUT]
make_plots: yes
```
####### Colourmap (`colour_map`)[](#colourmap-colour-map)
Specifies the name of the colourmap the user wishes to use, e.g. `magma`, `viridis`, `OrRd`. Options are:
* Any colourmap from the library in `matplotlib`, see the complete library [here](https://matplotlib.org/stable/gallery/color/colormap_reference.html).
* Appending `_r` to the end of the name will reverse the colourmap.
* Sequential colourmaps are generally recommended.
Default colourmap is `magma_r`
```
[OUTPUT]
colour_map: magma_r
```
####### Colourmap Range (`cmap_range`)[](#colourmap-range-cmap-range)
A two-element list used to specify the lower and upper limit of the chosen colourmap. Options are:
* `[lower_limit, upper_limit]` where limits consider the full colourscale limits to be 0 and 1, so any pair of values must fall within this range.
* Limits should be chosen so that the **white text** in the table cells remains readable.
Default for `magma` is `[0.2, 0.8]` (suitability depends on colourmap)
```
[OUTPUT]
colour_map: magma_r
cmap_range: [0.2, 0.8]
```
####### Colour Upper Limit (`colour_ulim`)[](#colour-upper-limit-colour-ulim)
Controls how relatively poorly a minimizer has to perform in order to receive the worst colour. For example,
a value of 100 would mean that any performance greater than or equal to 100 times worse than the best minimizer would receive the worst colour. This ensures that colour scale is not compromised by especially poor relative results. Options are:
* Any float between `1` and `np.inf`
* Recommended value `100`
Default is `100`
```
[OUTPUT]
colour_map: magma_r
cmap_range: [0.2, 0.8]
colour_ulim: 100
```
####### Comparison mode (`comparison_mode`)[](#comparison-mode-comparison-mode)
This selects the mode for displaying values in the resulting table. Options are `abs`, `rel`, and `both`:
* `abs` indicates that the absolute values should be displayed
* `rel` indicates that the values should all be relative to the best result
* `both` will show data in the form “abs (rel)”
Default is `both`
```
[OUTPUT]
comparison_mode: both
```
####### Table type (`table_type`)[](#table-type-table-type)
This selects the types of tables to be produced in FitBenchmarking.
Options are:
* `acc` indicates that the resulting table should contain the chi squared values for each of the minimizers.
* `runtime` indicates that the resulting table should contain the runtime values for each of the minimizers.
* `compare` indicates that the resulting table should contain both the chi squared value and runtime value for each of the minimizers. The tables produced have the chi squared values on the top line of the cell and the runtime on the bottom line of the cell.
* `local_min` indicates that the resulting table should return true if a local minimum was found, or false otherwise.
The value of \(\frac{|| J^T r||}{||r||}\) for those parameters is also returned.
The output looks like `{bool} (norm_value)`, and the colouring is red for false and cream for true.
This option is only meaningful for least-squares cost functions.
* `emissions` indicates that the resulting table should contain the CO2 emissions for each of the minimizers.
Default is `acc`, `runtime`, `compare`, `local_min`, and `emissions`.
```
[OUTPUT]
table_type: acc
runtime
compare
local_min
emissions
```
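The norm reported by the `local_min` table is straightforward to compute by hand. A sketch with made-up residuals `r` and Jacobian `J` at the fitted parameters:

```python
import numpy as np

# Hypothetical residual vector and Jacobian at the best-fit parameters.
r = np.array([0.1, -0.2, 0.05])
J = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# ||J^T r|| / ||r||: near zero at a local minimum of a least-squares problem,
# since the gradient of 0.5 * ||r||^2 is J^T r.
norm_ratio = np.linalg.norm(J.T @ r) / np.linalg.norm(r)
print(norm_ratio)
```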
###### Logging Options[](#logging-options)
The logging section contains options to control how fitbenchmarking logs information.
####### Logging file name (`file_name`)[](#logging-file-name-file-name)
This specifies the file path to write the logs to.
Default is `fitbenchmarking.log`
```
[LOGGING]
file_name: fitbenchmarking.log
```
####### Logging append (`append`)[](#logging-append-append)
This specifies whether to log in append mode or not.
If append mode is active, the log file will be extended with each subsequent run, otherwise the log will be cleared after each run.
Default is `False` (`yes`/`no` can also be used)
```
[LOGGING]
append: no
```
####### Logging level (`level`)[](#logging-level-level)
This specifies the minimum level of logging to display on console during runtime.
Options are (from most logging to least):
* `NOTSET`
* `DEBUG`
* `INFO`
* `WARNING`
* `ERROR`
* `CRITICAL`
Default is `INFO`
```
[LOGGING]
level: INFO
```
####### Logging external output (`external_output`)[](#logging-external-output-external-output)
This selects the amount of information displayed from third-parties.
There are 3 options:
* `display`: Print information from third-parties to the stdout stream during a run.
* `log_only`: Print information to the log file but not the stdout stream.
* `debug`: Do not intercept third-party use of output streams.
Default is `log_only`
```
[LOGGING]
external_output: log_only
```
The options file must be a `.ini` formatted file
([see here](https://docs.python.org/3/library/configparser.html#supported-ini-file-structure)).
Some example files can be found in the `examples` folder of the source, which is also available to download at [Benchmark problems](index.html#benchmarkproblems).
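Since the options file is standard `.ini`, its structure can be inspected with Python's built-in `configparser` (FitBenchmarking has its own options reader; this sketch only illustrates how the multi-line values above parse, with continuation lines indented):

```python
from configparser import ConfigParser

cfg = ConfigParser()
cfg.read_string("""
[MINIMIZERS]
scipy_ls: lm-scipy
    trf
    dogbox
""")

# Multi-line values parse as one newline-separated string.
minimizers = cfg.get("MINIMIZERS", "scipy_ls").split()
print(minimizers)  # ['lm-scipy', 'trf', 'dogbox']
```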
##### Problem Definition Files[](#problem-definition-files)
In FitBenchmarking, problems can be defined using several file formats.
The `examples/benchmark_problems` directory holds a collection of these that can be used for reference.
More information on the supported formats can be found on the following pages.
###### CUTEst File Format[](#cutest-file-format)
The CUTEst file format in FitBenchmarking is a slight modification of the
[SIF format](http://www.numerical.rl.ac.uk/lancelot/sif/sif.html).
Specifically, the data points, errors, and the number of variables must be defined in such a way as to allow FitBenchmarking to access this data; see below.
In FitBenchmarking, all SIF files are assumed to be CUTEst problems.
These problems are a subset of the problems in the
[CUTEr/st Test Problem Set](http://www.cuter.rl.ac.uk/Problems/mastsif.shtml),
which may have been adapted to work with FitBenchmarking.
The SIF file format is very powerful and CUTEst will work with arbitrary variable names; for FitBenchmarking, however, these must match a set of expected variable names.
**Licence** This file format needs PyCUTEst and the packages ARCHDefs, CUTEst and SIFDECODE to be installed.
PyCUTEst is available under the
[GPL-3](https://github.com/jfowkes/pycutest/blob/master/LICENSE) licence.
[ARCHDEFS](https://github.com/ralna/ARCHDefs/blob/master/LICENSE),
[CUTEst](https://github.com/ralna/CUTEst/blob/master/LICENSE) and
[SIFDECODE](https://github.com/ralna/SIFDecode/blob/master/LICENSE)
are available under an LGPL (v2.1 or later) licence.
####### Modifications to the SIF format for FitBenchmarking problems[](#modifications-to-the-sif-format-for-fitbenchmarking-problems)
In order for FitBenchmarking to access the data, the SIF files must be written using the following conventions.
######## Defining Data[](#defining-data)
Data should be defined using the format:
```
RE X<idx> <val_x>
RE Y<idx> <val_y>
RE E<idx> <val_error>
```
where `<idx>` is the index of the data point, and `<val_x>`, `<val_y>`,
and `<val_error>` are the values attributed to it.
Usually, `<idx>` will range from 1 to `<num_x>`, with that defined as:
```
IE M <num_x>
```
If `<idx>` does not start at 1, the following lines can be used to specify the range:
```
IE MLOWER <min_idx>
IE MUPPER <max_idx>
```
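Putting these cards together, a complete data section for three points with errors might look like (illustrative values):

```
IE M 3
RE X1 1.0
RE Y1 2.1
RE E1 0.1
RE X2 2.0
RE Y2 3.9
RE E2 0.1
RE X3 3.0
RE Y3 6.2
RE E3 0.2
```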
######## Defining Variables[](#defining-variables)
For the free variables in functions, we use the convention:
```
IE N <num_vars>
```
This is used to tell FitBenchmarking how many degrees of freedom we need to fit.
In some cases variables will be vectors, and the number of degrees of freedom will be greater; most problems use `NVEC` as a convention to input the number of vectors.
####### Support for Bounds[](#support-for-bounds)
Parameter ranges can be added to SIF files using the [BOUNDS](https://www.numerical.rl.ac.uk/lancelot/sif/node26.html)
indicator card.
Currently in Fitbenchmarking, problems with parameter ranges can be handled by SciPy, Bumps, Minuit, Mantid,
Matlab Optimization Toolbox, DFO, Levmar and RALFit fitting software. Please note that the following Mantid minimizers currently throw an exception when parameter ranges are used: BFGS, Conjugate gradient
(Fletcher-Reeves imp.), Conjugate gradient (Polak-Ribiere imp.) and SteepestDescent.
###### Native File Format[](#native-file-format)
In FitBenchmarking, the native file format is used to read IVP, Mantid, and SASView problems.
In this format, data is separated from the function. This allows running the same dataset against multiple different models to assess which is the most appropriate.
Examples of native problems are:
```
# FitBenchmark Problem
software = 'ivp'
name = 'Lorentz'
description = 'A simple lorentz system for testing the ivp parser. Exact results should be 10, 28, 8/3.'
input_file = 'lorentz3d.txt'
function = 'module=functions/lorentz,func=lorentz3d,step=0.1,sigma=11,r=30,b=3'
plot_scale = 'linear'
```
```
# FitBenchmark Problem
software = 'Mantid'
name = 'HIFI 113856'
description = 'An example of (full) detector calibration for the HIFI instrument'
input_file = 'HIFIgrouped_113856.txt'
function = 'name=FlatBackground,A0=0;name=DynamicKuboToyabe,BinWidth=0.050000000000000003,Asym=0.2,Delta=0.2,Field=0,Nu=0.1'
fit_ranges = {'x': [0.1, 16]}
```
```
# FitBenchmark Problem
software = 'SASView'
name = '1D cylinder (synthetic neutron) IV0'
description = 'A first iteration synthetic dataset generated for the 1D cylinder SASView model in the fashion of neutron small angle scattering experiments. Generated on Fri May 28 10:31:19 2021.'
input_file = '1D_cylinder_20_400_nosmearing_neutron_synth.txt'
function = 'name=cylinder,radius=35.0,length=350.0,background=0.0,scale=1.0,sld=4.0,sld_solvent=1.0'
plot_scale = 'loglog'
```
These examples show the basic structure in which the file starts with a comment indicating it is a FitBenchmark problem followed by key-value pairs. Available keys are described below:
**software**: Either ‘IVP’, ‘Mantid’, ‘SasView’, or ‘Horace’ (case insensitive).
This defines whether to use an IVP format, Mantid, or SasView to generate the model.
The ‘Mantid’ software also supports Mantid’s MultiFit functionality, which requires the parameters listed here to be defined slightly differently.
More information can be found in [Native File Format (Mantid MultiFit)](index.html#multifit).
For information on the ‘Horace’ format, see [Horace File Format](index.html#horace-format).
**Licence** Mantid is available under a
[GPL-3 Licence](https://github.com/mantidproject/mantid/blob/master/LICENSE.txt).
The component of SasView we use is SasModels, which is available under a
[BSD 3-clause](https://github.com/SasView/sasmodels/blob/master/LICENSE.txt) licence.
**name**: The name of the problem.
This will be used as a unique reference so should not match other names in the dataset. A sanitised version of this name will also be used in filenames with commas stripped out and spaces replaced by underscores.
**description**: A description of the dataset.
This will be displayed in the Problem Summary Pages and Fitting Reports produced by a benchmark.
**input_file**: The name of a file containing the data to fit.
The file must be in a subdirectory named `data_files`, and should have the form:
```
header
x11 [x12 [x13 ...]] y11 [y12 [y13 ...]] [e11 [e12 ...]]
x21 [x22 [x23 ...]] y21 [y22 [y23 ...]] [e21 [e22 ...]]
...
```
Mantid uses the convention of `# X Y E` as the header and SASView uses the convention `<X> <Y> <E>`, although neither of these is enforced.
The error column is optional in this format.
If the data contains multiple inputs or outputs, the header must be written in one of the above conventions with the labels as “x”, “y”, or “e” followed by a number. An example of this can be seen in
`examples/benchmark_problems/Data_Assimilation/data_files/lorentz.txt`
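For a single-input, single-output problem with errors, a minimal file in `data_files` could therefore look like (made-up values):

```
# X Y E
1.0 2.1 0.1
2.0 3.9 0.1
3.0 6.2 0.2
```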
**plot_scale**: The scale of the x and y axis for the plots. The options are ‘loglog’, ‘logy’, ‘logx’ and ‘linear’. If this is not set it will default to ‘linear’.
**function**: This defines the function that will be used as a model for the fitting.
Inside FitBenchmarking, this is passed on to the specified software and, as such, the format is specific to the package we wish to use, as described below.
**IVP**
The IVP parser allows a user to define `f` in the following equation:
\[x' = f(t, x, *args)\]
To do this we use a python module to define the function. As in the above formula, the function can take the following arguments:
* *t* (float): The time to evaluate at
* *x* (np.array): A value for x to evaluate at
* `*args` (floats): The parameters to fit
To link to this function we use a function string with the following parameters:
* *module*: The path to the module
* *func*: The name of the function within the module
* *step*: The time step that the input data uses
(currently only fixed steps are supported - if you need varying time steps please raise an issue on our GitHub)
* `*args`: Starting values for the parameters
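A sketch of what such a module might contain, assuming the `f(t, x, *args)` signature described above (the function and parameter names here are illustrative, not part of FitBenchmarking):

```python
import numpy as np

def decay(t, x, k):
    """Exponential decay model: x' = -k * x."""
    return -k * np.asarray(x)
```

The corresponding function string would then be something like `module=functions/decay,func=decay,step=0.1,k=0.5`, with `k=0.5` as the starting value for the fit.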
**Mantid**
A Mantid function consists of one or more base functions separated by a semicolon.
This allows for a powerful way of describing problems, which may have multiple components such as more than one Gaussian and a linear background.
To use one of the base functions in Mantid, please see the list available
[here](https://docs.mantidproject.org/nightly/fitting/fitfunctions/categories/FitFunctions.html).
*Note: Any non-standard arguments (e.g. ties, constraints, fixes, …) will only work with Mantid fitting software. Using other minimizers to fit these problems will result in the non-standard arguments being ignored.*
**SASView**
SASView functions can be any of
[these](http://www.sasview.org/docs/user/qtgui/Perspectives/Fitting/models/index.html).
**Horace**
The Horace functions are defined in [Horace File Format](index.html#horace-format).
**fit_ranges**: This specifies the region to be fit.
It takes the form shown in the example, where the first number is the minimum in the range and the second is the maximum.
**parameter_ranges**: An optional setting which specifies upper and lower bounds for parameters in the problem.
Similarly to `fit_ranges`, it takes the form where the first number is the minimum in the range and the second is the maximum.
Currently in Fitbenchmarking, problems with parameter_ranges can be handled by SciPy, Bumps, Minuit, Mantid, Matlab Optimization Toolbox,
DFO, Levmar and RALFit fitting software. Please note that the following Mantid minimizers currently throw an exception when parameter_ranges are used: BFGS, Conjugate gradient (Fletcher-Reeves imp.),
Conjugate gradient (Polak-Ribiere imp.) and SteepestDescent.
###### Native File Format (Mantid MultiFit)[](#native-file-format-mantid-multifit)
As part of the Mantid parsing we also offer limited support for Mantid’s
[MultiFit](https://docs.mantidproject.org/nightly/algorithms/Fit-v1.html?highlight=fit#multiple-fit)
functionality.
Here we outline how to use Mantid’s MultiFit with FitBenchmarking,
in which some options differ from the standard [Native File Format](index.html#native).
Warning
Due to the way Mantid uses ties (a central feature of MultiFit),
MultiFit problems can only be used with Mantid minimizers.
In this format, data is separated from the function. This allows running the same dataset against multiple different models to assess which is the most appropriate.
An example of a multifit problem is:
```
# FitBenchmark Problem
software = 'Mantid'
name = 'MUSR62260'
description = 'Calibration data for mu SR intrument. Run 62260.'
input_file = ['MUSR62260_bkwd.txt','MUSR62260_bottom.txt','MUSR62260_fwd.txt','MUSR62260_top.txt']
function = 'name=FlatBackground,A0=0; name=GausOsc,A=0.2,Sigma=0.2,Frequency=1,Phi=0'
ties = ['f1.Sigma', 'f1.Frequency']
fit_ranges = [{'x': [0.1, 15.0]}, {'x': [0.1, 15.0]}, {'x': [0.1, 15.0]}, {'x': [0.1, 15.0]}]
plot_scale = 'linear'
```
Below we outline the differences between this and the [Native File Format](index.html#native).
**software**: Must be Mantid.
**name**: As in [Native File Format](index.html#native).
**description**: As in [Native File Format](index.html#native).
**input_file**: As in [Native File Format](index.html#native), but you must pass in a list of data files (see above example).
**function**: As in [Native File Format](index.html#native).
When fitting, this function will be used for each of the `input_files` given simultaneously.
**ties**: This entry is used to define global variables by tying a variable across input files.
Each string in the list should reference a parameter in the function using Mantid’s convention of `f<i>.<name>` where `i` is the position of the function in the function string, and `name` is the global parameter.
For example to run a fit which has a shared background and peak height,
the function and ties fields might look like:
```
function='name=LinearBackground, A0=0, A1=0; name=Gaussian, Height=0.01, PeakCentre=0.00037, Sigma=1e-05'
ties=['f0.A0', 'f0.A1', 'f1.Height']
```
**fit_ranges**: As in [Native File Format](index.html#native).
###### NIST Format[](#nist-format)
The NIST file format is based on the [nonlinear regression](https://www.itl.nist.gov/div898/strd/nls/nls_main.shtml) problems found at the [NIST Standard Reference Database](https://www.itl.nist.gov/div898/strd/). Documentation and background of these problems can be found [here](https://www.itl.nist.gov/div898/strd/general/bkground.html).
We note that FitBenchmarking recognizes the NIST file type by checking that the first line of the file starts with `# NIST/ITL StRD`.
###### Horace File Format[](#horace-file-format)
The Horace file format is based on the [Native File Format](index.html#native); this section demonstrates where the format differs.
Examples of Horace problems are:
```
# FitBenchmark Problem software = 'Horace'
name = '1D Gaussian 1'
description = '1D Gaussian (Test example from test_multifit_herbert)'
input_file = 'testdata_multifit_1.mat'
function = 'foreground=m_scripts/functions/mftest_gauss_bkgd_fb_test.m ,height=100,centre=50,sigma=7,const=0,grad=0'
wye_function = 'matlab_script=m_scripts/wye_functions/fb_wye_IX_1D_test1.m'
simulate_function = 'matlab_script=m_scripts/simulate_functions/fb_simulate_IX_1D_test1.m'
```
```
# FitBenchmark Problem software = 'Horace'
name = '3D_Gaussian'
description = '3D Gaussian (Horace\documentation\herbert\example_codes). The data in this example is generated in the wye_function'
input_file = 'dummy.txt'
function = 'foreground=m_scripts/functions/gauss3d_fb_test.m ,height=1100,centre_p1=66 ,centre_p2=1055,centre_p3=117,covmat_p1=15,covmat_p2=3,covmat_p3=5,covmat_p4=30,covmat_p5=-3,covmat_p6=20; background=m_scripts/functions/linear3D_bg_fb_test.m,bkgd_const=15'
wye_function = 'matlab_script=m_scripts/wye_functions/fb_wye_IX_1D_test6.m'
simulate_function = 'matlab_script=m_scripts/simulate_functions/fb_simulate_IX_1D_test6.m'
```
```
# FitBenchmark Problem software = 'Horace'
name = 'PCSMO_at_001_data'
description = 'PCSMO data at 0.0001 data'
input_file = 'pcsmo_ei70_base_bkd.sqw'
function = 'foreground=m_scripts/functions/testfunc_nb_sqw_db_test.m ,JF1=-11.39,JA=1.5,JF2=-1.35,JF3=1.5,Jperp=0.88,D=0.074,en_width=0.1'
wye_function = 'matlab_script=m_scripts/wye_functions/fb_wye_pcsmo_test.m'
simulate_function = 'matlab_script=m_scripts/simulate_functions/fb_simulate_pcsmo_test.m'
```
The Horace file format requires you to have successfully run the benchmark problem in Horace using `fit()` and `simulate()`. Relevant documentation on how to do this: [Multifit](https://pace-neutrons.github.io/Horace/unstable/manual/Multifit.html),
[Advanced Multifit](https://pace-neutrons.github.io/Horace/unstable/manual/Advanced_Multifit.html)
and [Tobyfit](https://pace-neutrons.github.io/Horace/unstable/manual/Tobyfit.html), as well as
[Running Horace in Parallel](https://pace-neutrons.github.io/Horace/unstable/manual/Parallel.html).
As in the native format, an input file must start with a comment indicating that it is a FitBenchmarking problem followed by a number of key value pairs.
Available keys can be seen in [Native File Format](index.html#native) and below:
`software`, `name`, `description` : As described in the native format.
`input_file` : For Horace we require a `.sqw` or `.mat` file containing preprocessed, Horace-compatible data.
Note
The `.mat` file is the result of using [save(file, sqw_objects)](https://uk.mathworks.com/help/matlab/ref/save.html)
and should be used if you are loading in multiple `sqw` objects. Make sure to load the data appropriately in the `wye_function`.
`function` : The function is defined by one or more Matlab script (`.m`) files which return a model of the foreground, or of the foreground and background, respectively.
The format for defining the function is based on comma-separated key-value pairs, where the Matlab script files are defined by the variables “foreground” and “background”. The remaining pairs define the starting values for each of the models, respectively. It’s important to note that these pairs must be defined after their respective models.
If both the foreground and background models are defined, they should be given as a semicolon-separated list. In this case, there would be two comma-separated key-value pairs, one for the foreground and foreground parameters, and another for the background and background parameters, separated by a semicolon.
Examples:
> Only foreground:
> ```
> function = 'foreground=m_scripts/functions/mftest_gauss_bkgd.m ,height=100,centre=50,sigma=7,const=0,grad=0'
> ```
> Foreground and background:
> ```
> function = 'foreground=m_scripts/functions/gauss.m ,height=1100,centre=66 ,stdev=13; background=m_scripts/functions/linear_bg.m ,bkgd_const=15'
> ```
Note
All parameters must have unique names e.g.
```
function = 'foreground=gauss.m ,height=100,centre=50,sigma=7 ; background=gauss.m ,height_bkgd=100,centre_bkgd=50,sigma_bkgd=7'
```
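To make the layout of the `function` entry concrete, here is a short sketch that splits such a string into its model scripts and starting values. It is written in Python for illustration (FitBenchmarking itself is a Python package), and `parse_function_string` is a hypothetical helper name, not part of the FitBenchmarking API:

```python
def parse_function_string(s):
    """Hypothetical helper: split a Horace 'function' entry into its
    model scripts and starting parameter values, following the
    comma/semicolon-separated key=value layout described above."""
    models = {}
    for part in s.split(";"):  # semicolons separate foreground/background
        pairs = [kv.split("=", 1) for kv in part.split(",")]
        head_key, head_val = (x.strip() for x in pairs[0])
        models[head_key] = {
            "script": head_val,  # path to the .m file
            "params": {k.strip(): float(v) for k, v in pairs[1:]},
        }
    return models
```

For the foreground-and-background example above, this would yield a `foreground` entry with `height`, `centre` and `stdev` start values, and a `background` entry with `bkgd_const`.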
`wye_function` : The wye_function is defined by a matlab file which returns:
* `w` : an `sqw`, `dnd`, `ix_dataset` object or `xye` struct (see [Multifit](https://pace-neutrons.github.io/Horace/unstable/manual/Multifit.html))
* `e` : standard deviation data
* `y` : intensity data
* `msk` : logical mask array of which elements are to be retained
This function takes the path to the datafile and the path to the matlab functions as arguments.
> Explained example of the wye_function:
> The first three lines add the path to the matlab functions needed for fitting and load the w object.
> ```
> addpath(genpath(path));
> source_data = load(datafile);
> w = source_data.w1;
> ```
> The next line gets the y, e and msk from the w object. These are the true y, e and msk values from the experiment.
> ```
> [y, e, msk] = sigvar_get(w);
> ```
> Any elements in msk that have a corresponding element in y that is equal to zero will be set to zero.
> The purpose of this is to exclude these elements from subsequent calculations, since they are not informative.
> ```
> msk(y==0) = 0;
> ```
> The last two lines of the wye_function apply the msk to the y and e data. As the e retrieved above is the
> variance, we take the square root of the value to get the standard deviation.
> ```
> y = y(msk);
> e = sqrt(e(msk));
> ```
> Examples of the wye_function:
> ```
> function [w, y, e, msk] = fb_wye_IX_1D_test(datafile, path)
> % Gets the w , y, e and msk from the sqw object
> addpath(genpath(path));
> source_data = load(datafile);
> w = source_data.w1 ;
> [y, e, msk] = sigvar_get(w);
> msk(y==0) = 0;
> y = y(msk);
> e = sqrt(e(msk));
> end
> ```
> ```
> function [w, y, e, msk] = fb_wye_pcsmo_test(datafile, path)
> % Gets the w , x, y ,e and msk from the sqw object
> sqw_file = datafile;
> addpath(genpath(path));
> ne = 10;
> frac = 1.e-6;
> ei = [25, 35, 50, 70, 100, 140];
> freq = [300, 200, 200, 250, 300, 400];
> proj = ortho_proj([1, 0, 0], [0, 1, 0], 'type', 'rrr');
> sample = IX_sample(true,[0,0,1],[0,1,0],'cuboid',[0.01,0.05,0.01]);
> sample.angdeg = [90 90 90];
> sample.alatt = [3.4 3.4 3.4];
> maps = maps_instrument(ei(4), freq(4), 'S');
> lower_e = ei(4)*0.2; upper_e = ei(4)*0.7;
> ebin = [lower_e, ei(4)/25, upper_e];
> w1 = cut_sqw(sqw_file, proj, [-1, 2/39, 1], [-1, 2/39, 1], [-10, 10], ebin);
> w1 = set_sample(w1,sample);
> w1 = set_instrument(w1,maps);
> w1 = mask_random_fraction_pixels(w1, 0.1);
> [y, e, msk] = sigvar_get(w1);
> msk(y==0) = 0;
> y = y(msk);
> e = sqrt(e(msk));
> w = w1;
> end
> ```
`simulate_function` : The simulate_function is defined by a matlab file which returns the derived y values from `simulate()` for the fitting function.
This matlab file takes in the w, fitpars (fitting parameters) and msk. The w and msk are the same as in the wye_function. The fitpars are determined by the current minimizer.
Explained Example of the simulate_function:
```
forefunc = @mftest_gauss_bkgd;
mf = multifit(w);
mf = mf.set_fun(forefunc);
mf = mf.set_pin(fitpars);
[wout,fitpar] = mf.simulate();
[y, e] = sigvar_get(wout);
y=y(msk);
```
Note
If the benchmark problem uses random numbers in any way (e.g. Tobyfit), a persistent seed needs to be set before `simulate()` is run.
This makes sure that the same seed is used every time `simulate()` is run.
```
persistent seed
if isempty(seed)
rng(3,"twister");
seed = rng();
else
rng(seed);
end
```
Examples of the simulate_function:
```
function y = fb_simulate_IX_1D_test1(w,fitpars,msk)
% simulate loop to solve for the parameters
forefunc = @mftest_gauss_bkgd_fb_test;
mf = multifit(w);
mf = mf.set_fun(forefunc);
mf = mf.set_pin(fitpars);
[wout,fitpar] = mf.simulate();
[y, e] = sigvar_get(wout);
y=y(msk);
end
```
```
function y = fb_simulate_pcsmo_test(w,fitpars,msk)
%simulate loop to solve for the parameters
persistent seed
if isempty(seed)
rng(3,"twister");
seed = rng();
else
rng(seed);
end
JF1 = -11.39; JA = 1.5; JF2 = -1.35; JF3 = 1.5; Jperp = 0.88; D = 0.074; en_width = 0.1;
frac = 1.e-6;
sw_obj = pcsmo_fb_test(JF1, JA, JF2, JF3, Jperp, D);
%fitpars = [JF1 JA JF2 JF3 Jperp D en_width];
cpars = {fitpars 'mat', {'JF1', 'JA', 'JF2', 'JF3', 'Jperp', 'D(3,3)'}, ...
'hermit', false, 'optmem', 0, 'useFast', false, 'formfact', ...
true,'resfun', 'gauss', 'coordtrans', diag([2 2 1 1]), ...
'use_brille', true, 'node_volume_fraction', frac, ...
'use_vectors', false, 'Qtrans', diag([1./4 1./4 1.])};
tbf = tobyfit(w);
tbf = tbf.set_fun(@sw_obj.horace_sqw, {cpars{:}});
tbf = tbf.set_mc_points(5);
[fit_data , fit_pars] = tbf.simulate();
[y, e, ~] = sigvar_get(fit_data);
y=y(msk);
end
```
Note
All the functions needed in the fitting must be in a subdirectory of the benchmark problem.
Note
If you have a non-standard installation of Horace, please set `HORACE_LOCATION` and `SPINW_LOCATION` as environment variables (e.g. on IDAaaS).
Note
Horace problems currently do not support plotting. Please set `make_plots: no` in the options file.
###### Detecting problem file type[](#detecting-problem-file-type)
FitBenchmarking detects which parser to use in two ways:
> * For the CUTEst file format, we check that the extension of the data file is `.sif`.
> * For native and NIST file formats, we check the first line of the file:
>   + `# FitBenchmark Problem` corresponds to the native format
>   + `# NIST/ITL StRD` corresponds to the NIST format
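The two checks above can be sketched in Python as follows. This is a simplified illustration, not FitBenchmarking's actual implementation, and the returned format labels are hypothetical:

```python
def detect_problem_format(path):
    """Illustrative sketch of the parser-detection logic described above.
    The returned labels are hypothetical, not FitBenchmarking's own names."""
    # CUTEst problems are identified by their .sif extension
    if str(path).lower().endswith(".sif"):
        return "cutest"
    # Other formats are identified by the first line of the file
    with open(path) as f:
        first_line = f.readline()
    if first_line.startswith("# FitBenchmark Problem"):
        return "fitbenchmark"
    if first_line.startswith("# NIST/ITL StRD"):
        return "nist"
    raise ValueError(f"Unrecognised problem file: {path}")
```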
##### Checkpointing in FitBenchmarking[](#checkpointing-in-fitbenchmarking)
In some cases, fitting can take a long time and rerunning the fits to change output options is inefficient. For this situation we provide a checkpointing file.
###### Using the checkpointing feature[](#using-the-checkpointing-feature)
As indicated above, this feature is currently only for rendering changes to the presentation of runs, although we plan to extend it to combining and filtering runs in the future.
By default, when running FitBenchmarking it will create a `checkpoint` file in the results directory which will contain all the information required to create output tables and plots.
To generate new reports for an existing checkpoint, use the
`--load_checkpoint` option:
```
fitbenchmarking --load_checkpoint
```
This can use the same options file as in the original run but with the output changed, or a separate one, as long as the results directory and checkpointing file are the same.
There is also a separate tool for working with checkpoint files,
`fitbenchmarking-cp`, that can be used to regenerate the reports.
```
fitbenchmarking-cp report --help
```
###### Warnings[](#warnings)
Using `--load_checkpoint` will not re-run any results or run any new combinations that have been added to the options file.
This command also does not check that the checkpoint file has not been edited manually. Manual editing may be desired for removing or combining data while these features are developed but should be done with caution to ensure results are still comparable.
##### FitBenchmarking Tests[](#fitbenchmarking-tests)
The tests for FitBenchmarking require `pytest>=3.6`. We have split the tests into three categories:
* `default`: denotes tests involving `pip` installable
[software packages](index.html#getting-started),
* `all`: in addition to `default`, also runs tests on
[external packages](index.html#external-instructions), with the exception of matlab.
* `matlab`: Runs tests for matlab fitting software. Please note that these tests can currently only be run locally through pytest.
###### Unit tests[](#unit-tests)
Each module directory in FitBenchmarking (e.g. `controllers`) contains a test folder which has the `unit` tests for that module.
One can run the tests for a module by:
```
pytest fitbenchmarking/<MODULE_DIR> --test-type <TEST_TYPE>
```
where `<TEST_TYPE>` is either `default` or `all`.
If the `--test-type` argument is not given, the default is `all`.
###### System tests[](#system-tests)
System tests can be found in the `systests` directory in FitBenchmarking.
As with the unit tests, these can be run via:
```
pytest fitbenchmarking/systests --test-type <TEST_TYPE>
```
Warning
The files in the expected results subdirectory of the `systests`
directory are generated to check consistency in our automated tests via
[GitHub Actions](https://github.com/fitbenchmarking/fitbenchmarking/actions).
They might not pass on your local operating system due to, for example,
different software package versions being installed.
###### GitHub Actions tests[](#github-actions-tests)
The scripts that are used for our automated tests via
[GitHub Actions](https://github.com/fitbenchmarking/fitbenchmarking/actions)
are located in the `ci` folder.
These give an example of how to run both the unit and system tests within FitBenchmarking.
##### Known Issues[](#known-issues)
This page is used to detail any known issues or unexpected behaviour within the software.
###### Problem-Format/Software Combinations[](#problem-format-software-combinations)
When comparing minimizer options from one software package
(e.g., comparing all scipy_ls minimizers), we are not aware of any issues.
However, the general problem of comparing minimizers from multiple software packages, and with different problem-formats, on truly equal terms is harder to achieve.
The following list details all cases where we are aware of a possible bias:
* **Using native FitBenchmarking problems with the Mantid software and fitting using Mantid.**
With Mantid data, the function evaluation is slightly faster for Mantid minimizers than for all other minimizers. You should account for this when interpreting the results obtained in this case.
* **Using non-scalar ties in native FitBenchmarking problems with the Mantid software.**
Mantid allows parameters to be tied to expressions, e.g. `X0=5.0` or `X0=X1*2`.
While scalar ties are now supported for all minimizers, the more complicated expressions are not. If you need this feature, please get in touch with the development team with your use case.
* **Running Mantid problems with Matlab fitting software.**
To run problems with Matlab fitting software through FitBenchmarking, the dynamically created cost_func.eval_model function is serialized within the Matlab Controller and then loaded in the Matlab Engine workspace. However, for Mantid problems this function is not picklable, resulting in the problem being skipped over.
In all cases, the stopping criterion of each minimizer is set to the default value.
An experienced user can change this.
###### Specific Problem/Minimizer Combinations[](#specific-problem-minimizer-combinations)
* **CrystalField Example with Mantid - DampedGaussNewton Minimizer.**
With this combination, GSL is known to crash during Mantid’s fitting.
This causes python to exit without completing any remaining runs or generating output files.
More information may be available via
[the issue on Mantid’s github page](https://github.com/mantidproject/mantid/issues/31176).
##### FitBenchmarking tutorial videos[](#fitbenchmarking-tutorial-videos)
On this page you will find some short tutorial videos on how to use FitBenchmarking.
###### Interpreting FitBenchmarking results[](#interpreting-fitbenchmarking-results)
The following video explains how to interpret FitBenchmarking results.
###### Running FitBenchmarking[](#running-fitbenchmarking)
The following video explains how to run FitBenchmarking.
####### Useful links:[](#useful-links)
[www.python.org/downloads/](https://www.python.org/downloads/)
####### Code demonstrated in this video:[](#code-demonstrated-in-this-video)
```
python -m pip install fitbenchmarking[bumps,DFO,gradient_free,minuit,SAS,numdifftools]
```
```
fitbenchmarking
```
```
fitbenchmarking -p examples/benchmark_problems/NIST/low_difficulty
```
```
fitbenchmarking -o examples/options_template.ini
```
```
fitbenchmarking -r new_results/
```
```
fitbenchmarking -t acc runtime
```
```
fitbenchmarking -t acc -l WARNING
```
###### Choosing your options[](#choosing-your-options)
The following video explains how to choose the best cost function / software / minimizer / Jacobian / Hessian for your data.
#### Extending FitBenchmarking Documentation[](#extending-fitbenchmarking-documentation)
FitBenchmarking is designed to be easily extendable to add new features to the software. Below we outline instructions for doing this.
##### Adding Fitting Problem Definition Types[](#adding-fitting-problem-definition-types)
The problem definition types we currently support are listed in the page [Problem Definition Files](index.html#problem-def).
To add a new fitting problem type, the parser name must be derivable from the file to be parsed.
For current file formats this is done either by including it as the first line in the file (e.g. `# FitBenchmark Problem` or `# NIST/ITL StRD`), or by checking the file extension.
To add a new fitting problem definition type, complete the following steps:
1. Give the format a name (`<format_name>`).
This should be a single word or string of alphanumeric characters,
and must be unique ignoring case.
2. Create a parser in the `fitbenchmarking/parsing` directory.
This parser must satisfy the following:
* The filename should be of the form `"<format_name>_parser.py"`
* The parser must be a subclass of the base parser, [`Parser`](index.html#fitbenchmarking.parsing.base_parser.Parser)
* The parser must implement the `parse(self)` method, which takes only `self`
and returns a populated [`FittingProblem`](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem).
Note: file opening and closing is handled automatically.
3. If the format is unable to accommodate the current convention of starting with the `<format_name>`, you will need to edit
[`ParserFactory`](index.html#fitbenchmarking.parsing.parser_factory.ParserFactory).
This should be done in such a way that the type is inferred from the file.
4. Document the parser (see [Problem Definition Files](index.html#problem-def)), being sure to include any licencing information.
5. Create the files to test the new parser.
Automated tests are run against the parsers in FitBenchmarking,
which work by using test files in
`fitbenchmarking/parsing/tests/<format_name>`.
In the `test_parsers.generate_test_cases()` function,
one needs to add the new parser’s name to the variable `formats`,
based on whether or not the parser is `pip` installable.
There are 2 types of test files needed:
* **Generic tests**: `fitbenchmarking/parsing/tests/expected/` contains
two files, `basic.json` and `start_end_x.json`.
You must write two input files in the new file format,
which will be parsed using the new parser to check that the entries
in the generated fitting problem match the values expected.
These must be called `basic.<ext>`, `start_end_x.<ext>`, where `<ext>`
is the extension of the new file format, and they must be placed in
`fitbenchmarking/parsing/tests/<format_name>/`.
* **Function tests**: A file named `function_evaluations.json`
must also be provided in
`fitbenchmarking/parsing/tests/<format_name>/`, which tests that the
function evaluation behaves as expected. This file must be in json format and
contain a string of the form:
```
{"file_name1": [[[x11,x12,...,x1n], [param11, param12,...,param1m], [result11,result12,...,result1n]],
[[x21,x22,...,x2n], [param21, param22,...,param2m], [result21,result22,...,result2n]],
...],
{"file_name2": [...],
...}
```
The test will then parse each file `file_name<x>` in turn and evaluate the function
at the given `x` values and `params`. If the result is not suitably close to
the specified value, the test will fail.
* **Jacobian tests**: *If the parser you add has analytic Jacobian
information*, then in `test_parsers.py` add
`<format_name>` to the `JACOBIAN_ENABLED_PARSERS` global variable.
Then add a file `jacobian_evaluations.json` to
`fitbenchmarking/parsing/tests/<format_name>/`, which tests that the Jacobian evaluation behaves as expected.
This file should have the same file structure as function_evaluations.json,
and works in a similar way.
* **Hessian tests**: *If the parser you add has analytic Hessian
information*, then in `test_parsers.py` add
`<format_name>` to the `HESSIAN_ENABLED_PARSERS` global variable.
Then add a file `hessian_evaluations.json` to
`fitbenchmarking/parsing/tests/<format_name>/`, which tests that the Hessian evaluation behaves as expected.
This file should have the same file structure as function_evaluations.json,
and works in a similar way.
* **Integration tests**: Add an example to the directory
`fitbenchmarking/test_files/all_parser_set/`.
This will be used to verify that the problem can be run by scipy, and that
accuracy results do not change unexpectedly in future updates.
If the software used for the new parser is pip-installable, and the
installation is done via FitBenchmarking’s `setup.py`, then add the
same example to `fitbenchmarking/test_files/default_parsers_set/`.
As part of this, the `systests/expected_results/all_parsers.csv` file, and if necessary the `systests/expected_results/default_parsers_set.csv` file,
will need to be updated. This is done by running the systests:
```
pytest fitbenchmarking/systests
```
and then checking that the only difference between the results table and the
expected value is the new problem, and updating the expected file with the result.
6. Verify that your tests have been found and are successful by running `pytest -vv fitbenchmarking/parsing/tests/test_parsers.py`.
Once the new parser is added, please add some examples that use this problem definition type following the instructions at [Adding More Data](index.html#adding-data).
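As a sketch of the shape such a parser takes, here is a standalone Python illustration for a hypothetical line-based `<format_name>`. A real parser would subclass [`Parser`](index.html#fitbenchmarking.parsing.base_parser.Parser) and return a populated `FittingProblem`; a plain dict stands in here so the sketch is self-contained:

```python
import ast

class ExampleParser:
    """Sketch of a parser for a hypothetical 'key = value' file format.
    In FitBenchmarking this would subclass Parser and build a
    FittingProblem; here a plain dict stands in for illustration."""

    def __init__(self, file_handle):
        # In FitBenchmarking, opening and closing the file is handled
        # automatically by the framework.
        self.file = file_handle

    def parse(self):
        problem = {}
        for line in self.file:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and the format-marker comment
            key, _, value = line.partition("=")
            problem[key.strip()] = ast.literal_eval(value.strip())
        return problem
```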
##### Adding More Data[](#adding-more-data)
We encourage users to contribute more data sets to FitBenchmarking;
the more data we have available to test against, the more useful FitBenchmarking is. First, please ensure that there is a
[parser](index.html#problem-def) available that can read your dataset, and if necessary follow the [instructions to add this functionality](index.html#parsers) to FitBenchmarking. Once this is done,
follow the steps below and add to a pull request to make the data available to others.
1. Create a directory that contains:
* the data sets to be included
* a file `META.txt` containing metadata about the dataset.
* a subfolder `data_files`, which contains any supplemental data needed by the data parser. We particularly encourage analytic derivative information, if available.
2. Update the [Benchmark problems](index.html#benchmarkproblems) page to include a description of the dataset. As well as information about the source of the data, this should include:
* information about how many parameters and how many data points are to be fitted in the dataset
* details of any external software that needs to be installed to load these datasets.
3. Create zip and tar.gz archives of these directories, and pass along to one of the core developers to put on the webspace. They will pass you a location of the dataset to include in the description page, and update the folder containing all examples to contain your data set.
4. If the data is to be distributed with the GitHub source, add the directory to the examples/benchmark_problems folder and commit to the repository. Please note that the maximum size of a file supported by GitHub is 100MB, and so datasets with files larger than this cannot be added to the repository. Also, GitHub recommends that the size of a Git repository is not more than 1GB.
##### Adding new cost functions[](#adding-new-cost-functions)
*This section describes how to add cost functions to benchmarking in FitBenchmarking*
In order to add a new cost function, `<cost_func>`,
you will need to:
1. Create `fitbenchmarking/cost_func/<cost_func>_cost_func.py`,
which contains a new subclass of
[`CostFunc`](index.html#fitbenchmarking.cost_func.base_cost_func.CostFunc).
Then implement the methods:
> * *abstract* CostFunc.eval_cost()
> Evaluate the cost function
> Parameters
> **params** (*list*) – The parameters to calculate residuals for
> Returns
> evaluated cost function
> Return type
> float
> * *abstract* CostFunc.jac_res()
> Uses the Jacobian of the model to evaluate the Jacobian of the
> cost function residual, \(\nabla_p r(x,y,p)\), at the
> given parameters.
> Parameters
> **params** (*list*) – The parameters at which to calculate Jacobians
> Returns
> evaluated Jacobian of the residual at each x, y pair
> Return type
> a list of 1D numpy arrays
> * *abstract* CostFunc.jac_cost()
> Uses the Jacobian of the model to evaluate the Jacobian of the
> cost function, \(\nabla_p F(r(x,y,p))\), at the given
> parameters.
> Parameters
> **params** (*list*) – The parameters at which to calculate Jacobians
> Returns
> evaluated Jacobian of the cost function
> Return type
> 1D numpy array
> * *abstract* CostFunc.hes_res()
> Uses the Hessian of the model to evaluate the Hessian of the
> cost function residual, \(\nabla_p^2 r(x,y,p)\), at the
> given parameters.
> Parameters
> **params** (*list*) – The parameters at which to calculate Hessians
> Returns
> evaluated Hessian and Jacobian of the residual at
> each x, y pair
> Return type
> tuple (list of 2D numpy arrays, list of 1D numpy arrays)
> * *abstract* CostFunc.hes_cost()
> Uses the Hessian of the model to evaluate the Hessian of the
> cost function, \(\nabla_p^2 F(r(x,y,p))\), at the given
> parameters.
> Parameters
> **params** (*list*) – The parameters at which to calculate Hessians
> Returns
> evaluated Hessian of the cost function
> Return type
> 2D numpy array
>
2. Document the available cost functions by:
> * adding `<cost_func>` to the `cost_func_type` option in [Fitting Options](index.html#fitting-option).
> * updating any example files in the `examples` directory
> * adding the new cost function to the [Cost functions](index.html#cost-func) user docs.
3. Create tests for the cost function in
`fitbenchmarking/cost_func/tests/test_cost_func.py`.
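The core of step 1 can be sketched as follows. This is a self-contained Python illustration that mirrors the `eval_cost` signature described above; `MockProblem` and `WeightedNLLSCostFunc` are stand-ins for illustration, not FitBenchmarking classes:

```python
import numpy as np

class MockProblem:
    """Stand-in for FittingProblem: straight-line model y = p0*x + p1."""
    def __init__(self, x, y, e=None):
        self.data_x, self.data_y, self.data_e = x, y, e

    def eval_model(self, params, **kwargs):
        return params[0] * self.data_x + params[1]

class WeightedNLLSCostFunc:
    """Sketch of a weighted nonlinear-least-squares cost function,
    mirroring the eval_cost interface described above."""
    def __init__(self, problem):
        self.problem = problem

    def eval_cost(self, params, **kwargs):
        # residuals, weighted by the errors when they are available
        r = self.problem.data_y - self.problem.eval_model(params)
        if self.problem.data_e is not None:
            r = r / self.problem.data_e
        return float(np.dot(r, r))

x = np.array([0.0, 1.0, 2.0])
problem = MockProblem(x, 2.0 * x + 1.0)
cost = WeightedNLLSCostFunc(problem)
# At the true parameters the residuals vanish
assert cost.eval_cost([2.0, 1.0]) == 0.0
```

A real subclass would also implement `jac_res`, `jac_cost`, `hes_res` and `hes_cost` against the same residual definition.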
###### The [`FittingProblem`](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem) and [`CostFunc`](index.html#fitbenchmarking.cost_func.base_cost_func.CostFunc) classes[](#the-fittingproblem-and-costfunc-classes)
When adding new cost functions, you will find it helpful to make use of the following members of the [`FittingProblem`](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem)
class.
*class* fitbenchmarking.parsing.fitting_problem.FittingProblem(*options*)
Definition of a fitting problem, which will be populated by a parser from a problem definition file.
Once populated, this should include the data, the function and any other additional requirements from the data.
data_e
*numpy array* The errors or weights
data_x
*numpy array* The x-data
data_y
*numpy array* The y-data
eval_model(*params*, ***kwargs*)
Function evaluation method
Parameters
**params** (*list*) – parameter value(s)
Returns data values evaluated from the function of the problem
Return type numpy array
You will also find it useful to implement the subclass members of
[`CostFunc`](index.html#fitbenchmarking.cost_func.base_cost_func.CostFunc),
[`Jacobian`](index.html#fitbenchmarking.jacobian.base_jacobian.Jacobian) and
[`Hessian`](index.html#fitbenchmarking.hessian.base_hessian.Hessian).
*class* fitbenchmarking.cost_func.base_cost_func.CostFunc(*problem*)
Base class for the cost functions.
*abstract* eval_cost(*params*, ***kwargs*)
Evaluate the cost function
Parameters
**params** (*list*) – The parameters to calculate residuals for
Returns evaluated cost function
Return type float
*abstract* hes_cost(*params*, ***kwargs*)
Uses the Hessian of the model to evaluate the Hessian of the cost function, \(\nabla_p^2 F(r(x,y,p))\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Hessians
Returns evaluated Hessian of the cost function
Return type 2D numpy array
*abstract* hes_res(*params*, ***kwargs*)
Uses the Hessian of the model to evaluate the Hessian of the cost function residual, \(\nabla_p^2 r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Hessians
Returns evaluated Hessian and Jacobian of the residual at each x, y pair
Return type tuple (list of 2D numpy arrays, list of 1D numpy arrays)
*abstract* jac_cost(*params*, ***kwargs*)
Uses the Jacobian of the model to evaluate the Jacobian of the cost function, \(\nabla_p F(r(x,y,p))\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Jacobians
Returns evaluated Jacobian of the cost function
Return type 1D numpy array
*abstract* jac_res(*params*, ***kwargs*)
Uses the Jacobian of the model to evaluate the Jacobian of the cost function residual, \(\nabla_p r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Jacobians
Returns evaluated Jacobian of the residual at each x, y pair
Return type a list of 1D numpy arrays
*class* fitbenchmarking.jacobian.base_jacobian.Jacobian(*problem*)
Base class for Jacobian.
*abstract* eval(*params*, ***kwargs*)
Evaluates Jacobian of the model, \(\nabla_p f(x,p)\),
at the point given by the parameters.
Parameters
**params** (*list*) – The parameter values at which to evaluate the Jacobian
Returns Computed Jacobian
Return type numpy array
*class* fitbenchmarking.hessian.base_hessian.Hessian(*problem*, *jacobian*)
Base class for Hessian.
*abstract* eval(*params*, ***kwargs*)
Evaluates Hessian of the model
Parameters
**params** (*list*) – The parameter values to find the Hessian at
Returns Computed Hessian
Return type 3D numpy array
##### Adding new Hessians[](#adding-new-hessians)
*This section describes how to add further methods to approximate the Hessian within FitBenchmarking*
In order to add a new Hessian evaluation method, `<hes_method>`,
you will need to:
1. Create `fitbenchmarking/hessian/<hes_method>_hessian.py`,
which contains a new subclass of
[`Hessian`](index.html#fitbenchmarking.hessian.base_hessian.Hessian).
Then implement the method:
> * *abstract* Hessian.eval()
> Evaluates Hessian of the model
> Parameters
> **params** (*list*) – The parameter values to find the Hessian at
> Returns
> Computed Hessian
> Return type
> 3D numpy array
The method is set sequentially within
`loop_over_hessians()` by using the `method` attribute of the class.
2. Enable the new method as an option in [Fitting Options](index.html#fitting-option),
following the instructions in [Adding new Options](index.html#options-extend). Specifically:
* Amend the `VALID_FITTING` dictionary so that the element associated
with the `hes_method` key contains the new `<hes_method>`.
3. Document the available Hessians by:
> * adding to the list of available `method` options under `hes_method` in [Fitting Options](index.html#fitting-option).
> * updating any example files in the `examples` directory
4. Create tests for the Hessian evaluation in
`fitbenchmarking/hessian/tests/test_hessians.py`.
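As an illustration of what an `eval()` method must produce, here is a central-difference sketch in plain Python. `numeric_hessian` is a hypothetical helper, not a FitBenchmarking method, but it returns the 3D array shape documented above (one parameter-by-parameter Hessian per data point):

```python
import numpy as np

def numeric_hessian(model, x, params, step=1e-4):
    """Hypothetical helper: central-difference Hessian of model(x, p)
    with respect to the parameters, returned with shape
    (n_params, n_params, n_x) as Hessian.eval() is documented to give."""
    p = np.asarray(params, dtype=float)
    n = p.size
    hess = np.empty((n, n, np.size(x)))
    for i in range(n):
        for j in range(n):
            # four symmetric evaluations around p for d^2f / dp_i dp_j
            pp, pm, mp, mm = p.copy(), p.copy(), p.copy(), p.copy()
            pp[i] += step; pp[j] += step
            pm[i] += step; pm[j] -= step
            mp[i] -= step; mp[j] += step
            mm[i] -= step; mm[j] -= step
            hess[i, j] = (model(x, pp) - model(x, pm)
                          - model(x, mp) + model(x, mm)) / (4 * step ** 2)
    return hess
```

A finite-difference scheme like this is what a numerical `<hes_method>` would wrap; an analytic method would instead evaluate the model's second derivatives directly.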
###### The [`FittingProblem`](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem) and [`Jacobian`](index.html#fitbenchmarking.jacobian.base_jacobian.Jacobian)[](#the-fittingproblem-and-jacobian)
When adding a new Hessian, you will find it helpful to make use of the following members of the [`FittingProblem`](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem)
and subclasses of [`Jacobian`](index.html#fitbenchmarking.jacobian.base_jacobian.Jacobian):
*class* fitbenchmarking.parsing.fitting_problem.FittingProblem(*options*)
Definition of a fitting problem, which will be populated by a parser from a problem definition file.
Once populated, this should include the data, the function and any other additional requirements from the data.
data_e
*numpy array* The errors or weights
data_x
*numpy array* The x-data
data_y
*numpy array* The y-data
eval_model(*params*, ***kwargs*)
Function evaluation method
Parameters
**params** (*list*) – parameter value(s)
Returns data values evaluated from the function of the problem
Return type numpy array
*class* fitbenchmarking.jacobian.base_jacobian.Jacobian(*problem*)
Base class for Jacobian.
*abstract* eval(*params*, ***kwargs*)
Evaluates Jacobian of the model, \(\nabla_p f(x,p)\),
at the point given by the parameters.
Parameters
**params** (*list*) – The parameter values at which to evaluate the Jacobian
Returns Computed Jacobian
Return type numpy array
##### Adding new Jacobians[](#adding-new-jacobians)
*FitBenchmarking allows the use of custom methods for evaluating the Jacobian of the model. This section describes how to add further methods within FitBenchmarking*
In order to add a new Jacobian evaluation method, `<jac_method>`,
you will need to:
1. Create `fitbenchmarking/jacobian/<jac_method>_jacobian.py`,
which contains a new subclass of
[`Jacobian`](index.html#fitbenchmarking.jacobian.base_jacobian.Jacobian).
Then implement the method:
> * *abstract* Jacobian.eval()
> Evaluates Jacobian of the model, \(\nabla_p f(x,p)\),
> at the point given by the parameters.
> Parameters
> **params** (*list*) – The parameter values at which to evaluate the Jacobian
> Returns
> Computed Jacobian
> Return type
> numpy array
The numerical method is set sequentially within
`loop_over_jacobians()` by using the `method` attribute of the class.
2. Enable the new method as an option in [Fitting Options](index.html#fitting-option),
following the instructions in [Adding new Options](index.html#options-extend). Specifically:
* Amend the `VALID_FITTING` dictionary so that the element associated
with the `jac_method` key contains the new `<jac_method>`.
* Extend the `VALID_JACOBIAN` dictionary to have a new
key `<jac_method>`, with the associated element being a list of
valid options for this Jacobian.
* Extend the `DEFAULT_JACOBIAN` dictionary to have a new key
`<jac_method>`, with the associated element being a subset of the
valid options added in `VALID_JACOBIAN` in the previous step.
* Amend the file `fitbenchmarking/utils/tests/test_options_jacobian.py` to
include tests for the new options.
3. Document the available Jacobians by:
> * adding a list of available `method` options to the docs for [Jacobian Options](index.html#jacobian-option),
> and including licencing information, if appropriate.
> * updating any example files in the `examples` directory
4. Create tests for the Jacobian evaluation in
`fitbenchmarking/jacobian/tests/test_jacobians.py`.
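The steps above can be made concrete with a minimal sketch of a numerical Jacobian using central differences. As with the Hessian sketch, the class name and constructor arguments are hypothetical; the real class would subclass [`Jacobian`](index.html#fitbenchmarking.jacobian.base_jacobian.Jacobian) and obtain the model through the cost function.

```python
import numpy as np

class CentralDiffJacobian:
    """Sketch of a Jacobian evaluator using central finite differences.

    Hypothetical stand-in: the real class would subclass
    fitbenchmarking.jacobian.base_jacobian.Jacobian; here the model and
    x-data are passed in directly so the example runs in isolation.
    """

    def __init__(self, model, x):
        self.model = model   # callable: model(x, params) -> numpy array
        self.x = x
        self.method = None   # set by loop_over_jacobians in the real code

    def eval(self, params, step=1e-7):
        """Return J with J[k, i] = d f(x_k, p) / dp_i at the given params."""
        p = np.asarray(params, dtype=float)
        cols = []
        for i in range(p.size):
            up, down = p.copy(), p.copy()
            up[i] += step
            down[i] -= step
            # central first difference in parameter i, one column per parameter
            cols.append(
                (self.model(self.x, up) - self.model(self.x, down)) / (2 * step)
            )
        return np.column_stack(cols)
```

The returned array has one row per data point and one column per parameter, matching the `numpy array` return type in the interface above.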
###### The [`FittingProblem`](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem) class[](#the-fittingproblem-class)
When adding a new Jacobian, you will find it helpful to make use of the following members of the [`FittingProblem`](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem)
class:
*class* fitbenchmarking.parsing.fitting_problem.FittingProblem(*options*)
Definition of a fitting problem, which will be populated by a parser from a problem definition file.
Once populated, this should include the data, the function and any other additional requirements from the data.
data_e
*numpy array* The errors or weights
data_x
*numpy array* The x-data
data_y
*numpy array* The y-data
eval_model(*params*, ***kwargs*)
Function evaluation method
Parameters
**params** (*list*) – parameter value(s)
Returns data values evaluated from the function of the problem
Return type numpy array
##### Adding Fitting Software[](#adding-fitting-software)
Controllers are used to interface FitBenchmarking with the various fitting packages. Controllers are responsible for converting the problem into a format that the fitting software can use, and converting the result back to a standardised format (numpy arrays). As well as this, the controller must be written so that the fitting is separated from the preparation wherever possible in order to give accurate timings for the fitting. Supported controllers are found in `fitbenchmarking/controllers/`.
In order to add a new controller, you will need to:
1. Give the software a name `<software_name>`. This will be used by users when selecting this software.
2. Create `fitbenchmarking/controllers/<software_name>_controller.py`
which contains a new subclass of
[`Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller).
Note
Please note that if the fitting package being added uses Matlab, then the new controller should also inherit from the mixin class
[`MatlabMixin`](index.html#fitbenchmarking.controllers.matlab_mixin.MatlabMixin).
*class* fitbenchmarking.controllers.matlab_mixin.MatlabMixin(*cost_func*)
Mixin class for matlab fitting software controllers
py_to_mat(*func*)
Get the named function from the matlab version of the cost function
Parameters
**func** (*str*) – The name of the function to retrieve
The new controller should implement four functions, as well as initializing the dictionary `algorithm_check`:
> * Controller.algorithm_check *= {'all': [], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': [], 'gauss_newton': [], 'general': [], 'global_optimization': [], 'levenberg-marquardt': [], 'ls': [], 'simplex': [], 'steepest_descent': [], 'trust_region': []}*
> Within the controller class, you must
> initialize a dictionary, `algorithm_check`,
> such that the **keys** are given by:
> > > > + `all` - all minimizers
> > + `ls` - least-squares fitting algorithms
> > + `deriv_free` - derivative free algorithms (these are algorithms
> > that cannot use information about derivatives – e.g., the
> > `Simplex` method in `Mantid`)
> > + `general` - minimizers which solve a generic min f(x)
> > + `simplex` - derivative free simplex based algorithms
> > e.g. Nelder-Mead
> > + `trust_region` - algorithms which employ a trust region approach
> > + `levenberg-marquardt` - minimizers that use the
> > Levenberg-Marquardt algorithm
> > + `gauss_newton` - minimizers that use the Gauss Newton algorithm
> > + `bfgs` - minimizers that use the BFGS algorithm
> > + `conjugate_gradient` - Conjugate Gradient algorithms
> > + `steepest_descent` - Steepest Descent algorithms
> > + `global_optimization` - Global Optimization algorithms
> > The **values** of the dictionary are given as a list of minimizers
> for that specific controller that fit into each of the above
> categories. See for example the `GSL` controller.
> * Controller.__init__()
> Initialise anything that is needed specifically for the
> software, do any work that can be done without knowledge of the
> minimizer to use, or function to fit, and call
> `super(<software_name>Controller, self).__init__(problem)`
> (the base class’s `__init__` implementation).
> Parameters
> **cost_func** (subclass of
> [`CostFunc`](index.html#fitbenchmarking.cost_func.base_cost_func.CostFunc)) – Cost function object selected from options.
> * *abstract* Controller.setup()
> Setup the specifics of the fitting.
> Anything needed for “fit” that can only be done after knowing the
> minimizer to use and the function to fit should be done here.
> Any variables needed should be saved to self (as class attributes).
> If a solver supports bounded problems, then this is where
> value_ranges should be set up for that specific solver. The default
> format is a list of tuples containing the lower and upper bounds
> for each parameter e.g. [(p1_lb, p1_ub), (p2_lb, p2_ub), …]
> * *abstract* Controller.fit()
> Run the fitting.
> This will be timed so should include only what is needed
> to fit the data.
> * *abstract* Controller.cleanup()
> Retrieve the result as a numpy array and store results.
> Convert the fitted parameters into a numpy array, saved to
> `self.final_params`, and store the error flag as `self.flag`.
> The flag corresponds to the following messages:
> flag()
> 0: Successfully converged
> 1: Software reported maximum number of iterations exceeded
> 2: Software run but didn’t converge to solution
> 3: Software raised an exception
> 4: Solver doesn’t support bounded problems
> 5: Solution doesn’t respect parameter bounds
> 6: Solver has exceeded maximum allowed runtime
> 7: Validation of the provided options failed
> > > By default, a controller does not accept Jacobian or Hessian information. If the controller
> > being added can use hessians and/or jacobians, then the following controller attributes should
> > be set:
> > > > * Controller.jacobian_enabled_solvers *= []*
> > Within the controller class, you must define the list
> > `jacobian_enabled_solvers` if any of the minimizers
> > for the specific software are able to use jacobian
> > information.
> > > > > + `jacobian_enabled_solvers`: a list of minimizers in a specific
> > software that allow Jacobian information to be passed
> > into the fitting algorithm
> > * Controller.hessian_enabled_solvers *= []*
> > Within the controller class, you must define the list
> > `hessian_enabled_solvers` if any of the minimizers
> > for the specific software are able to use hessian
> > information.
> > > > > + `hessian_enabled_solvers`: a list of minimizers in a specific
> > software that allow Hessian information to be passed
> > into the fitting algorithm
> > > 3. Add the new software to the default options, following the instructions in
[Adding new Options](index.html#options-extend).
Your new software is now fully hooked in with FitBenchmarking, and you can compare it with the current software. You are encouraged to contribute this to the repository so that others can use this package. To do this, you need to follow our
[Coding Standards](index.html#guidelines) and our [Git Workflow](index.html#workflow), and you’ll also need to
4. Document the available minimizers (see [Fitting Options](index.html#fitting-option), [Minimizer Options](index.html#minimizer-option)),
including licencing information, if appropriate.
Note: make sure that you use `<software_name>` in these places so that the software links in the HTML tables link correctly to the documentation.
Add the software to `examples/all_software.ini`.
You should also ensure that the available minimizers are categorised correctly in `self.algorithm_check`
using the [algorithm type](index.html#algorithm-type) options. Please refer to the [Optimization Algorithms](index.html#algorithms)
page for more information about each algorithm type.
5. Create tests for the software in
`fitbenchmarking/controllers/tests/test_controllers.py`. If the package is `pip` installable then add the tests to the `DefaultControllerTests` class and if not add to the `ExternalControllerTests` class.
Unless the new controller is more complicated than the currently available controllers, this can be done by following the example of the others.
6. If the software is deterministic, add the software to the regression tests in
`fitbenchmarking/systests/test_regression.py`.
7. If pip installable, add to `install_requires` in `setup.py` and add to the installation step in `.github/workflows/release.yml`.
If not, document the installation procedure in [Installing External Software](index.html#external-instructions)
and update the `FullInstall` Docker Container – the main developers will help you with this.
Note
For ease of maintenance, please add new controllers to a list of software in alphabetical order.
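To make the required structure concrete, here is a runnable toy that mirrors the `setup()`/`fit()`/`cleanup()` split and the `algorithm_check` dictionary described above. Everything in it is hypothetical: the class name, constructor arguments, and the closed-form one-parameter "solver" stand in for a real subclass of `Controller` wrapping external fitting software.

```python
class ToyController:
    """Sketch of the Controller interface for a hypothetical software.

    The real class would subclass
    fitbenchmarking.controllers.base_controller.Controller and receive a
    cost function object; the 'solver' here is a closed-form least-squares
    fit of y = p*x so the setup/fit/cleanup split can be shown in isolation.
    """

    algorithm_check = {
        "all": ["toy_ls"], "ls": ["toy_ls"], "deriv_free": [],
        "general": [], "simplex": [], "trust_region": [],
        "levenberg-marquardt": [], "gauss_newton": [], "bfgs": [],
        "conjugate_gradient": [], "steepest_descent": [],
        "global_optimization": [],
    }
    jacobian_enabled_solvers = []  # none of the toy minimizers use Jacobians
    hessian_enabled_solvers = []

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.final_params = None
        self.flag = None

    def setup(self):
        # Preparation that needs the function but should NOT be timed.
        self._sx2 = sum(v * v for v in self.x)
        self._sxy = sum(u * v for u, v in zip(self.x, self.y))

    def fit(self):
        # Only the actual solve goes here, as this section is timed.
        self._result = self._sxy / self._sx2  # argmin_p sum (p*x - y)^2

    def cleanup(self):
        # Convert the result to the standard format and set the error flag.
        self.final_params = [self._result]
        self.flag = 0  # 0: successfully converged
```

Keeping all preparation in `setup()` and all post-processing in `cleanup()` is what makes the timing of `fit()` meaningful.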
###### The [`FittingProblem`](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem), [`CostFunc`](index.html#fitbenchmarking.cost_func.base_cost_func.CostFunc) and [`Jacobian`](index.html#fitbenchmarking.jacobian.base_jacobian.Jacobian) classes[](#the-fittingproblem-costfunc-and-jacobian-classes)
When adding new minimizers, you will find it helpful to make use of the following members of the
[`FittingProblem`](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem), subclasses of
[`CostFunc`](index.html#fitbenchmarking.cost_func.base_cost_func.CostFunc) and subclasses of
[`Jacobian`](index.html#fitbenchmarking.jacobian.base_jacobian.Jacobian) classes:
*class* fitbenchmarking.parsing.fitting_problem.FittingProblem(*options*)
Definition of a fitting problem, which will be populated by a parser from a problem definition file.
Once populated, this should include the data, the function and any other additional requirements from the data.
data_e
*numpy array* The errors or weights
data_x
*numpy array* The x-data
data_y
*numpy array* The y-data
eval_model(*params*, ***kwargs*)
Function evaluation method
Parameters
**params** (*list*) – parameter value(s)
Returns data values evaluated from the function of the problem
Return type numpy array
set_value_ranges(*value_ranges*)
Function to format parameter bounds before passing to controllers,
so self.value_ranges is a list of tuples, which contain lower and upper bounds (lb,ub) for each parameter in the problem
Parameters
**value_ranges** (*dict*) –
dictionary of bounded parameter names with lower and upper bound values e.g.
`{p1_name: [p1_min, p1_max], ...}`
*class* fitbenchmarking.cost_func.base_cost_func.CostFunc(*problem*)
Base class for the cost functions.
*abstract* eval_cost(*params*, ***kwargs*)
Evaluate the cost function
Parameters
**params** (*list*) – The parameters to calculate residuals for
Returns evaluated cost function
Return type float
*class* fitbenchmarking.jacobian.base_jacobian.Jacobian(*problem*)
Base class for Jacobian.
*abstract* eval(*params*, ***kwargs*)
Evaluates Jacobian of the model, \(\nabla_p f(x,p)\),
at the point given by the parameters.
Parameters
**params** (*list*) – The parameter values at which to evaluate the Jacobian
Returns Computed Jacobian
Return type numpy array
##### Adding new Options[](#adding-new-options)
Default options are set by the class
[`Options`](index.html#fitbenchmarking.utils.options.Options), which is defined in the file fitbenchmarking/utils/options.py.
The options used can be changed using an .ini formatted file
([see here](https://docs.python.org/3/library/configparser.html#supported-ini-file-structure)). [FitBenchmarking Options](index.html#options) gives examples of how this is currently implemented in FitBenchmarking.
To add a new option to one of the five sections `FITTING`, `MINIMIZERS`,
`JACOBIAN`, `OUTPUT` and `LOGGING`, follow the steps below.
We’ll illustrate the steps using `<SECTION>`, which could be any of the sections above.
1. Amend the dictionary `DEFAULT_<SECTION>` in [`Options`](index.html#fitbenchmarking.utils.options.Options) to include any new default options.
2. If the option amended is to be checked for validity, add accepted option values to the `VALID_<SECTION>` dictionary in [`Options`](index.html#fitbenchmarking.utils.options.Options).
3. Using the [`read_value()`](index.html#fitbenchmarking.utils.options.Options.read_value) function,
add your new option to the class, following the examples already in
[`Options`](index.html#fitbenchmarking.utils.options.Options). The syntax of this function is:
Options.read_value(*func*, *option*, *additional_options*)
Helper function which loads in the value
Parameters
* **func** (*callable*) – configparser function
* **option** (*str*) – option to be read for file
* **additional_options** (*dict*) – A dictionary of options
input by the user into the command line.
Returns value of the option
Return type list/str/int/bool
4. Add tests in the following way:
> * Each of the sections has its own test file; for example,
> `test_option_fitting` has tests for the `FITTING` section.
> * Add default tests to the class called `<SECTION>OptionTests`.
> * Add user defined tests to the class called `User<SECTION>OptionTests`. These
> should check that the user added option is valid and raise an `OptionsError`
> if not.
>
5. Add relevant documentation for the new option in [FitBenchmarking Options](index.html#options).
Adding new Sections is also possible. To do this you’ll need to extend
`VALID_SECTIONS` with the new section, and follow the same structure as the other sections.
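The default/valid-dictionary pattern described in steps 1–3 can be sketched with the standard library's `configparser`. The dictionary contents and the helper below are hypothetical simplifications of the real `Options.read_value`, shown only to illustrate how a value is read, defaulted, and validated.

```python
import configparser

# Hypothetical stand-ins for the DEFAULT_<SECTION> / VALID_<SECTION>
# dictionaries in fitbenchmarking/utils/options.py.
DEFAULT_FITTING = {"hes_method": ["default"]}
VALID_FITTING = {"hes_method": ["default", "analytic", "my_new_method"]}

def read_option(config, section, option, default, valid):
    """Minimal analogue of Options.read_value for a list-valued option:
    fall back to the default, then check every entry is valid."""
    if config.has_option(section, option):
        value = [v.strip() for v in config.get(section, option).split(",")]
    else:
        value = default
    bad = [v for v in value if v not in valid]
    if bad:
        raise ValueError(f"invalid value(s) for {option}: {bad}")
    return value

cfg = configparser.ConfigParser()
cfg.read_string("[FITTING]\nhes_method: analytic, my_new_method\n")
methods = read_option(cfg, "FITTING", "hes_method",
                      DEFAULT_FITTING["hes_method"],
                      VALID_FITTING["hes_method"])
```

In the real class, invalid entries raise an `OptionsError` rather than a plain `ValueError`, which is what the `User<SECTION>OptionTests` classes check for.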
##### Amending FitBenchmarking Outputs[](#amending-fitbenchmarking-outputs)
Here we describe how to add ways of displaying results obtained by FitBenchmarking.
###### Adding further Tables[](#adding-further-tables)
The tables that are currently supported are listed in [FitBenchmarking Output](index.html#output).
In order to add a new table, you will need to:
1. Give the table a name `<table_name>`. This will be used by users when selecting this output from FitBenchmarking.
2. Create `fitbenchmarking/results_processing/<table_name>_table.py`
which contains a new subclass of
[`Table`](index.html#fitbenchmarking.results_processing.base_table.Table).
The main functions to change are:
* *abstract* Table.get_value(*result*)
Gets the main value to be reported in the tables for a given result
If more than one value is returned please note that the first value
will be used in the default colour handling.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to generate the values for.
Returns
The value to convert to a string for the tables
Return type
tuple(float)
* Table.display_str(*value*)
Converts a value generated by
[`get_value()`](index.html#fitbenchmarking.results_processing.base_table.Table.get_value)
into a string representation to be used in the tables.
Base class implementation takes
the relative and absolute values and uses `self.output_string_type`
as a template for the string format. This can be overridden to
adequately display the results.
Parameters
**value** (*tuple*) – Relative and absolute values
Returns
string representation of the value for display in the table.
Return type
str
Additional functions that may need to be overridden are:
* *static* Table.get_error_str(*result*, *error_template='[{}]'*)
Get the error string for a result based on error_template
This can be overridden if tables require different error formatting.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to get the error string for
Returns
A string representation of the error
Return type
str
* Table.get_link_str(*result*)
Get the link as a string for the result.
This can be overridden if tables require different links.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to get the link for
Returns
The link to go to when the cell is selected
Return type
string
* *static* Table.vals_to_colour(*vals*, *cmap*, *cmap_range*, *log_ulim*)
Converts an array of values to a list of hexadecimal colour
strings using logarithmic sampling from a matplotlib colourmap
according to relative value.
Parameters
+ **vals** (*list**[**float**]*) – values in the range [0, 1] to convert to colour strings
+ **cmap** (*matplotlib colourmap object*) – matplotlib colourmap
+ **cmap_range** (*list**[**float**]**,* *2 elements*) – values in range [0, 1] for colourmap cropping
+ **log_ulim** (*float*) – log10 of worst shading cutoff value
Returns
Colours as hex strings for each input value and
Foreground colours for the text as html rgb strings
e.g. ‘rgb(255, 255, 255)’
Return type
tuple[list[str], list[str]]
3. Extend the `table_type` option in `OUTPUT` following the instructions in
[Adding new Options](index.html#options-extend).
4. Document the new table class by setting the docstring to be the description of the table, and add it to [FitBenchmarking Output](index.html#output).
5. Create tests for the table in
`fitbenchmarking/results_processing/tests/test_tables.py`. This is done by generating, ahead of time using the results problems constructed in
`fitbenchmarking/results_processing/tests/test_tables.generate_test_files`, both a HTML and text table output as the expected result and adding the new table name to the global variable
`SORTED_TABLE_NAMES`. This will automatically run the comparison tests for the tables.
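A minimal sketch of the two main functions to change might look like the following; the class name is invented, and `result` is any object with `runtime` and `min_runtime` attributes, standing in for a real `FittingResult`, while the real class would subclass `Table`.

```python
class RuntimeRatioTable:
    """Sketch of a hypothetical new table reporting runtimes.

    The real class would subclass
    fitbenchmarking.results_processing.base_table.Table.
    """

    output_string_type = "{:.2f}"

    def get_value(self, result):
        # First returned value drives the default colour handling.
        rel = result.runtime / result.min_runtime
        return (rel, result.runtime)

    def display_str(self, value):
        # Combine absolute and relative values into one cell string.
        rel, absolute = value
        return f"{absolute:.2e} ({self.output_string_type.format(rel)})"
```

`get_error_str`, `get_link_str` and `vals_to_colour` would only need overriding if the new table requires non-default error markers, links, or shading.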
###### HTML/CSS Templates[](#html-css-templates)
In FitBenchmarking, templates are used to generate all html output files,
and can be found in the fitbenchmarking/templates directory.
####### HTML Templates[](#html-templates)
HTML templates allow for easily adaptable outputs.
In the simple case of rearranging the page or making static changes
(changes that don’t vary with results), this can be done by editing the template, and it will appear in every subsequent HTML page.
For more complicated changes such as adding information that is page dependent,
we use [jinja](https://jinja.palletsprojects.com/en/2.11.x/).
Jinja allows code to be added to templates which can contain conditional blocks, replicated blocks, or substitutions for values from python.
Changes to what information is displayed in the page will usually involve editing the Python source code to ensure you pass the appropriate values to jinja. In practice, the effort required can range from one line of code to many, depending on the object to be added.
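As a hypothetical illustration of the jinja constructs mentioned above, a template fragment might loop over a `results` list passed in from Python; the variable names `results`, `name` and `chi_sq` here are invented for the example and are not taken from the real templates.

```html
<!-- hypothetical fragment: values substituted by jinja from Python -->
<table>
  {% for result in results %}
  <tr>
    <td>{{ result.name }}</td>
    <td>{{ result.chi_sq }}</td>
  </tr>
  {% endfor %}
</table>
```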
####### CSS Templates[](#css-templates)
The CSS files contained in the templates directory are used to format the HTML pages.
If you wish to change the style of the output results
(e.g. to match a website), this is where they can be changed.
#### FitBenchmarking Contributor Documentation[](#fitbenchmarking-contributor-documentation)
Thank you for being a contributor to the FitBenchmarking project.
Here you will find all you need in order to get started.
##### Install Instructions for Contributors[](#install-instructions-for-contributors)
Please see below for the recommended install instructions for new contributors:
1. Before installing, it is recommended that you create an empty virtual environment. For more information about installing packages using virtual environments please see
[here](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
2. To ensure that you are working with the latest version of the code, you should install Fitbenchmarking from source. For details on how to do this, please see [Installing from source](index.html#installing-from-source).
Note
The editable flag should be used when installing from source, like
`pip install -e ...`, so that changes made to the cloned code are immediately reflected in the imported package.
3. So that you are able to run all tests locally, all extra dependencies and external software should be installed. Please see [Extra dependencies](index.html#extra-dependencies)
and [Installing External Software](index.html#external-instructions) for instructions on how to do this.
Note
Please note that if you are using WSL, Matlab is likely to be difficult to install so it is recommended that you use an alternative operating system if you are working on Matlab related issues.
Additionally, the tests run through Github do not currently include Matlab fitting software, so please ensure tests are run locally before submitting code.
4. You should check that the versions of packages such as pylint are up to date with those listed in requirements.txt. This can be done by running the command
`pip install -r requirements.txt` from within the `fitbenchmarking` directory.
##### Coding Standards[](#coding-standards)
All code submitted must meet certain standards, outlined below, before it can be merged into the master branch. It is the contributor’s job to ensure that the following is satisfied, and the reviewer’s role to check that these guidelines have been followed.
The workflow to be used for submitting new code/issues is described in
[Git Workflow](index.html#workflow).
###### Linting[](#linting)
All pull requests should be [PEP 8 compliant](https://www.python.org/dev/peps/pep-0008/).
We suggest running code through [flake8](https://flake8.pycqa.org/en/latest/) and
[pylint](https://www.pylint.org/) before submitting to check for this.
###### Documentation[](#documentation)
Any new code will be accepted only if the documentation, written in
[sphinx](https://www.sphinx-doc.org/en/master/) and found in docs/,
has been updated accordingly, and the docstrings in the code have been updated where necessary.
###### Testing[](#testing)
All tests should pass before submitting code.
Tests are written using [pytest](https://docs.pytest.org/en/stable/).
The following should be checked before any code is merged:
> * Function: Does the change do what it’s supposed to?
> * Tests: Does it pass? Is there adequate coverage for new code?
> * Style: Is the coding style consistent? Is anything overly confusing?
> * Documentation: Is there a suitable change to documentation for this change?
###### Logging[](#logging)
Code should use the logging in `utils.log`. This uses Python’s built in
[logging module](https://docs.python.org/3.8/library/logging.html),
and should be used in place of any print statements to ensure that persistent logs are kept after runs.
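As a sketch of the guideline, the snippet below uses Python's built-in logging in place of print statements; note that the real helper lives in `utils.log`, and plain `logging` is used here only so the example is self-contained.

```python
import logging

# Hypothetical stand-in for the logger obtained via utils.log in the
# real codebase; the logger name is illustrative.
LOGGER = logging.getLogger("fitbenchmarking.sketch")

def benchmark_all(problem_names):
    """Log progress for each problem instead of printing it."""
    for name in problem_names:
        LOGGER.info("Benchmarking %s", name)  # instead of print(...)
    return len(problem_names)
```

Using the module logger means output ends up in the persistent log files configured for the run, with timestamps and levels, rather than vanishing with stdout.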
##### Git Workflow[](#git-workflow)
###### Issues[](#issues)
All new work should start with a
[new GitHub issue](https://github.com/fitbenchmarking/fitbenchmarking/issues/new/choose)
being filed.
This should clearly explain what the change to the code will do.
There are templates for *Bug report*, *Documentation*,
*Feature request* and *Test* issues on GitHub, and you can also open a blank issue if none of these work.
If issues help meet a piece of work agreed with our funders, it is linked to the appropriate [Milestone](https://github.com/fitbenchmarking/fitbenchmarking/milestones) in GitHub.
###### Adding new code[](#adding-new-code)
The first step in adding new code is to create a branch, where the work will be done. Branches should be named according to the convention
<nnn>-description_of_work, where <nnn> is the issue number.
Please ensure our [Coding Standards](index.html#guidelines) are adhered to throughout the branch.
When you think your new code is ready to be merged into the codebase,
you should open a pull request (PR) to master.
The description should contain the words Fixes #<nnn>, where <nnn> is the issue number; this will ensure the issue is closed when the code is merged into master. At this point the automated tests will trigger, and you can see if the code passes on an independent system.
Sometimes it is desirable to open a PR when the code is not quite ready to be merged. This is a good idea, for example, if you want to get an early opinion on a coding decision. If this is the case, you should mark the PR as a *draft* on GitHub.
Once the work is ready to be reviewed, you may want to assign a reviewer,
if you think someone would be well suited to review this change. It is worth messaging them on, for example, Slack, as well as requesting their review on GitHub.
###### Release branches[](#release-branches)
Branches named release-* are protected branches; code must be approved by a reviewer before being added to them, and automated tests will be run on PRs to these branches. If code is to be included in the release, it must be pulled into this branch from master.
Release branches should have the format release-major.minor.x, starting from release-0.1.x. When the code is released, we will tag that commit with a version number v0.1.0. Any hotfixes will increment x by one, and a new tag will be created accordingly. If at some point we don’t want to provide hot-fixes to a given minor release, then the corresponding release branch may be deleted.
All changes must be initially merged into master.
There is a backport-candidate label, which must be put on PRs that in addition must be merged into the release branch.
The recommended mechanism for merging PRs labeled with backport-candidate into master is to use the Squash and merge commit option.
After such a PR (with label backport-candidate) has been merged into master, it must then subsequently be merged into the release branch as soon as possible.
It is the responsibility of the person merging such PRs to also perform this merge into the release branch.
This can be done using git cherry pick:
```
git checkout release-x.x.x
git cherry-pick -x <commit-id>
git push
```
If you didn’t do a squash merge, you will have to cherry-pick each commit in the PR that is being backported separately.
If you encounter problems with cherry-picking into the release branch, please don’t hesitate to speak to an experienced member of the FitBenchmarking team.
###### Creating a release[](#creating-a-release)
In order to create a new release for FitBenchmarking, there are a few manual steps.
These have been streamlined as much as possible.
First checkout the branch to create the release from.
Releases should only be made from a release-x-x branch, not a development branch or master.
From the root of the repo run the “ci/prep_and_tag_release.sh” script with the new version number.
The version number will be rejected if it is not of the expected form.
We expect a “v” followed by the major, minor, and patch numbers,
and an optional numbered label to mark the type of release.
Possible labels are:
* -beta (release for testing)
* -rc (release candidate)
This script will create a new commit with the docs and testing links updated, tag it,
and revert the change in a second commit so that the links point back to the latest versions.
These commits and the tag will need to be pushed to github.
If the release is a final release (i.e., it has no numbered label) and it is the highest numbered release, the tag should be cherry-picked onto the master branch. Instructions for this should be output after running the script.
Finally, you will need to create a release on github.
This can be done by navigating to the releases page, selecting new release and typing in the tag that was given to the release
(it should tell you the tag exists at this point!).
For example, For a first beta version of release 0.1.0, one would run:
```
git checkout release-0.1.x
ci/prep_and_tag_release.sh v0.1.0-beta1
git push origin release-0.1.x
git push origin v0.1.0-beta1
<And make the release on GitHub>
```
And for after the version is tested and ready for a final release, one would run:
```
git checkout release-0.1.x
ci/prep_and_tag_release.sh v0.1.0
git push origin release-0.1.x
git push origin v0.1.0
git switch master
git cherry-pick v0.1.0
git push
<And make the release on GitHub>
```
###### Adding New Datasets[](#adding-new-datasets)
Users or developers are encouraged to add new data sets following the instructions [here](index.html#adding-data). Someone in SCD’s Computational Mathematics Group must make this publicly available by:
* adding the zip and tar.gz archives to powell:/var/www/html/numerical-www/fitbenchmarking/
* adding the datasets to the master examples.zip and examples.tar.gz folders, and updating the versions on powell
Please note that the maximum file size allowed by GitHub is 100MB, and the total repository size is recommended to be kept below 1GB. Please bear this in mind when advising users whether or not they should also add their data to the examples/benchmark_problems directory of FitBenchmarking.
##### Repository Structure[](#repository-structure)
At the root of the repository there are six directories:
> * build
> * ci
> * Docker
> * docs
> * examples
> * fitbenchmarking
###### Build (`build`)[](#build-build)
This directory contains scripts to allow for installing packages such as Mantid through setuptools.
###### CI (`ci`)[](#ci-ci)
We use [GitHub Actions](https://github.com/fitbenchmarking/fitbenchmarking/actions)
to run our Continuous Integration tests.
The specific tests run are defined in a series of Bash scripts,
which are stored in this folder.
###### Docker (`Docker`)[](#docker-docker)
The continuous integration process on GitHub Actions currently runs on a Docker container,
and this directory holds the Dockerfiles. The Docker containers are hosted on Dockerhub.
`BasicInstall` holds the Dockerfile that is pushed to the repository `fitbenchmarking/fitbenchmarking-deps`, the latest of which should have the tag `latest`. This contains a basic Ubuntu install, with just the minimal infrastructure needed to run the tests.
`FullInstall` holds the Dockerfile that is pushed to the repository `fitbenchmarking/fitbenchmarking-extras`, the latest of which should have the tag `latest`. This is built on top of the basic container, and includes optional third party software that FitBenchmarking can work with.
The versions on Docker Hub can be updated from a connected account by issuing the commands:
```
docker build --tag fitbenchmarking-<type>:<tag> .
docker tag fitbenchmarking-<type>:<tag> fitbenchmarking/fitbenchmarking-<type>:<tag>
docker push fitbenchmarking/fitbenchmarking-<type>:<tag>
```
where `<type>` is, e.g., `deps` or `extras`, and `<tag>` is, e.g., `latest`.
###### Documentation (`docs`)[](#documentation-docs)
The documentation for FitBenchmarking is stored in this folder under
`source`.
A local copy of the documentation can be built using `make html` in the
`build` directory.
###### Examples (`examples`)[](#examples-examples)
Examples is used to store sample problem files and options files.
A collection of problem files can be found organised into datasets within the
`examples/benchmark_problems/` directory.
An options template file and a prewritten options file to run a full set of minimizers is also made available in the `examples/` directory.
###### FitBenchmarking Package ([`fitbenchmarking`](index.html#module-fitbenchmarking))[](#fitbenchmarking-package-fitbenchmarking)
The main FitBenchmarking package is split across several directories with the intention that it is easily extensible.
The majority of these directories are source code, with exceptions being Templates, Test Files, and System Tests.
Each directory that contains source code also has a subdirectory called
`tests`, which contains all of the tests for that section of the code.
####### Benchmark Problems (`benchmark_problems`)[](#benchmark-problems-benchmark-problems)
This is a copy of the NIST benchmark problems from examples/benchmark_problems.
These are the default problems that are run if no problem is passed to the
`fitbenchmarking` command, and are copied here so that they are distributed with the package when installed using, say, pip.
####### CLI ([`cli`](index.html#module-fitbenchmarking.cli))[](#cli-cli)
The CLI directory is used to define all of the entry points into the software.
Currently this is just the main fitbenchmarking command.
####### Controllers ([`controllers`](index.html#module-fitbenchmarking.controllers))[](#controllers-controllers)
In FitBenchmarking, controllers are used to interface with third party minimizers.
The controllers directory holds a base controller class
([`Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)) and all its subclasses,
each of which of which interfaces with a different fitting package. The controllers currently implemented are described in [Fitting Options](index.html#fitting-option) and [Minimizer Options](index.html#minimizer-option).
New controllers can be added by following the instructions in [Adding Fitting Software](index.html#controllers).
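The setup/fit/cleanup lifecycle that controllers follow can be illustrated with a small, self-contained sketch. `ToyController` below is hypothetical and stands in for a real subclass of `Controller`; the actual base class lives in `fitbenchmarking/controllers/base_controller.py` and has more requirements than shown here.

```python
class ToyController:
    """Hypothetical controller illustrating the setup -> fit -> cleanup
    pattern described above (not part of FitBenchmarking)."""

    def __init__(self, data):
        self.data = data
        self.final_params = None
        self.flag = None

    def setup(self):
        # Translate the problem into whatever the third-party package needs.
        self.initial_params = [0.0]

    def fit(self):
        # Call the third-party minimizer; here a trivial "fit" to the mean.
        self.result = sum(self.data) / len(self.data)

    def cleanup(self):
        # Store results in the attributes the framework reads back.
        self.final_params = [self.result]
        self.flag = 0  # 0: successfully converged

controller = ToyController([1.0, 2.0, 3.0])
controller.setup()
controller.fit()
controller.cleanup()
print(controller.final_params)  # [2.0]
```

The real framework times only `fit()`, which is why `setup()` must do all translation work up front and `cleanup()` handles result extraction afterwards.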
####### Core ([`core`](index.html#module-fitbenchmarking.core))[](#core-core)
This directory holds all code central to FitBenchmarking.
For example, this manages calling the correct parser and controller, as well as compiling the results into a data object.
####### Hessian (`Hessian`)[](#hessian-hessian)
This directory holds the [`Hessian`](index.html#fitbenchmarking.hessian.base_hessian.Hessian) class,
and subclasses, which are used by the controllers to approximate second derivatives.
Currently available options are described in [Fitting Options](index.html#fitting-option), and new Hessians can be added by following the instructions in [Adding new Hessians](index.html#hessian-extend).
####### Jacobian ([`jacobian`](index.html#module-fitbenchmarking.jacobian))[](#jacobian-jacobian)
This directory holds the [`Jacobian`](index.html#fitbenchmarking.jacobian.base_jacobian.Jacobian) class,
and subclasses, which are used by the controllers to approximate derivatives.
Currently available options are described in [Jacobian Options](index.html#jacobian-option), and new numerical Jacobians can be added by following the instructions in
[Adding new Jacobians](index.html#jacobian-extend).
####### Test Files (`test_files`)[](#test-files-test-files)
The test files are used in some tests where full problem files are required.
These are here so that the examples can be moved without breaking the tests.
####### Parsing ([`parsing`](index.html#module-fitbenchmarking.parsing))[](#parsing-parsing)
The parsers read raw data into a format that FitBenchmarking can use.
This directory holds a base parser,
[`Parser`](index.html#fitbenchmarking.parsing.base_parser.Parser) and all its subclasses.
Each subclass implements a parser for a specific file format.
Information about existing parsers can be found in [Problem Definition Files](index.html#problem-def), and see [Adding Fitting Problem Definition Types](index.html#parsers) for instructions on extending these.
####### Results Processing ([`results_processing`](index.html#module-fitbenchmarking.results_processing))[](#results-processing-results-processing)
All files that are used to generate output are stored here.
This includes index pages, text/html tables, plots, and support pages.
Information about the tables we provide can be found in
[FitBenchmarking Output](index.html#output), and instructions on how to add further tables and change the formatting of the displayed information can be found in [Amending FitBenchmarking Outputs](index.html#extending-outputs).
####### System Tests (`systests`)[](#system-tests-systests)
FitBenchmarking runs regression tests to check that the accuracy results do not change with updates to the code.
These tests run fitbenchmarking against a subset of problems
(in subdirectories of /fitbenchmarking/test_files/),
and compare the text output with that stored in
/fitbenchmarking/systests/expected_results/.
####### Templates (`templates`)[](#templates-templates)
Files in Templates are used to create the resulting html pages, and are a combination of css, html, and python files.
The python files in this directory are scripts to update the css and html assets.
Instructions on updating these can be found in [HTML/CSS Templates](index.html#templates).
####### Utils ([`utils`](index.html#module-fitbenchmarking.utils))[](#utils-utils)
This directory contains utility functions that do not fit into the above sections.
This includes the [`Options`](index.html#fitbenchmarking.utils.options.Options)
class (see [Adding new Options](index.html#options-extend) to extend)
and [`FittingResult`](index.html#fitbenchmarking.utils.fitbm_result.FittingResult) class,
as well as functions for logging and directory creation.
##### fitbenchmarking package[](#fitbenchmarking-package)
###### Subpackages[](#subpackages)
####### fitbenchmarking.cli package[](#fitbenchmarking-cli-package)
######## Submodules[](#submodules)
######### fitbenchmarking.cli.checkpoint_handler module[](#module-fitbenchmarking.cli.checkpoint_handler)
This is a command line tool for handling checkpoint files for the FitBenchmarking software package.
For more information on usage type `fitbenchmarking --help` or for more general information, see the online docs at docs.fitbenchmarking.com.
fitbenchmarking.cli.checkpoint_handler.generate_report(*options_file=''*, *additional_options=None*, *debug=False*)[](#fitbenchmarking.cli.checkpoint_handler.generate_report)
Generate the fitting reports and tables for a checkpoint file.
Parameters
* **options_file** (*str**,* *optional*) – Path to an options file, defaults to ‘’
* **additional_options** (*dict**,* *optional*) – Extra options for the reporting.
Available keys are:
> filename (str): The checkpoint file to use.
fitbenchmarking.cli.checkpoint_handler.get_parser() → argparse.ArgumentParser[](#fitbenchmarking.cli.checkpoint_handler.get_parser)
Creates and returns a parser for the args.
Returns Configured argument parser
Return type ArgumentParser
fitbenchmarking.cli.checkpoint_handler.main()[](#fitbenchmarking.cli.checkpoint_handler.main)
Entry point exposed as the fitbenchmarking-cp command.
######### fitbenchmarking.cli.exception_handler module[](#module-fitbenchmarking.cli.exception_handler)
This file holds an exception handler decorator that should wrap all cli functions to provide cleaner output.
fitbenchmarking.cli.exception_handler.exception_handler(*f*)[](#fitbenchmarking.cli.exception_handler.exception_handler)
Decorator to simplify handling exceptions within FitBenchmarking. This will strip off any ‘debug’ inputs.
Parameters
**f** (*python function*) – The function to wrap
######### fitbenchmarking.cli.main module[](#fitbenchmarking-cli-main-module)
######## Module contents[](#module-fitbenchmarking.cli)
####### fitbenchmarking.controllers package[](#fitbenchmarking-controllers-package)
######## Submodules[](#submodules)
######### fitbenchmarking.controllers.base_controller module[](#module-fitbenchmarking.controllers.base_controller)
Implements the base class for the fitting software controllers.
*class* fitbenchmarking.controllers.base_controller.Controller(*cost_func*)[](#fitbenchmarking.controllers.base_controller.Controller)
Bases: `object`
Base class for all fitting software controllers.
These controllers are intended to be the only interface into the fitting software, and should do so by implementing the abstract methods defined here.
VALID_FLAGS *= [0, 1, 2, 3, 4, 5, 6, 7]*[](#fitbenchmarking.controllers.base_controller.Controller.VALID_FLAGS)
algorithm_check *= {'all': [], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': [], 'gauss_newton': [], 'general': [], 'global_optimization': [], 'levenberg-marquardt': [], 'ls': [], 'simplex': [], 'steepest_descent': [], 'trust_region': []}*[](#fitbenchmarking.controllers.base_controller.Controller.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
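The shape of such a dictionary, and the category check that `validate_minimizer` performs against it, can be sketched as follows. The minimizer names (`toy_lm`, `toy_simplex`) are invented for illustration and do not belong to any real software package.

```python
# Hypothetical algorithm_check dictionary for an imaginary software
# package; every key listed in the docs above must be present.
algorithm_check = {
    'all': ['toy_lm', 'toy_simplex'],
    'ls': ['toy_lm'],
    'deriv_free': ['toy_simplex'],
    'general': [],
    'simplex': ['toy_simplex'],
    'trust_region': ['toy_lm'],
    'levenberg-marquardt': ['toy_lm'],
    'gauss_newton': [],
    'bfgs': [],
    'conjugate_gradient': [],
    'steepest_descent': [],
    'global_optimization': [],
}

def minimizer_matches(minimizer, algorithm_types):
    """Return True if the minimizer falls into any requested category,
    in the spirit of the validate_minimizer check described below."""
    return any(minimizer in algorithm_check[t] for t in algorithm_types)

print(minimizer_matches('toy_lm', ['ls']))       # True
print(minimizer_matches('toy_simplex', ['ls']))  # False
```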
check_attributes()[](#fitbenchmarking.controllers.base_controller.Controller.check_attributes)
A helper function which checks all required attributes are set in software controllers
check_bounds_respected()[](#fitbenchmarking.controllers.base_controller.Controller.check_bounds_respected)
Check whether the selected minimizer has respected parameter bounds
check_minimizer_bounds(*minimizer*)[](#fitbenchmarking.controllers.base_controller.Controller.check_minimizer_bounds)
Helper function which checks whether the selected minimizer from the options (options.minimizer) supports problems with parameter bounds
Parameters
**minimizer** (*str*) – string of minimizers selected from the options
*abstract* cleanup()[](#fitbenchmarking.controllers.base_controller.Controller.cleanup)
Retrieve the result as a numpy array and store results.
Convert the fitted parameters into a numpy array, saved to
`self.final_params`, and store the error flag as `self.flag`.
The flag corresponds to the following messages:
flag()
0: Successfully converged
1: Software reported maximum number of iterations exceeded
2: Software ran but didn’t converge to a solution
3: Software raised an exception
4: Solver doesn’t support bounded problems
5: Solution doesn’t respect parameter bounds
6: Solver has exceeded maximum allowed runtime
7: Validation of the provided options failed
controller_name *= None*[](#fitbenchmarking.controllers.base_controller.Controller.controller_name)
A name to be used in tables. If this is set to None it will be inferred from the class name.
eval_chisq(*params*, *x=None*, *y=None*, *e=None*)[](#fitbenchmarking.controllers.base_controller.Controller.eval_chisq)
Computes the chisq value
Parameters
* **params** (*list*) – The parameters to calculate residuals for
* **x** (*numpy array**,* *optional*) – x data points, defaults to self.data_x
* **y** (*numpy array**,* *optional*) – y data points, defaults to self.data_y
* **e** (*numpy array**,* *optional*) – error at each data point, defaults to self.data_e
Returns The sum of squares of residuals for the datapoints at the given parameters
Return type numpy array
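The quantity `eval_chisq` computes is the (optionally error-weighted) sum of squared residuals. A pure-Python sketch of that calculation, with an invented model and data (the real method operates on numpy arrays and the problem’s stored data), might look like:

```python
def chisq(model, params, x, y, e=None):
    """Illustrative weighted sum-of-squares, in the spirit of eval_chisq."""
    residuals = [yi - model(xi, params) for xi, yi in zip(x, y)]
    if e is not None:
        # Weight each residual by the error at that data point.
        residuals = [r / ei for r, ei in zip(residuals, e)]
    return sum(r * r for r in residuals)

linear = lambda x, p: p[0] * x + p[1]
x, y = [0.0, 1.0, 2.0], [1.0, 3.0, 5.0]
print(chisq(linear, [2.0, 1.0], x, y))  # 0.0 for a perfect fit
```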
execute()[](#fitbenchmarking.controllers.base_controller.Controller.execute)
Starts and stops the timer used to check if the fit reaches the ‘max_runtime’. In the middle, it calls self.fit().
*abstract* fit()[](#fitbenchmarking.controllers.base_controller.Controller.fit)
Run the fitting.
This will be timed so should include only what is needed to fit the data.
*property* flag[](#fitbenchmarking.controllers.base_controller.Controller.flag)
0: Successfully converged
1: Software reported maximum number of iterations exceeded
2: Software ran but didn’t converge to a solution
3: Software raised an exception
4: Solver doesn’t support bounded problems
5: Solution doesn’t respect parameter bounds
6: Solver has exceeded maximum allowed runtime
7: Validation of the provided options failed
hessian_enabled_solvers *= []*[](#fitbenchmarking.controllers.base_controller.Controller.hessian_enabled_solvers)
Within the controller class, you must define the list
`hessian_enabled_solvers` if any of the minimizers for the specific software are able to use hessian information.
* `hessian_enabled_solvers`: a list of minimizers in a specific
software that allow Hessian information to be passed into the fitting algorithm
incompatible_problems *= []*[](#fitbenchmarking.controllers.base_controller.Controller.incompatible_problems)
A list of incompatible problem formats for this controller.
jacobian_enabled_solvers *= []*[](#fitbenchmarking.controllers.base_controller.Controller.jacobian_enabled_solvers)
Within the controller class, you must define the list
`jacobian_enabled_solvers` if any of the minimizers for the specific software are able to use jacobian information.
* `jacobian_enabled_solvers`: a list of minimizers in a specific
software that allow Jacobian information to be passed into the fitting algorithm
prepare()[](#fitbenchmarking.controllers.base_controller.Controller.prepare)
Check that function and minimizer have been set.
If both have been set, run self.setup().
record_alg_type(*minimizer*, *algorithm_type*)[](#fitbenchmarking.controllers.base_controller.Controller.record_alg_type)
Helper function which records the algorithm types of the selected minimizer that match those chosen in options
Parameters
* **minimizer** (*str*) – string of minimizers selected from the options
* **algorithm_type** (*list*) – the algorithm type selected from the options
*abstract* setup()[](#fitbenchmarking.controllers.base_controller.Controller.setup)
Setup the specifics of the fitting.
Anything needed for “fit” that can only be done after knowing the minimizer to use and the function to fit should be done here.
Any variables needed should be saved to self (as class attributes).
If a solver supports bounded problems, then this is where value_ranges should be set up for that specific solver. The default format is a list of tuples containing the lower and upper bounds for each parameter e.g. [(p1_lb, p1_ub), (p2_lb, p2_ub), …]
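The default `value_ranges` format described above, and the kind of check that `check_bounds_respected` performs against it, can be sketched as follows (the bounds and parameter values are invented for illustration):

```python
# One (lower, upper) tuple per parameter: [(p1_lb, p1_ub), (p2_lb, p2_ub)]
value_ranges = [(0.0, 10.0), (-1.0, 1.0)]

def bounds_respected(params, bounds):
    """Return True if every fitted parameter lies within its bounds,
    in the spirit of check_bounds_respected (illustrative code)."""
    return all(lb <= p <= ub for p, (lb, ub) in zip(params, bounds))

print(bounds_respected([5.0, 0.5], value_ranges))   # True
print(bounds_respected([11.0, 0.5], value_ranges))  # False
```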
*property* software[](#fitbenchmarking.controllers.base_controller.Controller.software)
Return the name of the software.
This assumes the class is named ‘<software>Controller’
validate() → None[](#fitbenchmarking.controllers.base_controller.Controller.validate)
Validates that the provided options are compatible with each other.
If there are some invalid options, the relevant exception is raised.
validate_minimizer(*minimizer*, *algorithm_type*)[](#fitbenchmarking.controllers.base_controller.Controller.validate_minimizer)
Helper function which checks that the selected minimizer from the options (options.minimizer) exists and whether the minimizer is in self.algorithm_check[options.algorithm_type] (this is a list set in the controller)
Parameters
* **minimizer** (*str*) – string of minimizers selected from the options
* **algorithm_type** (*list*) – the algorithm type selected from the options
######### fitbenchmarking.controllers.bumps_controller module[](#module-fitbenchmarking.controllers.bumps_controller)
Implements a controller for the Bumps fitting software.
*class* fitbenchmarking.controllers.bumps_controller.BumpsController(*cost_func*)[](#fitbenchmarking.controllers.bumps_controller.BumpsController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the Bumps fitting software.
Sasview requires a model to fit.
Setup creates a model with the correct function.
algorithm_check *= {'all': ['amoeba', 'lm-bumps', 'newton', 'de', 'scipy-leastsq', 'dream'], 'bfgs': ['newton'], 'conjugate_gradient': [], 'deriv_free': ['amoeba', 'de', 'dream'], 'gauss_newton': [], 'general': ['amoeba', 'newton', 'de', 'dream'], 'global_optimization': ['de', 'dream'], 'levenberg-marquardt': ['lm-bumps', 'scipy-leastsq'], 'ls': ['lm-bumps', 'scipy-leastsq'], 'simplex': ['amoeba'], 'steepest_descent': [], 'trust_region': ['lm-bumps', 'scipy-leastsq']}*[](#fitbenchmarking.controllers.bumps_controller.BumpsController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.bumps_controller.BumpsController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from.
fit()[](#fitbenchmarking.controllers.bumps_controller.BumpsController.fit)
Run problem with Bumps.
setup()[](#fitbenchmarking.controllers.bumps_controller.BumpsController.setup)
Setup problem ready to run with Bumps.
Creates a FitProblem for calling in the fit() function of Bumps
######### fitbenchmarking.controllers.ceres_controller module[](#module-fitbenchmarking.controllers.ceres_controller)
Implements a controller for the Ceres fitting software.
*class* fitbenchmarking.controllers.ceres_controller.CeresController(*cost_func*)[](#fitbenchmarking.controllers.ceres_controller.CeresController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for Ceres Solver
algorithm_check *= {'all': ['Levenberg_Marquardt', 'Dogleg', 'BFGS', 'LBFGS', 'steepest_descent', 'Fletcher_Reeves', 'Polak_Ribiere', 'Hestenes_Stiefel'], 'bfgs': ['BFGS', 'LBFGS'], 'conjugate_gradient': ['Fletcher_Reeves', 'Polak_Ribiere', 'Hestenes_Stiefel'], 'deriv_free': [], 'gauss_newton': [], 'general': [], 'global_optimization': [], 'levenberg-marquardt': [], 'ls': ['Levenberg_Marquardt', 'Dogleg', 'BFGS', 'LBFGS', 'steepest_descent', 'Fletcher_Reeves', 'Polak_Ribiere', 'Hestenes_Stiefel'], 'simplex': [], 'steepest_descent': ['steepest_descent'], 'trust_region': ['Levenberg_Marquardt', 'Dogleg']}*[](#fitbenchmarking.controllers.ceres_controller.CeresController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.ceres_controller.CeresController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from
fit()[](#fitbenchmarking.controllers.ceres_controller.CeresController.fit)
Run problem with Ceres solver
jacobian_enabled_solvers *= ['Levenberg_Marquardt', 'Dogleg', 'BFGS', 'LBFGS', 'steepest_descent', 'Fletcher_Reeves', 'Polak_Ribiere', 'Hestenes_Stiefel']*[](#fitbenchmarking.controllers.ceres_controller.CeresController.jacobian_enabled_solvers)
Within the controller class, you must define the list
`jacobian_enabled_solvers` if any of the minimizers for the specific software are able to use jacobian information.
* `jacobian_enabled_solvers`: a list of minimizers in a specific
software that allow Jacobian information to be passed into the fitting algorithm
setup()[](#fitbenchmarking.controllers.ceres_controller.CeresController.setup)
Setup problem ready to be run with Ceres solver
*class* fitbenchmarking.controllers.ceres_controller.CeresCostFunction(**args: Any*, ***kwargs: Any*)[](#fitbenchmarking.controllers.ceres_controller.CeresCostFunction)
Bases: `PyCeres.CostFunction`
Cost function for Ceres solver
Evaluate(*parameters*, *residuals*, *jacobians*)[](#fitbenchmarking.controllers.ceres_controller.CeresCostFunction.Evaluate)
Evaluate for Ceres solver
######### fitbenchmarking.controllers.controller_factory module[](#module-fitbenchmarking.controllers.controller_factory)
This file contains a factory implementation for the controllers.
This is used to manage the imports and reduce effort in adding new controllers.
*class* fitbenchmarking.controllers.controller_factory.ControllerFactory[](#fitbenchmarking.controllers.controller_factory.ControllerFactory)
Bases: `object`
A factory for creating software controllers.
This has the capability to select the correct controller, import it, and generate an instance of it.
Controllers generated from this must be a subclass of base_controller.Controller
*static* create_controller(*software*)[](#fitbenchmarking.controllers.controller_factory.ControllerFactory.create_controller)
Create a controller that matches the required software.
Parameters
**software** (*string*) – The name of the software to create a controller for
Returns Controller class for the problem
Return type fitbenchmarking.fitting.base_controller.Controller subclass
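The idea behind the factory, mapping a software name onto a class that follows the `<Software>Controller` naming convention, can be sketched with a toy registry. The real factory imports the class from the `fitbenchmarking.controllers` package at runtime rather than using a hard-coded dictionary; everything below is illustrative.

```python
# Toy stand-ins for controller classes (the real ones subclass Controller).
class ScipyController: ...
class BumpsController: ...

REGISTRY = {cls.__name__: cls for cls in (ScipyController, BumpsController)}

def create_controller(software):
    """Return the controller class matching the requested software,
    mirroring the '<Software>Controller' naming convention."""
    name = software.capitalize() + 'Controller'
    try:
        return REGISTRY[name]
    except KeyError:
        raise ValueError(f'No controller found for software: {software}')

print(create_controller('scipy').__name__)  # ScipyController
```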
######### fitbenchmarking.controllers.dfo_controller module[](#module-fitbenchmarking.controllers.dfo_controller)
Implements a controller for DFO-GN
<http://people.maths.ox.ac.uk/robertsl/dfogn/>
*class* fitbenchmarking.controllers.dfo_controller.DFOController(*cost_func*)[](#fitbenchmarking.controllers.dfo_controller.DFOController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the DFO-{GN/LS} fitting software.
algorithm_check *= {'all': ['dfogn', 'dfols'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': ['dfogn', 'dfols'], 'gauss_newton': ['dfogn'], 'general': [], 'global_optimization': [], 'levenberg-marquardt': [], 'ls': ['dfogn', 'dfols'], 'simplex': [], 'steepest_descent': [], 'trust_region': ['dfols', 'dfogn']}*[](#fitbenchmarking.controllers.dfo_controller.DFOController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.dfo_controller.DFOController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from.
fit()[](#fitbenchmarking.controllers.dfo_controller.DFOController.fit)
Run problem with DFO.
setup()[](#fitbenchmarking.controllers.dfo_controller.DFOController.setup)
Setup for DFO
######### fitbenchmarking.controllers.gofit_controller module[](#module-fitbenchmarking.controllers.gofit_controller)
Implements a controller for global optimization algorithms.
*class* fitbenchmarking.controllers.gofit_controller.GOFitController(*cost_func*)[](#fitbenchmarking.controllers.gofit_controller.GOFitController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for GOFit multistart global optimisation algorithms
algorithm_check *= {'all': ['alternating', 'multistart', 'regularisation'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': [], 'gauss_newton': ['regularisation'], 'general': [], 'global_optimization': ['alternating', 'multistart'], 'levenberg-marquardt': ['regularisation'], 'ls': ['alternating', 'multistart', 'regularisation'], 'simplex': [], 'steepest_descent': [], 'trust_region': []}*[](#fitbenchmarking.controllers.gofit_controller.GOFitController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.gofit_controller.GOFitController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from
fit()[](#fitbenchmarking.controllers.gofit_controller.GOFitController.fit)
Run problem with GoFit
jacobian_enabled_solvers *= ['multistart', 'regularisation']*[](#fitbenchmarking.controllers.gofit_controller.GOFitController.jacobian_enabled_solvers)
Within the controller class, you must define the list
`jacobian_enabled_solvers` if any of the minimizers for the specific software are able to use jacobian information.
* `jacobian_enabled_solvers`: a list of minimizers in a specific
software that allow Jacobian information to be passed into the fitting algorithm
setup()[](#fitbenchmarking.controllers.gofit_controller.GOFitController.setup)
Setup problem ready to be run with GoFit
######### fitbenchmarking.controllers.gradient_free_controller module[](#module-fitbenchmarking.controllers.gradient_free_controller)
Implements a controller for Gradient Free Optimizers
*class* fitbenchmarking.controllers.gradient_free_controller.GradientFreeController(*cost_func*)[](#fitbenchmarking.controllers.gradient_free_controller.GradientFreeController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the Gradient Free Optimizers fitting software.
algorithm_check *= {'all': ['HillClimbingOptimizer', 'RepulsingHillClimbingOptimizer', 'SimulatedAnnealingOptimizer', 'RandomSearchOptimizer', 'RandomRestartHillClimbingOptimizer', 'RandomAnnealingOptimizer', 'ParallelTemperingOptimizer', 'ParticleSwarmOptimizer', 'EvolutionStrategyOptimizer', 'BayesianOptimizer', 'TreeStructuredParzenEstimators', 'DecisionTreeOptimizer'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': ['HillClimbingOptimizer', 'RepulsingHillClimbingOptimizer', 'SimulatedAnnealingOptimizer', 'RandomSearchOptimizer', 'RandomRestartHillClimbingOptimizer', 'RandomAnnealingOptimizer', 'ParallelTemperingOptimizer', 'ParticleSwarmOptimizer', 'EvolutionStrategyOptimizer', 'BayesianOptimizer', 'TreeStructuredParzenEstimators', 'DecisionTreeOptimizer'], 'gauss_newton': [], 'general': ['HillClimbingOptimizer', 'RepulsingHillClimbingOptimizer', 'SimulatedAnnealingOptimizer', 'RandomSearchOptimizer', 'RandomRestartHillClimbingOptimizer', 'RandomAnnealingOptimizer', 'ParallelTemperingOptimizer', 'ParticleSwarmOptimizer', 'EvolutionStrategyOptimizer', 'BayesianOptimizer', 'TreeStructuredParzenEstimators', 'DecisionTreeOptimizer'], 'global_optimization': ['HillClimbingOptimizer', 'RepulsingHillClimbingOptimizer', 'SimulatedAnnealingOptimizer', 'RandomSearchOptimizer', 'RandomRestartHillClimbingOptimizer', 'RandomAnnealingOptimizer', 'ParallelTemperingOptimizer', 'ParticleSwarmOptimizer', 'EvolutionStrategyOptimizer', 'BayesianOptimizer', 'TreeStructuredParzenEstimators', 'DecisionTreeOptimizer'], 'levenberg-marquardt': [], 'ls': [], 'simplex': [], 'steepest_descent': [], 'trust_region': []}*[](#fitbenchmarking.controllers.gradient_free_controller.GradientFreeController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
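As a concrete illustration, a hypothetical controller could populate the dictionary as follows. The minimizer name `my_lm_solver` is invented for the example; real controllers list the actual minimizer names exposed by their fitting software.

```python
# Hypothetical sketch of an algorithm_check dictionary; "my_lm_solver"
# is an invented name, not a real minimizer.
ALGORITHM_KEYS = [
    "all", "ls", "deriv_free", "general", "simplex", "trust_region",
    "levenberg-marquardt", "gauss_newton", "bfgs", "conjugate_gradient",
    "steepest_descent", "global_optimization",
]

# Every key must be present; categories the software does not cover
# are left as empty lists.
algorithm_check = {key: [] for key in ALGORITHM_KEYS}

# Register the invented minimizer under each category it belongs to.
for key in ("all", "ls", "levenberg-marquardt"):
    algorithm_check[key].append("my_lm_solver")
```

A minimizer typically appears under `all` plus every category that describes it, as in the attribute value shown above.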
cleanup()[](#fitbenchmarking.controllers.gradient_free_controller.GradientFreeController.cleanup)
Convert the result to a numpy array and populate the variables that results will be read from.
controller_name *= 'gradient_free'*[](#fitbenchmarking.controllers.gradient_free_controller.GradientFreeController.controller_name)
A name to be used in tables. If this is set to None it will be inferred from the class name.
fit()[](#fitbenchmarking.controllers.gradient_free_controller.GradientFreeController.fit)
Run problem with Gradient Free Optimizers
setup()[](#fitbenchmarking.controllers.gradient_free_controller.GradientFreeController.setup)
Setup for Gradient Free Optimizers
######### fitbenchmarking.controllers.gsl_controller module[](#module-fitbenchmarking.controllers.gsl_controller)
Implements a controller for GSL
<https://www.gnu.org/software/gsl/>
using the pyGSL python interface
<https://sourceforge.net/projects/pygsl/>

*class* fitbenchmarking.controllers.gsl_controller.GSLController(*cost_func*)[](#fitbenchmarking.controllers.gsl_controller.GSLController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the GSL fitting software
algorithm_check *= {'all': ['lmsder', 'lmder', 'nmsimplex', 'nmsimplex2', 'conjugate_pr', 'conjugate_fr', 'vector_bfgs', 'vector_bfgs2', 'steepest_descent'], 'bfgs': ['vector_bfgs', 'vector_bfgs2'], 'conjugate_gradient': ['conjugate_fr', 'conjugate_pr'], 'deriv_free': ['nmsimplex', 'nmsimplex2'], 'gauss_newton': [], 'general': ['nmsimplex', 'nmsimplex2', 'conjugate_pr', 'conjugate_fr', 'vector_bfgs', 'vector_bfgs2', 'steepest_descent'], 'global_optimization': [], 'levenberg-marquardt': ['lmder', 'lmsder'], 'ls': ['lmsder', 'lmder'], 'simplex': ['nmsimplex', 'nmsimplex2'], 'steepest_descent': ['steepest_descent'], 'trust_region': ['lmder', 'lmsder']}*[](#fitbenchmarking.controllers.gsl_controller.GSLController.algorithm_check)
The keys and values follow the standard `algorithm_check` format described under `GradientFreeController.algorithm_check` above.
cleanup()[](#fitbenchmarking.controllers.gsl_controller.GSLController.cleanup)
Convert the result to a numpy array and populate the variables that results will be read from.
fit()[](#fitbenchmarking.controllers.gsl_controller.GSLController.fit)
Run problem with GSL
jacobian_enabled_solvers *= ['lmsder', 'lmder', 'conjugate_pr', 'conjugate_fr', 'vector_bfgs', 'vector_bfgs2', 'steepest_descent']*[](#fitbenchmarking.controllers.gsl_controller.GSLController.jacobian_enabled_solvers)
Within the controller class, you must define the list
`jacobian_enabled_solvers` if any of the minimizers for the specific software are able to use Jacobian information.
* `jacobian_enabled_solvers`: a list of minimizers in a specific
software that allow Jacobian information to be passed into the fitting algorithm
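For illustration, the snippet below shows how code driving a benchmark might consult this list. The solver names are copied from the GSL attribute value above; the helper function itself is invented and is not part of the fitbenchmarking API.

```python
# Solver names copied from GSLController.jacobian_enabled_solvers;
# the helper is illustrative only, not a fitbenchmarking function.
jacobian_enabled_solvers = [
    "lmsder", "lmder", "conjugate_pr", "conjugate_fr",
    "vector_bfgs", "vector_bfgs2", "steepest_descent",
]

def can_use_jacobian(minimizer: str) -> bool:
    # Jacobian information is only passed to minimizers in the list.
    return minimizer in jacobian_enabled_solvers
```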
setup()[](#fitbenchmarking.controllers.gsl_controller.GSLController.setup)
Setup for GSL
######### fitbenchmarking.controllers.horace_controller module[](#module-fitbenchmarking.controllers.horace_controller)
Implements a controller to the lm implementation in Herbert/Horace
*class* fitbenchmarking.controllers.horace_controller.HoraceController(*cost_func*)[](#fitbenchmarking.controllers.horace_controller.HoraceController)
Bases: [`fitbenchmarking.controllers.matlab_mixin.MatlabMixin`](index.html#fitbenchmarking.controllers.matlab_mixin.MatlabMixin), [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for fit in Herbert
algorithm_check *= {'all': ['lm-lsqr'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': ['lm-lsqr'], 'gauss_newton': [], 'general': [], 'global_optimization': [], 'levenberg-marquardt': ['lm-lsqr'], 'ls': ['lm-lsqr'], 'simplex': [], 'steepest_descent': [], 'trust_region': []}*[](#fitbenchmarking.controllers.horace_controller.HoraceController.algorithm_check)
The keys and values follow the standard `algorithm_check` format described under `GradientFreeController.algorithm_check` above.
cleanup()[](#fitbenchmarking.controllers.horace_controller.HoraceController.cleanup)
Convert the result to a numpy array and populate the variables that results will be read from.
fit()[](#fitbenchmarking.controllers.horace_controller.HoraceController.fit)
Run problem with Horace
horace_on()[](#fitbenchmarking.controllers.horace_controller.HoraceController.horace_on)
Turn Horace on in MATLAB.
incompatible_problems *= ['mantid']*[](#fitbenchmarking.controllers.horace_controller.HoraceController.incompatible_problems)
A list of incompatible problem formats for this controller.
setup()[](#fitbenchmarking.controllers.horace_controller.HoraceController.setup)
Setup for Matlab fitting
######### fitbenchmarking.controllers.levmar_controller module[](#module-fitbenchmarking.controllers.levmar_controller)
Implements a controller for the levmar fitting software.
<http://www.ics.forth.gr/~lourakis/levmar/>
via the python interface
<https://pypi.org/project/levmar/>

*class* fitbenchmarking.controllers.levmar_controller.LevmarController(*cost_func*)[](#fitbenchmarking.controllers.levmar_controller.LevmarController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the levmar fitting software
algorithm_check *= {'all': ['levmar'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': [], 'gauss_newton': [], 'general': [], 'global_optimization': [], 'levenberg-marquardt': ['levmar'], 'ls': ['levmar'], 'simplex': [], 'steepest_descent': [], 'trust_region': ['levmar']}*[](#fitbenchmarking.controllers.levmar_controller.LevmarController.algorithm_check)
The keys and values follow the standard `algorithm_check` format described under `GradientFreeController.algorithm_check` above.
cleanup()[](#fitbenchmarking.controllers.levmar_controller.LevmarController.cleanup)
Convert the result to a numpy array and populate the variables that results will be read from.
fit()[](#fitbenchmarking.controllers.levmar_controller.LevmarController.fit)
Run problem with levmar.
jacobian_enabled_solvers *= ['levmar']*[](#fitbenchmarking.controllers.levmar_controller.LevmarController.jacobian_enabled_solvers)
See the `jacobian_enabled_solvers` description under `GSLController.jacobian_enabled_solvers` above.
setup()[](#fitbenchmarking.controllers.levmar_controller.LevmarController.setup)
Setup problem ready to be run with levmar
######### fitbenchmarking.controllers.lmfit_controller module[](#module-fitbenchmarking.controllers.lmfit_controller)
Implements a controller for the lmfit fitting software.
*class* fitbenchmarking.controllers.lmfit_controller.LmfitController(*cost_func*)[](#fitbenchmarking.controllers.lmfit_controller.LmfitController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for lmfit
algorithm_check *= {'all': ['differential_evolution', 'powell', 'cobyla', 'slsqp', 'emcee', 'nelder', 'least_squares', 'trust-ncg', 'trust-exact', 'trust-krylov', 'trust-constr', 'dogleg', 'leastsq', 'newton', 'tnc', 'lbfgsb', 'bfgs', 'cg', 'ampgo', 'shgo', 'dual_annealing'], 'bfgs': ['lbfgsb', 'bfgs'], 'conjugate_gradient': ['cg', 'newton', 'powell'], 'deriv_free': ['powell', 'cobyla', 'emcee', 'nelder', 'differential_evolution'], 'gauss_newton': ['newton', 'tnc'], 'general': ['nelder', 'powell', 'cg', 'bfgs', 'newton', 'lbfgs', 'tnc', 'slsqp', 'differential_evolution', 'shgo', 'dual_annealing'], 'global_optimization': ['differential_evolution', 'ampgo', 'shgo', 'dual_annealing'], 'levenberg-marquardt': ['leastsq'], 'ls': ['least_squares', 'leastsq'], 'simplex': ['nelder'], 'steepest_descent': [], 'trust_region': ['least_squares', 'trust-ncg', 'trust-exact', 'trust-krylov', 'trust-constr', 'dogleg']}*[](#fitbenchmarking.controllers.lmfit_controller.LmfitController.algorithm_check)
The keys and values follow the standard `algorithm_check` format described under `GradientFreeController.algorithm_check` above.
cleanup()[](#fitbenchmarking.controllers.lmfit_controller.LmfitController.cleanup)
Convert the result to a numpy array and populate the variables that results will be read from.
fit()[](#fitbenchmarking.controllers.lmfit_controller.LmfitController.fit)
Run problem with lmfit
hessian_enabled_solvers *= ['newton', 'dogleg', 'trust-constr', 'trust-ncg', 'trust-krylov', 'trust-exact']*[](#fitbenchmarking.controllers.lmfit_controller.LmfitController.hessian_enabled_solvers)
Within the controller class, you must define the list
`hessian_enabled_solvers` if any of the minimizers for the specific software are able to use Hessian information.
* `hessian_enabled_solvers`: a list of minimizers in a specific
software that allow Hessian information to be passed into the fitting algorithm
jacobian_enabled_solvers *= ['cg', 'bfgs', 'newton', 'lbfgsb', 'tnc', 'slsqp', 'dogleg', 'trust-ncg', 'trust-krylov', 'trust-exact']*[](#fitbenchmarking.controllers.lmfit_controller.LmfitController.jacobian_enabled_solvers)
See the `jacobian_enabled_solvers` description under `GSLController.jacobian_enabled_solvers` above.
lmfit_jacobians(*params*)[](#fitbenchmarking.controllers.lmfit_controller.LmfitController.lmfit_jacobians)
Evaluate Jacobians for lmfit.
lmfit_resdiuals(*params*)[](#fitbenchmarking.controllers.lmfit_controller.LmfitController.lmfit_resdiuals)
Evaluate residuals for lmfit.
setup()[](#fitbenchmarking.controllers.lmfit_controller.LmfitController.setup)
Setup problem ready to be run with lmfit
######### fitbenchmarking.controllers.mantid_controller module[](#module-fitbenchmarking.controllers.mantid_controller)
Implements a controller for the Mantid fitting software.
*class* fitbenchmarking.controllers.mantid_controller.MantidController(*cost_func*)[](#fitbenchmarking.controllers.mantid_controller.MantidController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the Mantid fitting software.
Mantid requires subscribing a custom function in a predefined format,
so this controller creates that in setup.
COST_FUNCTION_MAP *= {'nlls': 'Unweighted least squares', 'poisson': 'Poisson', 'weighted_nlls': 'Least squares'}*[](#fitbenchmarking.controllers.mantid_controller.MantidController.COST_FUNCTION_MAP)
A map from fitbenchmarking cost functions to mantid ones.
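The map's contents are shown in the attribute value above; translating a fitbenchmarking cost function name into the string Mantid expects is a plain dictionary lookup. The values below are copied from `COST_FUNCTION_MAP` above.

```python
# Values copied from MantidController.COST_FUNCTION_MAP above.
COST_FUNCTION_MAP = {
    "nlls": "Unweighted least squares",
    "poisson": "Poisson",
    "weighted_nlls": "Least squares",
}

# Look up the Mantid cost-function name for a fitbenchmarking one.
mantid_name = COST_FUNCTION_MAP["weighted_nlls"]
```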
algorithm_check *= {'all': ['BFGS', 'Conjugate gradient (Fletcher-Reeves imp.)', 'Conjugate gradient (Polak-Ribiere imp.)', 'Damped GaussNewton', 'Levenberg-Marquardt', 'Levenberg-MarquardtMD', 'Simplex', 'SteepestDescent', 'Trust Region', 'FABADA'], 'bfgs': ['BFGS'], 'conjugate_gradient': ['Conjugate gradient (Fletcher-Reeves imp.)', 'Conjugate gradient (Polak-Ribiere imp.)'], 'deriv_free': ['Simplex', 'FABADA'], 'gauss_newton': ['Damped GaussNewton'], 'general': ['BFGS', 'Conjugate gradient (Fletcher-Reeves imp.)', 'Conjugate gradient (Polak-Ribiere imp.)', 'Damped GaussNewton', 'Simplex', 'SteepestDescent'], 'global_optimization': ['FABADA'], 'levenberg-marquardt': ['Levenberg-Marquardt', 'Levenberg-MarquardtMD'], 'ls': ['Levenberg-Marquardt', 'Levenberg-MarquardtMD', 'Trust Region', 'FABADA'], 'simplex': ['Simplex'], 'steepest_descent': ['SteepestDescent'], 'trust_region': ['Trust Region', 'Levenberg-Marquardt', 'Levenberg-MarquardtMD']}*[](#fitbenchmarking.controllers.mantid_controller.MantidController.algorithm_check)
The keys and values follow the standard `algorithm_check` format described under `GradientFreeController.algorithm_check` above.
cleanup()[](#fitbenchmarking.controllers.mantid_controller.MantidController.cleanup)
Convert the result to a numpy array and populate the variables that results will be read from.
eval_chisq(*params*, *x=None*, *y=None*, *e=None*)[](#fitbenchmarking.controllers.mantid_controller.MantidController.eval_chisq)
Computes the chisq value.
If using multi-fit, the inputs will be lists, and this will return a list of the chi squared values for params[i], x[i], y[i], and e[i].
Parameters
* **params** (*list of float* *or* *list of list of float*) – The parameters to calculate residuals for
* **x** (*numpy array* *or* *list of numpy arrays**,* *optional*) – x data points, defaults to self.data_x
* **y** (*numpy array* *or* *list of numpy arrays**,* *optional*) – y data points, defaults to self.data_y
* **e** (*numpy array* *or* *list of numpy arrays**,* *optional*) – error at each data point, defaults to self.data_e
Returns The sum of squares of residuals for the datapoints at the given parameters
Return type numpy array
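The quantity returned is the familiar weighted sum of squared residuals. The minimal NumPy stand-in below illustrates the definition only; the model and data are invented for the example, and this bypasses the Mantid-backed implementation entirely.

```python
import numpy as np

def chisq(params, x, y, e, model):
    """Weighted sum of squared residuals at the given parameters
    (illustrative stand-in for eval_chisq; not the Mantid-backed code)."""
    residuals = (y - model(x, *params)) / e
    return np.sum(residuals ** 2)

# Invented straight-line example: y = m*x + c
def line(x, m, c):
    return m * x + c

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.1, 1.9, 4.2])
e = np.ones_like(y)
value = chisq([2.0, 0.0], x, y, e, line)  # residuals: 0.1, -0.1, 0.2
```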
fit()[](#fitbenchmarking.controllers.mantid_controller.MantidController.fit)
Run problem with Mantid.
jacobian_enabled_solvers *= ['BFGS', 'Conjugate gradient (Fletcher-Reeves imp.)', 'Conjugate gradient (Polak-Ribiere imp.)', 'Damped GaussNewton', 'Levenberg-Marquardt', 'Levenberg-MarquardtMD', 'SteepestDescent', 'Trust Region']*[](#fitbenchmarking.controllers.mantid_controller.MantidController.jacobian_enabled_solvers)
See the `jacobian_enabled_solvers` description under `GSLController.jacobian_enabled_solvers` above.
setup()[](#fitbenchmarking.controllers.mantid_controller.MantidController.setup)
Setup problem ready to run with Mantid.
Adds a custom function to Mantid for calling in fit().
######### fitbenchmarking.controllers.matlab_controller module[](#module-fitbenchmarking.controllers.matlab_controller)
Implements a controller for MATLAB
*class* fitbenchmarking.controllers.matlab_controller.MatlabController(*cost_func*)[](#fitbenchmarking.controllers.matlab_controller.MatlabController)
Bases: [`fitbenchmarking.controllers.matlab_mixin.MatlabMixin`](index.html#fitbenchmarking.controllers.matlab_mixin.MatlabMixin), [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for MATLAB fitting (fminsearch)
algorithm_check *= {'all': ['Nelder-Mead Simplex'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': ['Nelder-Mead Simplex'], 'gauss_newton': [], 'general': ['Nelder-Mead Simplex'], 'global_optimization': [], 'levenberg-marquardt': [], 'ls': [], 'simplex': ['Nelder-Mead Simplex'], 'steepest_descent': [], 'trust_region': []}*[](#fitbenchmarking.controllers.matlab_controller.MatlabController.algorithm_check)
The keys and values follow the standard `algorithm_check` format described under `GradientFreeController.algorithm_check` above.
cleanup()[](#fitbenchmarking.controllers.matlab_controller.MatlabController.cleanup)
Convert the result to a numpy array and populate the variables that results will be read from.
fit()[](#fitbenchmarking.controllers.matlab_controller.MatlabController.fit)
Run problem with Matlab
incompatible_problems *= ['mantid']*[](#fitbenchmarking.controllers.matlab_controller.MatlabController.incompatible_problems)
A list of incompatible problem formats for this controller.
setup()[](#fitbenchmarking.controllers.matlab_controller.MatlabController.setup)
Setup for Matlab fitting
######### fitbenchmarking.controllers.matlab_curve_controller module[](#module-fitbenchmarking.controllers.matlab_curve_controller)
Implements a controller for MATLAB Curve Fitting Toolbox
*class* fitbenchmarking.controllers.matlab_curve_controller.MatlabCurveController(*cost_func*)[](#fitbenchmarking.controllers.matlab_curve_controller.MatlabCurveController)
Bases: [`fitbenchmarking.controllers.matlab_mixin.MatlabMixin`](index.html#fitbenchmarking.controllers.matlab_mixin.MatlabMixin), [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for MATLAB Curve Fitting Toolbox fitting (fit)
algorithm_check *= {'all': ['Trust-Region', 'Levenberg-Marquardt'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': [], 'gauss_newton': [], 'general': [], 'global_optimization': [], 'levenberg-marquardt': ['Levenberg-Marquardt'], 'ls': ['Trust-Region', 'Levenberg-Marquardt'], 'simplex': [], 'steepest_descent': [], 'trust_region': ['Trust-Region', 'Levenberg-Marquardt']}*[](#fitbenchmarking.controllers.matlab_curve_controller.MatlabCurveController.algorithm_check)
The keys and values follow the standard `algorithm_check` format described under `GradientFreeController.algorithm_check` above.
cleanup()[](#fitbenchmarking.controllers.matlab_curve_controller.MatlabCurveController.cleanup)
Convert the result to a numpy array and populate the variables that results will be read from.
controller_name *= 'matlab_curve'*[](#fitbenchmarking.controllers.matlab_curve_controller.MatlabCurveController.controller_name)
A name to be used in tables. If this is set to None it will be inferred from the class name.
fit()[](#fitbenchmarking.controllers.matlab_curve_controller.MatlabCurveController.fit)
Run problem with Matlab
incompatible_problems *= ['mantid', 'horace']*[](#fitbenchmarking.controllers.matlab_curve_controller.MatlabCurveController.incompatible_problems)
A list of incompatible problem formats for this controller.
setup()[](#fitbenchmarking.controllers.matlab_curve_controller.MatlabCurveController.setup)
Setup for Matlab fitting
######### fitbenchmarking.controllers.matlab_mixin module[](#module-fitbenchmarking.controllers.matlab_mixin)
Implements mixin class for the matlab fitting software controllers.
*class* fitbenchmarking.controllers.matlab_mixin.MatlabMixin(*cost_func*)[](#fitbenchmarking.controllers.matlab_mixin.MatlabMixin)
Bases: `object`
Mixin class for matlab fitting software controllers
clear_matlab()[](#fitbenchmarking.controllers.matlab_mixin.MatlabMixin.clear_matlab)
Clear the matlab instance, ready for the next setup.
py_to_mat(*func*)[](#fitbenchmarking.controllers.matlab_mixin.MatlabMixin.py_to_mat)
Get the named function from the matlab version of the cost function
Parameters
**func** (*str*) – The name of the function to retrieve
setup_timer()[](#fitbenchmarking.controllers.matlab_mixin.MatlabMixin.setup_timer)
Create an interface into the timer associated with the cost function.
The timer will be created in the matlab engine as “timer” and must not be overridden, although this can be changed in this function if there’s a conflict.
This overrides the controller’s timer so that it can be controlled from other parts of the code.
*class* fitbenchmarking.controllers.matlab_mixin.MatlabTimerInterface(*timer*)[](#fitbenchmarking.controllers.matlab_mixin.MatlabTimerInterface)
Bases: `object`
A timer to convert from matlab to the python timer.
######### fitbenchmarking.controllers.matlab_opt_controller module[](#module-fitbenchmarking.controllers.matlab_opt_controller)
Implements a controller for MATLAB Optimization Toolbox
*class* fitbenchmarking.controllers.matlab_opt_controller.MatlabOptController(*cost_func*)[](#fitbenchmarking.controllers.matlab_opt_controller.MatlabOptController)
Bases: [`fitbenchmarking.controllers.matlab_mixin.MatlabMixin`](index.html#fitbenchmarking.controllers.matlab_mixin.MatlabMixin), [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for MATLAB Optimization Toolbox, implementing lsqcurvefit
algorithm_check *= {'all': ['levenberg-marquardt', 'trust-region-reflective'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': [], 'gauss_newton': [], 'general': [], 'global_optimization': [], 'levenberg-marquardt': ['levenberg-marquardt'], 'ls': ['levenberg-marquardt', 'trust-region-reflective'], 'simplex': [], 'steepest_descent': [], 'trust_region': ['levenberg-marquardt', 'trust-region-reflective']}*[](#fitbenchmarking.controllers.matlab_opt_controller.MatlabOptController.algorithm_check)
The keys and values follow the standard `algorithm_check` format described under `GradientFreeController.algorithm_check` above.
cleanup()[](#fitbenchmarking.controllers.matlab_opt_controller.MatlabOptController.cleanup)
Convert the result to a numpy array and populate the variables that results will be read from.
controller_name *= 'matlab_opt'*[](#fitbenchmarking.controllers.matlab_opt_controller.MatlabOptController.controller_name)
A name to be used in tables. If this is set to None it will be inferred from the class name.
fit()[](#fitbenchmarking.controllers.matlab_opt_controller.MatlabOptController.fit)
Run problem with Matlab Optimization Toolbox
incompatible_problems *= ['mantid']*[](#fitbenchmarking.controllers.matlab_opt_controller.MatlabOptController.incompatible_problems)
A list of incompatible problem formats for this controller.
jacobian_enabled_solvers *= ['levenberg-marquardt', 'trust-region-reflective']*[](#fitbenchmarking.controllers.matlab_opt_controller.MatlabOptController.jacobian_enabled_solvers)
See the `jacobian_enabled_solvers` description under `GSLController.jacobian_enabled_solvers` above.
setup()[](#fitbenchmarking.controllers.matlab_opt_controller.MatlabOptController.setup)
Setup for Matlab Optimization Toolbox fitting
######### fitbenchmarking.controllers.matlab_stats_controller module[](#module-fitbenchmarking.controllers.matlab_stats_controller)
Implements a controller for MATLAB Statistics Toolbox
*class* fitbenchmarking.controllers.matlab_stats_controller.MatlabStatsController(*cost_func*)[](#fitbenchmarking.controllers.matlab_stats_controller.MatlabStatsController)
Bases: [`fitbenchmarking.controllers.matlab_mixin.MatlabMixin`](index.html#fitbenchmarking.controllers.matlab_mixin.MatlabMixin), [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for MATLAB Statistics Toolbox fitting (nlinfit)
algorithm_check *= {'all': ['Levenberg-Marquardt'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': [], 'gauss_newton': [], 'general': [], 'global_optimization': [], 'levenberg-marquardt': ['Levenberg-Marquardt'], 'ls': ['Levenberg-Marquardt'], 'simplex': [], 'steepest_descent': [], 'trust_region': ['Levenberg-Marquardt']}*[](#fitbenchmarking.controllers.matlab_stats_controller.MatlabStatsController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
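The shape of this dictionary can be illustrated with a minimal, hypothetical sketch. The class and minimizer names below are invented for illustration and are not FitBenchmarking's actual API; the point is that every key must be present, with an empty list meaning "no minimizer in this category":

```python
# Hypothetical controller-style class declaring algorithm_check.
class ToyController:
    # Every key must be present; an empty list means the software
    # offers no minimizer in that category.
    algorithm_check = {
        "all": ["migrad", "simplex"],
        "ls": [],
        "deriv_free": ["simplex"],
        "general": ["migrad"],
        "simplex": ["simplex"],
        "trust_region": [],
        "levenberg-marquardt": [],
        "gauss_newton": [],
        "bfgs": [],
        "conjugate_gradient": [],
        "steepest_descent": [],
        "global_optimization": [],
    }

    def minimizers_for(self, algorithm_type):
        """Return the minimizers this software offers for a category."""
        if algorithm_type not in self.algorithm_check:
            raise KeyError(f"unknown algorithm_type: {algorithm_type!r}")
        return self.algorithm_check[algorithm_type]

ctrl = ToyController()
simplex_minimizers = ctrl.minimizers_for("simplex")
print(simplex_minimizers)   # ['simplex']
```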
cleanup()[](#fitbenchmarking.controllers.matlab_stats_controller.MatlabStatsController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from.
controller_name *= 'matlab_stats'*[](#fitbenchmarking.controllers.matlab_stats_controller.MatlabStatsController.controller_name)
A name to be used in tables. If this is set to None it will be inferred from the class name.
fit()[](#fitbenchmarking.controllers.matlab_stats_controller.MatlabStatsController.fit)
Run problem with Matlab Statistics Toolbox
incompatible_problems *= ['mantid']*[](#fitbenchmarking.controllers.matlab_stats_controller.MatlabStatsController.incompatible_problems)
A list of incompatible problem formats for this controller.
setup()[](#fitbenchmarking.controllers.matlab_stats_controller.MatlabStatsController.setup)
Setup for Matlab fitting
######### fitbenchmarking.controllers.minuit_controller module[](#module-fitbenchmarking.controllers.minuit_controller)
Implements a controller for the CERN package Minuit
<https://seal.web.cern.ch/seal/snapshot/work-packages/mathlibs/minuit/>
using the iminuit python interface
<http://iminuit.readthedocs.org>
*class* fitbenchmarking.controllers.minuit_controller.MinuitController(*cost_func*)[](#fitbenchmarking.controllers.minuit_controller.MinuitController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the Minuit fitting software
algorithm_check *= {'all': ['migrad', 'simplex'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': ['simplex'], 'gauss_newton': [], 'general': ['migrad'], 'global_optimization': [], 'levenberg-marquardt': [], 'ls': [], 'simplex': ['simplex'], 'steepest_descent': [], 'trust_region': []}*[](#fitbenchmarking.controllers.minuit_controller.MinuitController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.minuit_controller.MinuitController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from
fit()[](#fitbenchmarking.controllers.minuit_controller.MinuitController.fit)
Run problem with Minuit
setup()[](#fitbenchmarking.controllers.minuit_controller.MinuitController.setup)
Setup for Minuit
######### fitbenchmarking.controllers.nlopt_controller module[](#module-fitbenchmarking.controllers.nlopt_controller)
Implements a controller for the NLOPT software.
*class* fitbenchmarking.controllers.nlopt_controller.NLoptController(*cost_func*)[](#fitbenchmarking.controllers.nlopt_controller.NLoptController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for NLOPT
algorithm_check *= {'all': ['LN_BOBYQA', 'LN_NEWUOA', 'LN_NEWUOA_BOUND', 'LN_PRAXIS', 'LD_SLSQP', 'LD_VAR2', 'LD_VAR1', 'AUGLAG', 'AUGLAG_EQ', 'LN_NELDERMEAD', 'LN_SBPLX', 'LN_COBYLA', 'LD_CCSAQ', 'LD_MMA', 'LD_TNEWTON_PRECOND_RESTART', 'LD_TNEWTON_PRECOND', 'LD_TNEWTON_RESTART', 'LD_TNEWTON', 'LD_LBFGS', 'GN_DIRECT', 'GN_DIRECT_L', 'GN_DIRECT_L_RAND', 'GNL_DIRECT_NOSCAL', 'GN_DIRECT_L_NOSCAL', 'GN_DIRECT_L_RAND_NOSCAL', 'GN_ORIG_DIRECT', 'GN_ORIG_DIRECT_L', 'GN_CRS2_LM', 'G_MLSL_LDS', 'G_MLSL', 'GD_STOGO', 'GD_STOGO_RAND', 'GN_AGS', 'GN_ISRES'], 'bfgs': ['LD_LBFGS'], 'conjugate_gradient': ['LN_COBYLA'], 'deriv_free': ['LN_BOBYQA', 'LN_NEWUOA', 'LN_NEWUOA_BOUND', 'LN_PRAXIS'], 'gauss_newton': ['LD_TNEWTON_PRECOND_RESTART', 'LD_TNEWTON_PRECOND', 'LD_TNEWTON_RESTART', 'LD_TNEWTON'], 'general': ['LD_SLSQP', 'LD_VAR2', 'LD_VAR1', 'AUGLAG', 'AUGLAG_EQ'], 'global_optimization': ['GN_DIRECT', 'GN_DIRECT_L', 'GN_DIRECT_L_RAND', 'GNL_DIRECT_NOSCAL', 'GN_DIRECT_L_NOSCAL', 'GN_DIRECT_L_RAND_NOSCAL', 'GN_ORIG_DIRECT', 'GN_ORIG_DIRECT_L', 'GN_CRS2_LM', 'G_MLSL_LDS', 'G_MLSL', 'GD_STOGO', 'GD_STOGO_RAND', 'GN_AGS', 'GN_ISRES'], 'levenberg-marquardt': [], 'ls': [], 'simplex': ['LN_NELDERMEAD', 'LN_SBPLX'], 'steepest_descent': [], 'trust_region': ['LN_COBYLA', 'LD_CCSAQ', 'LD_MMA']}*[](#fitbenchmarking.controllers.nlopt_controller.NLoptController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.nlopt_controller.NLoptController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from
fit()[](#fitbenchmarking.controllers.nlopt_controller.NLoptController.fit)
Run problem with NLOPT
jacobian_enabled_solvers *= ['LD_SLSQP', 'LD_VAR2', 'LD_VAR1', 'LD_CCSAQ', 'LD_MMA', 'LD_TNEWTON_PRECOND_RESTART', 'LD_TNEWTON_PRECOND', 'LD_TNEWTON_RESTART', 'LD_TNEWTON', 'GD_STOGO', 'GD_STOGO_RAND']*[](#fitbenchmarking.controllers.nlopt_controller.NLoptController.jacobian_enabled_solvers)
Within the controller class, you must define the list
`jacobian_enabled_solvers` if any of the minimizers for the specific software are able to use Jacobian information.
* `jacobian_enabled_solvers`: a list of minimizers in a specific
software that allow Jacobian information to be passed into the fitting algorithm
objective_master_nlopt(*x*, *grad*)[](#fitbenchmarking.controllers.nlopt_controller.NLoptController.objective_master_nlopt)
NLOPT objective function
setup()[](#fitbenchmarking.controllers.nlopt_controller.NLoptController.setup)
Setup problem ready to be run with NLOPT
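NLOPT's Python interface expects objectives with the `(x, grad)` signature that `objective_master_nlopt` implements: when the gradient array is non-empty, the solver expects it to be filled in place. A minimal pure-Python sketch of that convention (no `nlopt` dependency; the quadratic objective is invented for illustration):

```python
# NLOPT-style objective: return f(x) and write the gradient IN PLACE
# into `grad` when the solver supplies a non-empty gradient array.
def objective(x, grad):
    # f(x) = (x0 - 1)^2 + (x1 - 2)^2
    if grad is not None and len(grad) > 0:
        grad[0] = 2.0 * (x[0] - 1.0)   # df/dx0, written in place
        grad[1] = 2.0 * (x[1] - 2.0)   # df/dx1, written in place
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

g = [0.0, 0.0]
val = objective([3.0, 2.0], g)
print(val, g)   # 4.0 [4.0, 0.0]
```

Gradient-free NLOPT algorithms (the `LN_*` family) call the same signature with an empty `grad`, which is why the in-place guard is needed.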
######### fitbenchmarking.controllers.ralfit_controller module[](#module-fitbenchmarking.controllers.ralfit_controller)
Implements a controller for RALFit
<https://github.com/ralna/RALFit>
*class* fitbenchmarking.controllers.ralfit_controller.RALFitController(*cost_func*)[](#fitbenchmarking.controllers.ralfit_controller.RALFitController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the RALFit fitting software.
algorithm_check *= {'all': ['gn', 'hybrid', 'newton', 'newton-tensor', 'gn_reg', 'hybrid_reg', 'newton_reg', 'newton-tensor_reg'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': [], 'gauss_newton': ['gn', 'gn_reg'], 'general': [], 'global_optimization': [], 'levenberg-marquardt': ['gn', 'gn_reg'], 'ls': ['gn', 'hybrid', 'newton', 'newton-tensor', 'gn_reg', 'hybrid_reg', 'newton_reg', 'newton-tensor_reg'], 'simplex': [], 'steepest_descent': [], 'trust_region': ['gn', 'hybrid', 'newton', 'newton-tensor']}*[](#fitbenchmarking.controllers.ralfit_controller.RALFitController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.ralfit_controller.RALFitController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from.
fit()[](#fitbenchmarking.controllers.ralfit_controller.RALFitController.fit)
Run problem with RALFit.
hes_eval(*params*, *r*)[](#fitbenchmarking.controllers.ralfit_controller.RALFitController.hes_eval)
Function to ensure correct inputs and outputs are used for the RALFit Hessian evaluation
Parameters
* **params** (*numpy array*) – parameters
* **r** (*numpy array*) – residuals, required by RALFit to be passed for Hessian evaluation
Returns Hessian second-order term: \(\sum_{i=1}^m r_i \nabla^2 r_i\)
Return type numpy array
hessian_enabled_solvers *= ['hybrid', 'newton', 'newton-tensor', 'hybrid_reg', 'newton_reg', 'newton-tensor_reg']*[](#fitbenchmarking.controllers.ralfit_controller.RALFitController.hessian_enabled_solvers)
Within the controller class, you must define the list
`hessian_enabled_solvers` if any of the minimizers for the specific software are able to use Hessian information.
* `hessian_enabled_solvers`: a list of minimizers in a specific
software that allow Hessian information to be passed into the fitting algorithm
jacobian_enabled_solvers *= ['gn', 'hybrid', 'newton', 'newton-tensor', 'gn_reg', 'hybrid_reg', 'newton_reg', 'newton-tensor_reg']*[](#fitbenchmarking.controllers.ralfit_controller.RALFitController.jacobian_enabled_solvers)
Within the controller class, you must define the list
`jacobian_enabled_solvers` if any of the minimizers for the specific software are able to use Jacobian information.
* `jacobian_enabled_solvers`: a list of minimizers in a specific
software that allow Jacobian information to be passed into the fitting algorithm
setup()[](#fitbenchmarking.controllers.ralfit_controller.RALFitController.setup)
Setup for RALFit
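The `jacobian_enabled_solvers` and `hessian_enabled_solvers` lists above lend themselves to a simple capability lookup. A hedged sketch (the `supports` helper is hypothetical, not part of the package; the lists are copied from the RALFit controller attributes above):

```python
# Class-level capability lists, as documented for the RALFit controller.
jacobian_enabled_solvers = ["gn", "hybrid", "newton", "newton-tensor",
                            "gn_reg", "hybrid_reg", "newton_reg",
                            "newton-tensor_reg"]
hessian_enabled_solvers = ["hybrid", "newton", "newton-tensor",
                           "hybrid_reg", "newton_reg", "newton-tensor_reg"]

def supports(minimizer):
    """Report which derivative information a minimizer can consume."""
    return {
        "jacobian": minimizer in jacobian_enabled_solvers,
        "hessian": minimizer in hessian_enabled_solvers,
    }

print(supports("gn"))      # {'jacobian': True, 'hessian': False}
print(supports("hybrid"))  # {'jacobian': True, 'hessian': True}
```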
######### fitbenchmarking.controllers.scipy_controller module[](#module-fitbenchmarking.controllers.scipy_controller)
Implements a controller for the scipy fitting software.
In particular, here for the scipy minimize solver for general minimization problems.
*class* fitbenchmarking.controllers.scipy_controller.ScipyController(*cost_func*)[](#fitbenchmarking.controllers.scipy_controller.ScipyController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the Scipy fitting software.
algorithm_check *= {'all': ['Nelder-Mead', 'Powell', 'CG', 'BFGS', 'Newton-CG', 'L-BFGS-B', 'TNC', 'SLSQP', 'COBYLA', 'trust-ncg', 'trust-exact', 'trust-krylov', 'trust-constr', 'dogleg'], 'bfgs': ['BFGS', 'L-BFGS-B'], 'conjugate_gradient': ['CG', 'Newton-CG', 'Powell'], 'deriv_free': ['Nelder-Mead', 'Powell', 'COBYLA'], 'gauss_newton': [], 'general': ['Nelder-Mead', 'Powell', 'CG', 'BFGS', 'Newton-CG', 'L-BFGS-B', 'TNC', 'SLSQP'], 'global_optimization': [], 'levenberg-marquardt': [], 'ls': [None], 'simplex': ['Nelder-Mead'], 'steepest_descent': [], 'trust_region': ['trust-ncg', 'trust-exact', 'trust-krylov', 'trust-constr', 'dogleg']}*[](#fitbenchmarking.controllers.scipy_controller.ScipyController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.scipy_controller.ScipyController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from.
fit()[](#fitbenchmarking.controllers.scipy_controller.ScipyController.fit)
Run problem with Scipy.
hessian_enabled_solvers *= ['Newton-CG', 'trust-ncg', 'trust-exact', 'trust-krylov', 'trust-constr', 'dogleg']*[](#fitbenchmarking.controllers.scipy_controller.ScipyController.hessian_enabled_solvers)
Within the controller class, you must define the list
`hessian_enabled_solvers` if any of the minimizers for the specific software are able to use Hessian information.
* `hessian_enabled_solvers`: a list of minimizers in a specific
software that allow Hessian information to be passed into the fitting algorithm
jacobian_enabled_solvers *= ['CG', 'BFGS', 'Newton-CG', 'L-BFGS-B', 'TNC', 'SLSQP', 'trust-ncg', 'trust-exact', 'trust-krylov', 'trust-constr', 'dogleg']*[](#fitbenchmarking.controllers.scipy_controller.ScipyController.jacobian_enabled_solvers)
Within the controller class, you must define the list
`jacobian_enabled_solvers` if any of the minimizers for the specific software are able to use Jacobian information.
* `jacobian_enabled_solvers`: a list of minimizers in a specific
software that allow Jacobian information to be passed into the fitting algorithm
setup()[](#fitbenchmarking.controllers.scipy_controller.ScipyController.setup)
Setup problem ready to be run with SciPy
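This controller ultimately delegates to `scipy.optimize.minimize`. A hedged usage sketch with one of the minimizers listed above; the data and linear model are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Invented noiseless data from a straight line: slope 2, intercept 1.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0

def cost(p):
    """Sum-of-squares cost for the model p[0]*x + p[1]."""
    return np.sum((y - (p[0] * x + p[1])) ** 2)

# 'BFGS' is one of the gradient-based minimizers in algorithm_check;
# SciPy estimates the gradient numerically when none is supplied.
res = minimize(cost, x0=[0.0, 0.0], method="BFGS")
slope, intercept = res.x
print(round(slope, 3), round(intercept, 3))   # ≈ 2.0 1.0
```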
######### fitbenchmarking.controllers.scipy_go_controller module[](#module-fitbenchmarking.controllers.scipy_go_controller)
Implements a controller for the scipy fitting software.
In particular, here for the scipy minimize solver for general minimization problems.
*class* fitbenchmarking.controllers.scipy_go_controller.ScipyGOController(*cost_func*)[](#fitbenchmarking.controllers.scipy_go_controller.ScipyGOController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the Scipy fitting software.
algorithm_check *= {'all': ['differential_evolution', 'shgo', 'dual_annealing'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': ['differential_evolution'], 'gauss_newton': [], 'general': ['differential_evolution', 'shgo', 'dual_annealing'], 'global_optimization': ['differential_evolution', 'shgo', 'dual_annealing'], 'levenberg-marquardt': [], 'ls': [None], 'simplex': [], 'steepest_descent': [], 'trust_region': []}*[](#fitbenchmarking.controllers.scipy_go_controller.ScipyGOController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.scipy_go_controller.ScipyGOController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from.
controller_name *= 'scipy_go'*[](#fitbenchmarking.controllers.scipy_go_controller.ScipyGOController.controller_name)
A name to be used in tables. If this is set to None it will be inferred from the class name.
fit()[](#fitbenchmarking.controllers.scipy_go_controller.ScipyGOController.fit)
Run problem with Scipy GO.
jacobian_enabled_solvers *= ['shgo', 'dual_annealing']*[](#fitbenchmarking.controllers.scipy_go_controller.ScipyGOController.jacobian_enabled_solvers)
Within the controller class, you must define the list
`jacobian_enabled_solvers` if any of the minimizers for the specific software are able to use Jacobian information.
* `jacobian_enabled_solvers`: a list of minimizers in a specific
software that allow Jacobian information to be passed into the fitting algorithm
setup()[](#fitbenchmarking.controllers.scipy_go_controller.ScipyGOController.setup)
Setup problem ready to be run with SciPy GO
######### fitbenchmarking.controllers.scipy_ls_controller module[](#module-fitbenchmarking.controllers.scipy_ls_controller)
Implements a controller for the scipy ls fitting software.
In particular, for the scipy least_squares solver.
*class* fitbenchmarking.controllers.scipy_ls_controller.ScipyLSController(*cost_func*)[](#fitbenchmarking.controllers.scipy_ls_controller.ScipyLSController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for the Scipy Least-Squares fitting software.
algorithm_check *= {'all': ['lm-scipy', 'trf', 'dogbox'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': [None], 'gauss_newton': [], 'general': [None], 'global_optimization': [], 'levenberg-marquardt': ['lm-scipy'], 'ls': ['lm-scipy', 'trf', 'dogbox'], 'simplex': [], 'steepest_descent': [], 'trust_region': ['lm-scipy', 'trf', 'dogbox']}*[](#fitbenchmarking.controllers.scipy_ls_controller.ScipyLSController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.scipy_ls_controller.ScipyLSController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from.
controller_name *= 'scipy_ls'*[](#fitbenchmarking.controllers.scipy_ls_controller.ScipyLSController.controller_name)
A name to be used in tables. If this is set to None it will be inferred from the class name.
fit()[](#fitbenchmarking.controllers.scipy_ls_controller.ScipyLSController.fit)
Run problem with Scipy LS.
jacobian_enabled_solvers *= ['lm-scipy', 'trf', 'dogbox']*[](#fitbenchmarking.controllers.scipy_ls_controller.ScipyLSController.jacobian_enabled_solvers)
Within the controller class, you must define the list
`jacobian_enabled_solvers` if any of the minimizers for the specific software are able to use Jacobian information.
* `jacobian_enabled_solvers`: a list of minimizers in a specific
software that allow Jacobian information to be passed into the fitting algorithm
setup()[](#fitbenchmarking.controllers.scipy_ls_controller.ScipyLSController.setup)
Setup problem ready to be run with SciPy LS
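This controller wraps `scipy.optimize.least_squares`, and the `jac` argument is how Jacobian information reaches the `trf`, `dogbox`, and Levenberg-Marquardt minimizers listed in `jacobian_enabled_solvers`. A hedged sketch with an analytic Jacobian; the exponential-decay data are invented:

```python
import numpy as np
from scipy.optimize import least_squares

# Invented data: exp(-1.5 * x) sampled on [0, 1].
x = np.linspace(0.0, 1.0, 20)
y = np.exp(-1.5 * x)

def resid(p):
    """Residual vector y_i - exp(p0 * x_i), one entry per data point."""
    return y - np.exp(p[0] * x)

def jac(p):
    # d(resid_i)/dp0 = -x_i * exp(p0 * x_i); shape (m, 1) for m points.
    return (-x * np.exp(p[0] * x)).reshape(-1, 1)

res = least_squares(resid, x0=[0.0], jac=jac, method="trf")
print(res.x)   # ≈ [-1.5]
```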
######### fitbenchmarking.controllers.theseus_controller module[](#module-fitbenchmarking.controllers.theseus_controller)
Implements a controller for the Theseus fitting software.
*class* fitbenchmarking.controllers.theseus_controller.TheseusController(*cost_func*)[](#fitbenchmarking.controllers.theseus_controller.TheseusController)
Bases: [`fitbenchmarking.controllers.base_controller.Controller`](index.html#fitbenchmarking.controllers.base_controller.Controller)
Controller for Theseus
algorithm_check *= {'all': ['Levenberg_Marquardt', 'Gauss-Newton'], 'bfgs': [], 'conjugate_gradient': [], 'deriv_free': [], 'gauss_newton': ['Gauss-Newton'], 'general': [], 'global_optimization': [], 'levenberg-marquardt': ['Levenberg_Marquardt'], 'ls': ['Levenberg_Marquardt', 'Gauss-Newton'], 'simplex': [], 'steepest_descent': [], 'trust_region': []}*[](#fitbenchmarking.controllers.theseus_controller.TheseusController.algorithm_check)
Within the controller class, you must initialize a dictionary, `algorithm_check`,
such that the **keys** are given by:
> * `all` - all minimizers
> * `ls` - least-squares fitting algorithms
> * `deriv_free` - derivative free algorithms (these are algorithms
> that cannot use information about derivatives – e.g., the
> `Simplex` method in `Mantid`)
> * `general` - minimizers which solve a generic min f(x)
> * `simplex` - derivative free simplex based algorithms
> e.g. Nelder-Mead
> * `trust_region` - algorithms which employ a trust region approach
> * `levenberg-marquardt` - minimizers that use the
> Levenberg-Marquardt algorithm
> * `gauss_newton` - minimizers that use the Gauss Newton algorithm
> * `bfgs` - minimizers that use the BFGS algorithm
> * `conjugate_gradient` - Conjugate Gradient algorithms
> * `steepest_descent` - Steepest Descent algorithms
> * `global_optimization` - Global Optimization algorithms
The **values** of the dictionary are given as a list of minimizers for that specific controller that fit into each of the above categories. See for example the `GSL` controller.
cleanup()[](#fitbenchmarking.controllers.theseus_controller.TheseusController.cleanup)
Convert the result to a numpy array and populate the variables results will be read from
fit()[](#fitbenchmarking.controllers.theseus_controller.TheseusController.fit)
Run problem with Theseus
jacobian_enabled_solvers *= ['Levenberg_Marquardt', 'Gauss-Newton']*[](#fitbenchmarking.controllers.theseus_controller.TheseusController.jacobian_enabled_solvers)
Within the controller class, you must define the list
`jacobian_enabled_solvers` if any of the minimizers for the specific software are able to use Jacobian information.
* `jacobian_enabled_solvers`: a list of minimizers in a specific
software that allow Jacobian information to be passed into the fitting algorithm
setup()[](#fitbenchmarking.controllers.theseus_controller.TheseusController.setup)
Setup problem ready to be run with Theseus
*class* fitbenchmarking.controllers.theseus_controller.TheseusCostFunction(**args: Any*, ***kwargs: Any*)[](#fitbenchmarking.controllers.theseus_controller.TheseusCostFunction)
Bases: `theseus.CostFunction`
Cost function for Theseus ai
dim() → int[](#fitbenchmarking.controllers.theseus_controller.TheseusCostFunction.dim)
Length of the x data
error() → Tuple[List[torch.Tensor], List[float]][](#fitbenchmarking.controllers.theseus_controller.TheseusCostFunction.error)
Residuals in PyTorch tensor form for Theseus ai
jacobians() → Tuple[List[torch.Tensor], torch.Tensor][](#fitbenchmarking.controllers.theseus_controller.TheseusCostFunction.jacobians)
Jacobians in PyTorch tensor form for Theseus ai
######## Module contents[](#module-fitbenchmarking.controllers)
####### fitbenchmarking.core package[](#fitbenchmarking-core-package)
######## Submodules[](#submodules)
######### fitbenchmarking.core.fitting_benchmarking module[](#fitbenchmarking-core-fitting-benchmarking-module)
######### fitbenchmarking.core.results_output module[](#module-fitbenchmarking.core.results_output)
Functions that create the tables, support pages, figures, and indexes.
fitbenchmarking.core.results_output.create_directories(*options*, *group_name*)[](#fitbenchmarking.core.results_output.create_directories)
Create the directory structure ready to store the results
Parameters
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The options used in the fitting problem and plotting
* **group_name** (*str*) – name of the problem group
Returns paths to the top level group results, support pages,
and figures directories
Return type
(str, str, str)
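A minimal sketch of the directory layout this function is documented to produce. The exact subdirectory names here (`support_pages`, `figures`) are assumptions for illustration, not necessarily the package's actual choices:

```python
import os
import tempfile

def create_result_dirs(results_dir, group_name):
    """Create group/support/figures directories; idempotent on re-runs."""
    group_dir = os.path.join(results_dir, group_name)
    support_dir = os.path.join(group_dir, "support_pages")
    figures_dir = os.path.join(support_dir, "figures")
    for d in (group_dir, support_dir, figures_dir):
        os.makedirs(d, exist_ok=True)   # no error if it already exists
    return group_dir, support_dir, figures_dir

with tempfile.TemporaryDirectory() as tmp:
    dirs = create_result_dirs(tmp, "NIST_low_difficulty")
    all_created = all(os.path.isdir(d) for d in dirs)
print(all_created)   # True
```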
fitbenchmarking.core.results_output.create_index_page(*options: [Options](index.html#fitbenchmarking.utils.options.Options)*, *groups: list[str]*, *result_directories: list[str]*) → str[](#fitbenchmarking.core.results_output.create_index_page)
Creates the results index page for the benchmark, and copies the fonts and js directories to the correct location.
Parameters
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The user options for the benchmark.
* **groups** (*list[str]*) – Names for each of the problem set groups.
* **result_directories** (*list[str]*) – Result directory paths for each problem set group.
Returns The filepath of the results_index.html file.
Return type str
fitbenchmarking.core.results_output.create_plots(*options*, *results*, *best_results*, *figures_dir*)[](#fitbenchmarking.core.results_output.create_plots)
Create a plot for each result and store in the figures directory
Parameters
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The options used in the fitting problem and plotting
* **results** (*list of list of fitbenchmarking.utils.fitbm_result.FittingResult*) – results nested array of objects
* **best_results** (*list**[**dict**[**str: fitbenchmarking.utils.fitbm_result.FittingResult**]**]*) – best result for each problem separated by cost function
* **figures_dir** (*str*) – Path to directory to store the figures in
fitbenchmarking.core.results_output.create_problem_level_index(*options*, *table_names*, *group_name*, *group_dir*, *table_descriptions*)[](#fitbenchmarking.core.results_output.create_problem_level_index)
Generates problem level index page.
Parameters
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The options used in the fitting problem and plotting
* **table_names** (*list*) – list of table names
* **group_name** (*str*) – name of the problem group
* **group_dir** (*str*) – Path to the directory where the index should be stored
* **table_descriptions** (*dict*) – dictionary containing descriptions of the tables and the comparison mode
fitbenchmarking.core.results_output.open_browser(*output_file: str*, *options*) → None[](#fitbenchmarking.core.results_output.open_browser)
Opens a browser window to show the results of a fit benchmark.
Parameters
**output_file** (*str*) – The absolute path to the results index file.
fitbenchmarking.core.results_output.preprocess_data(*results: list[[FittingResult](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)]*)[](#fitbenchmarking.core.results_output.preprocess_data)
Generate a dictionary of results lists sorted into the correct order with rows and columns as the key and list elements respectively.
This is used to create HTML and txt tables.
This is stored in self.sorted_results
Parameters
**results** (*list**[*[*fitbenchmarking.utils.fitbm_result.FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]*) – The results to process
Returns The best result grouped by row and category (cost function),
The sorted results grouped by row and category
Return type dict[str, dict[str, [utils.fitbm_result.FittingResult](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)]],
dict[str, dict[str, list[[utils.fitbm_result.FittingResult](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)]]]
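The row/category grouping described above can be sketched with plain dictionaries. The `Result` record below is a stand-in for `fitbenchmarking.utils.fitbm_result.FittingResult`, and its field names (`problem`, `cost_func`, `chi_sq`) are assumptions for illustration:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Result:
    problem: str      # row key
    cost_func: str    # category (cost function) key
    chi_sq: float     # fit quality; lower is better

def preprocess(results):
    """Group results by row and category, keeping the best per cell."""
    sorted_results = defaultdict(lambda: defaultdict(list))
    for r in results:
        sorted_results[r.problem][r.cost_func].append(r)
    best = defaultdict(dict)
    for prob, cats in sorted_results.items():
        for cat, rs in cats.items():
            best[prob][cat] = min(rs, key=lambda r: r.chi_sq)
    return best, sorted_results

res = [Result("p1", "nlls", 2.0), Result("p1", "nlls", 0.5)]
best, grouped = preprocess(res)
print(best["p1"]["nlls"].chi_sq)   # 0.5
```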
fitbenchmarking.core.results_output.save_results(*options*, *results*, *group_name*, *failed_problems*, *unselected_minimizers*)[](#fitbenchmarking.core.results_output.save_results)
Create all results files and store them.
Result files are plots, support pages, tables, and index pages.
Parameters
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The options used in the fitting problem and plotting
* **results** (*list**[*[*fitbenchmarking.utils.fitbm_result.FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]*) – The results from fitting
* **group_name** (*str*) – name of the problem group
* **failed_problems** (*list*) – list of failed problems to be reported in the html output
* **unselected_minimizers** (*dict*) – Dictionary containing unselected minimizers based on the algorithm_type option
Returns Path to directory of group results
Return type str
######## Module contents[](#module-fitbenchmarking.core)
####### fitbenchmarking.cost_func package[](#fitbenchmarking-cost-func-package)
######## Submodules[](#submodules)
######### fitbenchmarking.cost_func.base_cost_func module[](#module-fitbenchmarking.cost_func.base_cost_func)
Implements the base class for the cost function class.
*class* fitbenchmarking.cost_func.base_cost_func.CostFunc(*problem*)[](#fitbenchmarking.cost_func.base_cost_func.CostFunc)
Bases: `object`
Base class for the cost functions.
*abstract* eval_cost(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.base_cost_func.CostFunc.eval_cost)
Evaluate the cost function
Parameters
**params** (*list*) – The parameters to calculate residuals for
Returns evaluated cost function
Return type float
*abstract* hes_cost(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.base_cost_func.CostFunc.hes_cost)
Uses the Hessian of the model to evaluate the Hessian of the cost function, \(\nabla_p^2 F(r(x,y,p))\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Hessians
Returns evaluated Hessian of the cost function
Return type 2D numpy array
*abstract* hes_res(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.base_cost_func.CostFunc.hes_res)
Uses the Hessian of the model to evaluate the Hessian of the cost function residual, \(\nabla_p^2 r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Hessians
Returns evaluated Hessian and Jacobian of the residual at each x, y pair
Return type tuple (list of 2D numpy arrays, list of 1D numpy arrays)
*abstract* jac_cost(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.base_cost_func.CostFunc.jac_cost)
Uses the Jacobian of the model to evaluate the Jacobian of the cost function, \(\nabla_p F(r(x,y,p))\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Jacobians
Returns evaluated Jacobian of the cost function
Return type 1D numpy array
*abstract* jac_res(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.base_cost_func.CostFunc.jac_res)
Uses the Jacobian of the model to evaluate the Jacobian of the cost function residual, \(\nabla_p r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Jacobians
Returns evaluated Jacobian of the residual at each x, y pair
Return type a list of 1D numpy arrays
validate_algorithm_type(*algorithm_check*, *minimizer*)[](#fitbenchmarking.cost_func.base_cost_func.CostFunc.validate_algorithm_type)
Helper function which checks that the algorithm type of the minimizer selected in the options (options.minimizer)
is compatible with the selected cost function
Parameters
* **algorithm_check** (*dict*) – dictionary object containing algorithm types and minimizers for selected software
* **minimizer** (*str*) – string of minimizers selected from the options
validate_problem()[](#fitbenchmarking.cost_func.base_cost_func.CostFunc.validate_problem)
Helper function to check if the problem definition is compatible with the cost function.
By default do not raise an error.
######### fitbenchmarking.cost_func.cost_func_factory module[](#module-fitbenchmarking.cost_func.cost_func_factory)
This file contains a factory implementation for the available cost functions within FitBenchmarking.
fitbenchmarking.cost_func.cost_func_factory.create_cost_func(*cost_func_type*)[](#fitbenchmarking.cost_func.cost_func_factory.create_cost_func)
Create a cost function class.
Parameters
**cost_func_type** (*str*) – Type of cost function selected from options
Returns Cost function class for the problem
Return type fitbenchmarking.cost_func.base_cost_func.CostFunc subclass
######### fitbenchmarking.cost_func.hellinger_nlls_cost_func module[](#module-fitbenchmarking.cost_func.hellinger_nlls_cost_func)
Implements the Hellinger non-linear least squares cost function
*class* fitbenchmarking.cost_func.hellinger_nlls_cost_func.HellingerNLLSCostFunc(*problem*)[](#fitbenchmarking.cost_func.hellinger_nlls_cost_func.HellingerNLLSCostFunc)
Bases: [`fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc`](index.html#fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc)
This defines the Hellinger non-linear least squares cost function where,
given a set of \(n\) data points \((x_i,y_i)\), associated errors
\(e_i\), and a model function \(f(x,p)\), we find the optimal parameters in the Hellinger least-squares sense by solving:
\[\min_p \sum_{i=1}^n
\left(\sqrt{y_i} - \sqrt{f(x_i, p)}\right)^2\]
where \(p\) is a vector of length \(m\), and we start from a given initial guess for the optimal parameters. More information on non-linear least squares cost functions can be found
[here](https://en.wikipedia.org/wiki/Non-linear_least_squares) and for the Hellinger distance measure see
[here](https://en.wikipedia.org/wiki/Hellinger_distance).
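As a sketch of the formula above (with an invented model and data, purely for illustration, not FitBenchmarking code), the Hellinger residuals and the resulting cost can be computed directly with NumPy:

```python
import numpy as np

# Hypothetical model, invented for illustration of the Hellinger residual.
def f(x, p):
    """Simple exponential model f(x, p) = p[0] * exp(p[1] * x)."""
    return p[0] * np.exp(p[1] * x)

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 4.0])
p = [1.0, 0.5]

# Hellinger residuals: sqrt(y_i) - sqrt(f(x_i, p)).
# Requires y >= 0 and f(x, p) >= 0, hence the negative-value check below.
r = np.sqrt(y) - np.sqrt(f(x, p))

# The cost is the sum of squared residuals.
cost = np.sum(r**2)
```

Note the square roots are why `validate_problem` below rejects problems with negative values.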
eval_r(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.hellinger_nlls_cost_func.HellingerNLLSCostFunc.eval_r)
Calculate the residuals, \(\sqrt{y_i} - \sqrt{f(x_i, p)}\)
Parameters
**params** (*list*) – The parameters, \(p\), to calculate residuals for
Returns The residuals for the datapoints at the given parameters
Return type numpy array
hes_res(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.hellinger_nlls_cost_func.HellingerNLLSCostFunc.hes_res)
Uses the Hessian of the model to evaluate the Hessian of the cost function residual, \(\nabla_p^2 r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Hessians
Returns evaluated Hessian and Jacobian of the residual at each x, y pair
Return type tuple (list of 2D numpy arrays, list of 1D numpy arrays)
jac_res(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.hellinger_nlls_cost_func.HellingerNLLSCostFunc.jac_res)
Uses the Jacobian of the model to evaluate the Jacobian of the cost function residual, \(\nabla_p r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Jacobians
Returns evaluated Jacobian of the residual at each x, y pair
Return type a list of 1D numpy arrays
validate_problem()[](#fitbenchmarking.cost_func.hellinger_nlls_cost_func.HellingerNLLSCostFunc.validate_problem)
Validate the problem for the Hellinger Cost Function.
Hellinger involves a square root so will fail on negative inputs.
Raises:
IncompatibleCostFunctionError: When the problem has negative values.
######### fitbenchmarking.cost_func.nlls_base_cost_func module[](#module-fitbenchmarking.cost_func.nlls_base_cost_func)
Implements the base non-linear least squares cost function
*class* fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc(*problem*)[](#fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc)
Bases: [`fitbenchmarking.cost_func.base_cost_func.CostFunc`](index.html#fitbenchmarking.cost_func.base_cost_func.CostFunc)
This defines a base cost function for objectives of the type
\[\min_p \sum_{i=1}^n r(y_i, x_i, p)^2\]
where \(p\) is a vector of length \(m\), and we start from a given initial guess for the optimal parameters.
eval_cost(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc.eval_cost)
Evaluate the square of the L2 norm of the residuals,
\(\sum_i r(x_i,y_i,p)^2\)
at the given parameters
Parameters
**params** (*list*) – The parameters, \(p\), to calculate residuals for
Returns The sum of squares of residuals for the datapoints at the given parameters
Return type float
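The division of labour described above — subclasses supply `eval_r`, and `eval_cost` squares and sums it — can be illustrated with a toy stand-in (these classes are illustrative only, not the fitbenchmarking API):

```python
import numpy as np

# Minimal stand-in for the BaseNLLSCostFunc pattern: a subclass defines
# the residuals, and the cost is always the sum of their squares.
class ToyBaseNLLS:
    def eval_r(self, params):
        raise NotImplementedError

    def eval_cost(self, params):
        r = self.eval_r(params)
        return float(np.dot(r, r))  # sum_i r_i^2

class ToyLinear(ToyBaseNLLS):
    """Residuals for fitting y = p0 + p1*x to fixed data."""
    x = np.array([0.0, 1.0, 2.0])
    y = np.array([1.0, 3.0, 5.0])

    def eval_r(self, params):
        return self.y - (params[0] + params[1] * self.x)

cost = ToyLinear().eval_cost([1.0, 2.0])  # exact fit, so the cost is 0.0
```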
*abstract* eval_r(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc.eval_r)
Calculate residuals used in Least-Squares problems
Parameters
**params** (*list*) – The parameters to calculate residuals for
Returns The residuals for the datapoints at the given parameters
Return type numpy array
hes_cost(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc.hes_cost)
Uses the Hessian of the model to evaluate the Hessian of the cost function, \(\nabla_p^2 F(r(x,y,p))\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Hessians
Returns evaluated Hessian of the cost function
Return type 2D numpy array
jac_cost(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc.jac_cost)
Uses the Jacobian of the model to evaluate the Jacobian of the cost function, \(\nabla_p F(r(x,y,p))\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Jacobians
Returns evaluated Jacobian of the cost function
Return type 1D numpy array
######### fitbenchmarking.cost_func.nlls_cost_func module[](#module-fitbenchmarking.cost_func.nlls_cost_func)
Implements the non-weighted non-linear least squares cost function
*class* fitbenchmarking.cost_func.nlls_cost_func.NLLSCostFunc(*problem*)[](#fitbenchmarking.cost_func.nlls_cost_func.NLLSCostFunc)
Bases: [`fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc`](index.html#fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc)
This defines the non-linear least squares cost function where, given a set of \(n\) data points \((x_i,y_i)\), associated errors \(e_i\),
and a model function \(f(x,p)\), we find the optimal parameters in the least-squares sense by solving:
\[\min_p \sum_{i=1}^n \left(y_i - f(x_i, p)\right)^2\]
where \(p\) is a vector of length \(m\), and we start from a given initial guess for the optimal parameters. More information on non-linear least squares cost functions can be found
[here](https://en.wikipedia.org/wiki/Non-linear_least_squares).
eval_r(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.nlls_cost_func.NLLSCostFunc.eval_r)
Calculate the residuals, \(y_i - f(x_i, p)\)
Parameters
**params** (*list*) – The parameters, \(p\), to calculate residuals for
Returns The residuals for the datapoints at the given parameters
Return type numpy array
hes_res(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.nlls_cost_func.NLLSCostFunc.hes_res)
Uses the Hessian of the model to evaluate the Hessian of the cost function residual, \(\nabla_p^2 r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Hessians
Returns evaluated Hessian and Jacobian of the residual at each x, y pair
Return type tuple (list of 2D numpy arrays, list of 1D numpy arrays)
jac_res(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.nlls_cost_func.NLLSCostFunc.jac_res)
Uses the Jacobian of the model to evaluate the Jacobian of the cost function residual, \(\nabla_p r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Jacobians
Returns evaluated Jacobian of the residual at each x, y pair
Return type a list of 1D numpy arrays
######### fitbenchmarking.cost_func.poisson_cost_func module[](#module-fitbenchmarking.cost_func.poisson_cost_func)
Implements a Poisson deviance cost function based on Mantid’s:
<https://docs.mantidproject.org/nightly/fitting/fitcostfunctions/Poisson.html>
*class* fitbenchmarking.cost_func.poisson_cost_func.PoissonCostFunc(*problem*)[](#fitbenchmarking.cost_func.poisson_cost_func.PoissonCostFunc)
Bases: [`fitbenchmarking.cost_func.base_cost_func.CostFunc`](index.html#fitbenchmarking.cost_func.base_cost_func.CostFunc)
This defines the Poisson deviance cost-function where,
given the set of \(n\) data points \((x_i, y_i)\),
and a model function \(f(x,p)\), we find the optimal parameters in the Poisson deviance sense by solving:
\[\min_p \sum_{i=1}^n
\left( y_i \left(\log{y_i} - \log{f(x_i, p)} \right)
- \left( y_i - f(x_i, p) \right) \right)\]
where \(p\) is a vector of length \(m\), and we start from a given initial guess for the optimal parameters.
This cost function is intended for positive values.
This cost function is not a least squares problem and as such will not work with least squares minimizers. Please use algorithm_type to select general solvers.
See options docs ([Fitting Options](index.html#fitting-option)) for information on how to do this.
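A minimal sketch of the deviance formula above, using invented data for strictly positive values; note that the deviance is zero exactly when the model matches the data, and grows with any mismatch:

```python
import numpy as np

# Poisson deviance: sum_i [ y_i*(log y_i - log f_i) - (y_i - f_i) ].
# Valid only for strictly positive y and f, hence the validation below.
def poisson_deviance(y, fx):
    return float(np.sum(y * (np.log(y) - np.log(fx)) - (y - fx)))

y = np.array([2.0, 5.0, 9.0])
fx = np.array([2.0, 5.0, 9.0])          # a perfect model prediction...
perfect = poisson_deviance(y, fx)       # ...gives zero deviance

worse = poisson_deviance(y, fx + 1.0)   # any mismatch increases it
```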
eval_cost(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.poisson_cost_func.PoissonCostFunc.eval_cost)
Evaluate the Poisson deviance cost function
Parameters
* **params** (*list*) – The parameters to calculate residuals for
* **x** (*np.array* *(**optional**)*) – The x values to evaluate at. Default is self.problem.data_x
* **y** (*np.array* *(**optional**)*) – The y values to evaluate at. Default is self.problem.data_y
Returns evaluated cost function
Return type float
hes_cost(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.poisson_cost_func.PoissonCostFunc.hes_cost)
Uses the Hessian of the model to evaluate the Hessian of the cost function, \(\nabla_p^2 F(r(x,y,p))\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Hessians
Returns evaluated Hessian of the cost function
Return type 2D numpy array
hes_res(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.poisson_cost_func.PoissonCostFunc.hes_res)
Uses the Hessian of the model to evaluate the Hessian of the cost function residual, \(\nabla_p^2 r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Hessians
Returns evaluated Hessian and Jacobian of the residual at each x, y pair
Return type tuple (list of 2D numpy arrays, list of 1D numpy arrays)
jac_cost(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.poisson_cost_func.PoissonCostFunc.jac_cost)
Uses the Jacobian of the model to evaluate the Jacobian of the cost function, \(\nabla_p F(r(x,y,p))\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Jacobians
Returns evaluated Jacobian of the cost function
Return type 1D numpy array
jac_res(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.poisson_cost_func.PoissonCostFunc.jac_res)
Uses the Jacobian of the model to evaluate the Jacobian of the cost function residual, \(\nabla_p r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Jacobians
Returns evaluated Jacobian of the residual at each x, y pair
Return type a list of 1D numpy arrays
validate_problem()[](#fitbenchmarking.cost_func.poisson_cost_func.PoissonCostFunc.validate_problem)
Validate the problem for the Poisson Cost Function.
Poisson involves a log so will fail on negative inputs.
Raises:
IncompatibleCostFunctionError: When the problem has negative values.
######### fitbenchmarking.cost_func.weighted_nlls_cost_func module[](#module-fitbenchmarking.cost_func.weighted_nlls_cost_func)
Implements the weighted non-linear least squares cost function
*class* fitbenchmarking.cost_func.weighted_nlls_cost_func.WeightedNLLSCostFunc(*problem*)[](#fitbenchmarking.cost_func.weighted_nlls_cost_func.WeightedNLLSCostFunc)
Bases: [`fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc`](index.html#fitbenchmarking.cost_func.nlls_base_cost_func.BaseNLLSCostFunc)
This defines the weighted non-linear least squares cost function where,
given a set of \(n\) data points \((x_i,y_i)\), associated errors
\(e_i\), and a model function \(f(x,p)\), we find the optimal parameters in the weighted least-squares sense by solving:
\[\min_p \sum_{i=1}^n
\left(\frac{y_i - f(x_i, p)}{e_i}\right)^2\]
where \(p\) is a vector of length \(m\), and we start from a given initial guess for the optimal parameters. More information on non-linear least squares cost functions can be found
[here](https://en.wikipedia.org/wiki/Non-linear_least_squares).
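A quick illustration of the weighting, with invented data: dividing each residual by its error \(e_i\) means points with small error bars dominate the cost:

```python
import numpy as np

# Weighted residuals (y_i - f(x_i, p)) / e_i. The model values fx are
# invented for illustration; in practice they come from problem.eval_model.
y = np.array([1.0, 2.0, 3.0])
fx = np.array([1.1, 2.1, 3.1])      # model values at the same x
e = np.array([0.1, 1.0, 10.0])      # per-point uncertainties

r = (y - fx) / e                    # [-1.0, -0.1, -0.01] (approximately)
cost = float(np.sum(r**2))          # first point contributes almost all of it
```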
eval_r(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.weighted_nlls_cost_func.WeightedNLLSCostFunc.eval_r)
Calculate the residuals, \(\frac{y_i - f(x_i, p)}{e_i}\)
Parameters
**params** (*list*) – The parameters, \(p\), to calculate residuals for
Returns The residuals for the data points at the given parameters
Return type numpy array
hes_res(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.weighted_nlls_cost_func.WeightedNLLSCostFunc.hes_res)
Uses the Hessian of the model to evaluate the Hessian of the cost function residual, \(\nabla_p^2 r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Hessians
Returns evaluated Hessian and Jacobian of the residual at each x, y pair
Return type tuple (list of 2D numpy arrays, list of 1D numpy arrays)
jac_res(*params*, ***kwargs*)[](#fitbenchmarking.cost_func.weighted_nlls_cost_func.WeightedNLLSCostFunc.jac_res)
Uses the Jacobian of the model to evaluate the Jacobian of the cost function residual, \(\nabla_p r(x,y,p)\), at the given parameters.
Parameters
**params** (*list*) – The parameters at which to calculate Jacobians
Returns evaluated Jacobian of the residual at each x, y pair
Return type a list of 1D numpy arrays
######## Module contents[](#module-fitbenchmarking.cost_func)
####### fitbenchmarking.hessian package[](#fitbenchmarking-hessian-package)
######## Submodules[](#submodules)
######### fitbenchmarking.hessian.analytic_hessian module[](#module-fitbenchmarking.hessian.analytic_hessian)
Module which acts as an analytic Hessian calculator
*class* fitbenchmarking.hessian.analytic_hessian.Analytic(*problem*, *jacobian*)[](#fitbenchmarking.hessian.analytic_hessian.Analytic)
Bases: [`fitbenchmarking.hessian.base_hessian.Hessian`](index.html#fitbenchmarking.hessian.base_hessian.Hessian)
Class to apply an analytic Hessian
eval(*params*, ***kwargs*)[](#fitbenchmarking.hessian.analytic_hessian.Analytic.eval)
Evaluates Hessian of problem.eval_model, returning the value \(\nabla^2_p f(x, p)\)
Parameters
**params** (*list*) – The parameter values to find the Hessian at
Returns Approximation of the Hessian
Return type 3D numpy array
name() → str[](#fitbenchmarking.hessian.analytic_hessian.Analytic.name)
Get a name for the current status of the hessian.
Returns A unique name for this hessian/method combination
Return type str
######### fitbenchmarking.hessian.base_hessian module[](#module-fitbenchmarking.hessian.base_hessian)
Implements the base class for the Hessian.
*class* fitbenchmarking.hessian.base_hessian.Hessian(*problem*, *jacobian*)[](#fitbenchmarking.hessian.base_hessian.Hessian)
Bases: `object`
Base class for Hessian.
INCOMPATIBLE_PROBLEMS *= {}*[](#fitbenchmarking.hessian.base_hessian.Hessian.INCOMPATIBLE_PROBLEMS)
*abstract* eval(*params*, ***kwargs*)[](#fitbenchmarking.hessian.base_hessian.Hessian.eval)
Evaluates Hessian of the model
Parameters
**params** (*list*) – The parameter values to find the Hessian at
Returns Computed Hessian
Return type 3D numpy array
*property* method[](#fitbenchmarking.hessian.base_hessian.Hessian.method)
Utility function to get the numerical method
Returns the name of the numerical method
Return type str
name() → str[](#fitbenchmarking.hessian.base_hessian.Hessian.name)
Get a name for the current status of the hessian.
Returns A unique name for this hessian/method combination
Return type str
######### fitbenchmarking.hessian.hessian_factory module[](#module-fitbenchmarking.hessian.hessian_factory)
This file contains a factory implementation for the Hessians.
This is used to manage the imports and reduce effort in adding new Hessian methods.
fitbenchmarking.hessian.hessian_factory.create_hessian(*hes_method*)[](#fitbenchmarking.hessian.hessian_factory.create_hessian)
Create a Hessian class.
Parameters
**hes_method** (*str*) – Type of Hessian selected from options
Returns Hessian class for the problem
Return type fitbenchmarking.hessian.base_hessian.Hessian subclass
######### fitbenchmarking.hessian.numdifftools_hessian module[](#fitbenchmarking-hessian-numdifftools-hessian-module)
######### fitbenchmarking.hessian.scipy_hessian module[](#module-fitbenchmarking.hessian.scipy_hessian)
Module which calculates SciPy finite difference approximations
*class* fitbenchmarking.hessian.scipy_hessian.Scipy(*problem*, *jacobian*)[](#fitbenchmarking.hessian.scipy_hessian.Scipy)
Bases: [`fitbenchmarking.hessian.base_hessian.Hessian`](index.html#fitbenchmarking.hessian.base_hessian.Hessian)
Implements SciPy finite difference approximations to the derivative
INCOMPATIBLE_PROBLEMS *= {'cs': ['mantid']}*[](#fitbenchmarking.hessian.scipy_hessian.Scipy.INCOMPATIBLE_PROBLEMS)
eval(*params*, ***kwargs*)[](#fitbenchmarking.hessian.scipy_hessian.Scipy.eval)
Evaluates Hessian of problem.eval_model, returning the value \(\nabla^2_p f(x, p)\)
Parameters
**params** (*list*) – The parameter values to find the Hessian at
Returns Approximation of the Hessian
Return type 3D numpy array
######## Module contents[](#module-fitbenchmarking.hessian)
####### fitbenchmarking.jacobian package[](#fitbenchmarking-jacobian-package)
######## Submodules[](#submodules)
######### fitbenchmarking.jacobian.analytic_jacobian module[](#module-fitbenchmarking.jacobian.analytic_jacobian)
Module which acts as an analytic Jacobian calculator
*class* fitbenchmarking.jacobian.analytic_jacobian.Analytic(*problem*)[](#fitbenchmarking.jacobian.analytic_jacobian.Analytic)
Bases: [`fitbenchmarking.jacobian.base_jacobian.Jacobian`](index.html#fitbenchmarking.jacobian.base_jacobian.Jacobian)
Class to apply an analytical Jacobian
eval(*params*, ***kwargs*)[](#fitbenchmarking.jacobian.analytic_jacobian.Analytic.eval)
Evaluates Jacobian of problem.eval_model
Parameters
**params** (*list*) – The parameter values to find the Jacobian at
Returns Approximation of the Jacobian
Return type numpy array
name() → str[](#fitbenchmarking.jacobian.analytic_jacobian.Analytic.name)
Get a name for the current status of the jacobian.
Returns A unique name for this jacobian/method combination
Return type str
######### fitbenchmarking.jacobian.base_jacobian module[](#module-fitbenchmarking.jacobian.base_jacobian)
Implements the base class for the Jacobian.
*class* fitbenchmarking.jacobian.base_jacobian.Jacobian(*problem*)[](#fitbenchmarking.jacobian.base_jacobian.Jacobian)
Bases: `object`
Base class for Jacobian.
INCOMPATIBLE_PROBLEMS *= {}*[](#fitbenchmarking.jacobian.base_jacobian.Jacobian.INCOMPATIBLE_PROBLEMS)
*abstract* eval(*params*, ***kwargs*)[](#fitbenchmarking.jacobian.base_jacobian.Jacobian.eval)
Evaluates Jacobian of the model, \(\nabla_p f(x,p)\),
at the point given by the parameters.
Parameters
**params** (*list*) – The parameter values at which to evaluate the Jacobian
Returns Computed Jacobian
Return type numpy array
*property* method[](#fitbenchmarking.jacobian.base_jacobian.Jacobian.method)
Utility function to get the numerical method
Returns the name of the numerical method
Return type str
name() → str[](#fitbenchmarking.jacobian.base_jacobian.Jacobian.name)
Get a name for the current status of the jacobian.
Returns A unique name for this jacobian/method combination
Return type str
######### fitbenchmarking.jacobian.default_jacobian module[](#module-fitbenchmarking.jacobian.default_jacobian)
Module which uses the minimizer’s default jacobian
*class* fitbenchmarking.jacobian.default_jacobian.Default(*problem*)[](#fitbenchmarking.jacobian.default_jacobian.Default)
Bases: [`fitbenchmarking.jacobian.scipy_jacobian.Scipy`](index.html#fitbenchmarking.jacobian.scipy_jacobian.Scipy)
Use the minimizer’s jacobian/derivative approximation
eval(*params*, ***kwargs*)[](#fitbenchmarking.jacobian.default_jacobian.Default.eval)
Evaluates Jacobian of problem.eval_model
Parameters
**params** (*list*) – The parameter values to find the Jacobian at
Returns Approximation of the Jacobian
Return type numpy array
name() → str[](#fitbenchmarking.jacobian.default_jacobian.Default.name)
Get a name for the current status of the jacobian.
Returns A unique name for this jacobian/method combination
Return type str
######### fitbenchmarking.jacobian.jacobian_factory module[](#module-fitbenchmarking.jacobian.jacobian_factory)
This file contains a factory implementation for the Jacobians.
This is used to manage the imports and reduce effort in adding new Jacobian methods.
fitbenchmarking.jacobian.jacobian_factory.create_jacobian(*jac_method*)[](#fitbenchmarking.jacobian.jacobian_factory.create_jacobian)
Create a Jacobian class.
Parameters
**jac_method** (*str*) – Type of Jacobian selected from options
Returns Jacobian class for the problem
Return type fitbenchmarking.jacobian.base_jacobian.Jacobian subclass
######### fitbenchmarking.jacobian.numdifftools_jacobian module[](#fitbenchmarking-jacobian-numdifftools-jacobian-module)
######### fitbenchmarking.jacobian.scipy_jacobian module[](#module-fitbenchmarking.jacobian.scipy_jacobian)
Module which calculates SciPy finite difference approximations
*class* fitbenchmarking.jacobian.scipy_jacobian.Scipy(*problem*)[](#fitbenchmarking.jacobian.scipy_jacobian.Scipy)
Bases: [`fitbenchmarking.jacobian.base_jacobian.Jacobian`](index.html#fitbenchmarking.jacobian.base_jacobian.Jacobian)
Implements SciPy finite difference approximations to the derivative
INCOMPATIBLE_PROBLEMS *= {'cs': ['mantid']}*[](#fitbenchmarking.jacobian.scipy_jacobian.Scipy.INCOMPATIBLE_PROBLEMS)
eval(*params*, ***kwargs*)[](#fitbenchmarking.jacobian.scipy_jacobian.Scipy.eval)
Evaluates Jacobian of problem.eval_model
Parameters
**params** (*list*) – The parameter values to find the Jacobian at
Returns Approximation of the Jacobian
Return type numpy array
######## Module contents[](#module-fitbenchmarking.jacobian)
####### fitbenchmarking.parsing package[](#fitbenchmarking-parsing-package)
######## Submodules[](#submodules)
######### fitbenchmarking.parsing.base_parser module[](#module-fitbenchmarking.parsing.base_parser)
Implements the base Parser as a Context Manager.
*class* fitbenchmarking.parsing.base_parser.Parser(*filename*, *options*)[](#fitbenchmarking.parsing.base_parser.Parser)
Bases: `object`
Base abstract class for a parser.
Further parsers should inherit from this and override the abstract parse()
method.
*abstract* parse()[](#fitbenchmarking.parsing.base_parser.Parser.parse)
Parse the file into a FittingProblem.
Returns The parsed problem
Return type
[FittingProblem](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem)
######### fitbenchmarking.parsing.cutest_parser module[](#module-fitbenchmarking.parsing.cutest_parser)
This file calls the pycutest interface for SIF data
*class* fitbenchmarking.parsing.cutest_parser.CutestParser(*filename*, *options*)[](#fitbenchmarking.parsing.cutest_parser.CutestParser)
Bases: [`fitbenchmarking.parsing.base_parser.Parser`](index.html#fitbenchmarking.parsing.base_parser.Parser)
Use the pycutest interface to parse SIF files
This parser has some quirks.
Due to the complex nature of SIF files, we utilise pycutest to generate the function. This function returns the residual r(f,x,y,e) := (f(x) - y)/e for some parameters x.
For consistency we require the value to be f(x, p).
To avoid having to convert back from the residual in the evaluation, we store the data separately and set y to 0.0, and e to 1.0 for all datapoints.
We then accommodate the missing x argument by caching x against an associated function f_x(p).
As such the function is defined by: f(x, p) := r(f_x,p,0,1)
If a new x is passed, we write a new file and parse it to generate a function. We define x to be new if `not np.isclose(x, cached_x)` for each
`cached_x` that has already been stored.
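The "new x" test described above can be sketched as follows; `is_new_x` is a hypothetical helper written for illustration, not part of the CutestParser API:

```python
import numpy as np

# An x array counts as new only if it is not close to any x already cached;
# a new x triggers writing and parsing a fresh SIF file.
def is_new_x(x, cached_xs):
    return not any(
        c.shape == x.shape and np.allclose(x, c) for c in cached_xs
    )

cache = [np.array([1.0, 2.0, 3.0])]
seen = is_new_x(np.array([1.0, 2.0, 3.0]), cache)   # already cached
fresh = is_new_x(np.array([1.0, 2.0, 4.0]), cache)  # new: reparse needed
```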
parse()[](#fitbenchmarking.parsing.cutest_parser.CutestParser.parse)
Get data into a Fitting Problem via cutest.
Returns The fully parsed fitting problem
Return type
[fitbenchmarking.parsing.fitting_problem.FittingProblem](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem)
######### fitbenchmarking.parsing.fitbenchmark_parser module[](#module-fitbenchmarking.parsing.fitbenchmark_parser)
This file implements a parser for the Fitbenchmark data format.
*class* fitbenchmarking.parsing.fitbenchmark_parser.FitbenchmarkParser(*filename*, *options*)[](#fitbenchmarking.parsing.fitbenchmark_parser.FitbenchmarkParser)
Bases: [`fitbenchmarking.parsing.base_parser.Parser`](index.html#fitbenchmarking.parsing.base_parser.Parser)
Parser for the native FitBenchmarking problem definition (FitBenchmark)
file.
parse() → [fitbenchmarking.parsing.fitting_problem.FittingProblem](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem)[](#fitbenchmarking.parsing.fitbenchmark_parser.FitbenchmarkParser.parse)
Parse the Fitbenchmark problem file into a Fitting Problem.
Returns The fully parsed fitting problem
Return type
[fitbenchmarking.parsing.fitting_problem.FittingProblem](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem)
######### fitbenchmarking.parsing.fitting_problem module[](#module-fitbenchmarking.parsing.fitting_problem)
Implements the FittingProblem class, this will be the object that inputs are parsed into before being passed to the controllers
*class* fitbenchmarking.parsing.fitting_problem.FittingProblem(*options*)[](#fitbenchmarking.parsing.fitting_problem.FittingProblem)
Bases: `object`
Definition of a fitting problem, which will be populated by a parser from a problem definition file.
Once populated, this should include the data, the function and any other additional requirements from the data.
additional_info[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.additional_info)
*dict*
Container for software specific information.
This should be avoided if possible.
correct_data()[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.correct_data)
Strip data that overruns the start and end x_range,
and approximate errors if not given.
Modifications happen on member variables.
data_e[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.data_e)
*numpy array* The errors or weights
data_x[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.data_x)
*numpy array* The x-data
data_y[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.data_y)
*numpy array* The y-data
description[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.description)
*string* Description of the fitting problem
end_x[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.end_x)
*float* The end of the range to fit model data over
(if different from entire range)
equation[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.equation)
*string* Equation (function or model) to fit against data
eval_model(*params*, ***kwargs*)[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.eval_model)
Function evaluation method
Parameters
**params** (*list*) – parameter value(s)
Returns data values evaluated from the function of the problem
Return type numpy array
format[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.format)
*string* Name of the problem definition type (e.g., ‘cutest’)
function[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.function)
Callable function
get_function_params(*params*)[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.get_function_params)
Return the function definition in a string format for output
Parameters
**params** (*list*) – The parameters to use in the function string
Returns Representation of the function example format: ‘b1 * (b2+x) | b1=-2.0, b2=50.0’
Return type string
hessian[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.hessian)
Callable function for the Hessian
ini_y(*parameter_set=0*)[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.ini_y)
Return the result of evaluating the problem at the initial parameters for plotting.
Parameters
**parameter_set** (*int**,* *optional*) – The initial parameters to use, defaults to 0
Returns The initial estimates
Return type numpy.ndarray
jacobian[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.jacobian)
Callable function for the Jacobian
multifit[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.multifit)
*bool* Used to check if a problem is using multifit.
multivariate[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.multivariate)
*bool*
Whether the function has been wrapped to reduce the dimension of x on function calls
name[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.name)
*string* Name (title) of the fitting problem
*property* param_names[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.param_names)
Utility function to get the parameter names
Returns the names of the parameters
Return type list of str
plot_scale[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.plot_scale)
*string* The plot scale for the y and x data
set_value_ranges(*value_ranges*)[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.set_value_ranges)
Format parameter bounds before passing them to controllers,
so that self.value_ranges is a list of tuples containing the lower and upper bounds (lb, ub) for each parameter in the problem
Parameters
**value_ranges** (*dict*) –
dictionary of bounded parameter names with lower and upper bound values, e.g.
`{p1_name: [p1_min, p1_max], ...}`
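The dict-to-tuple conversion described above can be sketched as follows. This is an illustrative sketch, not the library's implementation; the `(-inf, inf)` default for unbounded parameters is an assumption.

```python
def to_bound_tuples(value_ranges, param_names):
    """Return one (lb, ub) tuple per parameter, in parameter order.

    Parameters absent from value_ranges are assumed unbounded.
    """
    inf = float("inf")
    return [tuple(value_ranges.get(name, (-inf, inf))) for name in param_names]

bounds = to_bound_tuples({"b1": [0.0, 10.0]}, ["b1", "b2"])
# bounds == [(0.0, 10.0), (-inf, inf)]
```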
sorted_index[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.sorted_index)
*numpy array* The index for sorting the data (used in plotting)
start_x[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.start_x)
*float* The start of the range to fit model data over
(if different from entire range)
starting_values*: list*[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.starting_values)
*list of dict*
Starting values of the fitting parameters
e.g.
`[{p1_name: p1_val1, p2_name: p2_val1, ...},
{p1_name: p1_val2, ...}, ...]`
value_ranges[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.value_ranges)
*list*
Lower and upper bounds for each parameter in the problem
e.g.
`[(p1_min, p1_max), (p2_min, p2_max),...]`
verify()[](#fitbenchmarking.parsing.fitting_problem.FittingProblem.verify)
Basic check that minimal set of attributes have been set.
Raise FittingProblemError if object is not properly initialised.
fitbenchmarking.parsing.fitting_problem.correct_data(*x*, *y*, *e*, *startx*, *endx*, *use_errors*)[](#fitbenchmarking.parsing.fitting_problem.correct_data)
Strip data that overruns the start and end x_range,
and approximate errors if not given.
Parameters
* **x** (*np.array*) – x data
* **y** (*np.array*) – y data
* **e** (*np.array*) – error data
* **startx** (*float*) – minimum x value
* **endx** (*float*) – maximum x value
* **use_errors** (*bool*) – whether errors should be added if not present
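A hedged sketch of the trimming behaviour described above. The error approximation used here (square root of |y|, floored at 1) is an assumption for illustration, not necessarily what fitbenchmarking does.

```python
import math

def correct_data_sketch(x, y, e, startx, endx, use_errors):
    """Keep only points with startx <= x <= endx; approximate errors if absent."""
    keep = [i for i, xi in enumerate(x) if startx <= xi <= endx]
    x2 = [x[i] for i in keep]
    y2 = [y[i] for i in keep]
    if not use_errors:
        return x2, y2, None
    if e is None:
        # Assumed approximation: Poisson-style sqrt(|y|), floored at 1
        e2 = [max(math.sqrt(abs(yi)), 1.0) for yi in y2]
    else:
        e2 = [e[i] for i in keep]
    return x2, y2, e2
```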
######### fitbenchmarking.parsing.horace_parser module[](#module-fitbenchmarking.parsing.horace_parser)
This file implements a parser for Horace problem sets.
*class* fitbenchmarking.parsing.horace_parser.HoraceParser(**args*, ***kwargs*)[](#fitbenchmarking.parsing.horace_parser.HoraceParser)
Bases: [`fitbenchmarking.parsing.fitbenchmark_parser.FitbenchmarkParser`](index.html#fitbenchmarking.parsing.fitbenchmark_parser.FitbenchmarkParser)
Parser for a Horace problem definition file.
fitbenchmarking.parsing.horace_parser.horace_on()[](#fitbenchmarking.parsing.horace_parser.horace_on)
Turning Horace and SpinW on in matlab
######### fitbenchmarking.parsing.ivp_parser module[](#module-fitbenchmarking.parsing.ivp_parser)
This file implements a parser for the IVP data format.
*class* fitbenchmarking.parsing.ivp_parser.IVPParser(*filename*, *options*)[](#fitbenchmarking.parsing.ivp_parser.IVPParser)
Bases: [`fitbenchmarking.parsing.fitbenchmark_parser.FitbenchmarkParser`](index.html#fitbenchmarking.parsing.fitbenchmark_parser.FitbenchmarkParser)
Parser for a IVP problem definition file.
######### fitbenchmarking.parsing.mantid_parser module[](#module-fitbenchmarking.parsing.mantid_parser)
This file implements a parser for the Mantid data format.
*class* fitbenchmarking.parsing.mantid_parser.MantidParser(*filename*, *options*)[](#fitbenchmarking.parsing.mantid_parser.MantidParser)
Bases: [`fitbenchmarking.parsing.fitbenchmark_parser.FitbenchmarkParser`](index.html#fitbenchmarking.parsing.fitbenchmark_parser.FitbenchmarkParser)
Parser for a Mantid problem definition file.
######### fitbenchmarking.parsing.nist_data_functions module[](#module-fitbenchmarking.parsing.nist_data_functions)
Functions that prepare the function specified in NIST problem definition file into the right format for SciPy.
fitbenchmarking.parsing.nist_data_functions.format_function_scipy(*function*)[](#fitbenchmarking.parsing.nist_data_functions.format_function_scipy)
Formats the function string such that it is scipy-ready.
Parameters
**function** (*str*) – The function to be formatted
Returns The formatted function
Return type str
fitbenchmarking.parsing.nist_data_functions.is_int(*value*)[](#fitbenchmarking.parsing.nist_data_functions.is_int)
Checks to see if a value is an integer or not
Parameters
**value** (*str*) – String representation of an equation
Returns Whether or not value is an int
Return type bool
fitbenchmarking.parsing.nist_data_functions.is_safe(*func_str*)[](#fitbenchmarking.parsing.nist_data_functions.is_safe)
Verifies that a string is safe to be passed to exec in the context of an equation.
Parameters
**func_str** (*string*) – The function to be checked
Returns Whether the string is of the expected format for an equation
Return type bool
fitbenchmarking.parsing.nist_data_functions.nist_func_definition(*function*, *param_names*)[](#fitbenchmarking.parsing.nist_data_functions.nist_func_definition)
Process a function, plus the different sets of starting values specified in the NIST problem definition file, into a callable
Parameters
* **function** (*str*) – function string as defined in a NIST problem definition file
* **param_names** (*list*) – names of the parameters in the function
Returns callable function
Return type callable
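The idea of binding a function string to named parameters can be sketched as below. This is a simplified illustration, not the library's code; in practice the string should first pass a safety check such as `is_safe` above before being handed to `eval`.

```python
import math

def make_callable(function_str, param_names):
    """Turn a NIST-style function string into f(x, *params)."""
    def f(x, *params):
        # Restricted evaluation environment: x, the parameters, and a few maths names
        env = {"x": x, "exp": math.exp, "cos": math.cos, "sin": math.sin}
        env.update(dict(zip(param_names, params)))
        return eval(function_str, {"__builtins__": {}}, env)
    return f

f = make_callable("b1 * (b2 + x)", ["b1", "b2"])
# f(1.0, -2.0, 50.0) == -102.0
```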
fitbenchmarking.parsing.nist_data_functions.nist_hessian_definition(*hessian*, *param_names*)[](#fitbenchmarking.parsing.nist_data_functions.nist_hessian_definition)
Process a Hessian string into a callable
Parameters
* **hessian** (*str*) – Hessian string as defined in the data files for the corresponding NIST problem definition file
* **param_names** (*list*) – names of the parameters in the function
Returns callable function
Return type callable
fitbenchmarking.parsing.nist_data_functions.nist_jacobian_definition(*jacobian*, *param_names*)[](#fitbenchmarking.parsing.nist_data_functions.nist_jacobian_definition)
Process a Jacobian, plus the different sets of starting values specified in the NIST problem definition file, into a callable
Parameters
* **jacobian** (*str*) – Jacobian string as defined in the data files for the corresponding NIST problem definition file
* **param_names** (*list*) – names of the parameters in the function
Returns callable function
Return type callable
######### fitbenchmarking.parsing.nist_parser module[](#module-fitbenchmarking.parsing.nist_parser)
This file implements a parser for the NIST style data format.
*class* fitbenchmarking.parsing.nist_parser.NISTParser(*filename*, *options*)[](#fitbenchmarking.parsing.nist_parser.NISTParser)
Bases: [`fitbenchmarking.parsing.base_parser.Parser`](index.html#fitbenchmarking.parsing.base_parser.Parser)
Parser for the NIST problem definition file.
parse()[](#fitbenchmarking.parsing.nist_parser.NISTParser.parse)
Parse the NIST problem file into a Fitting Problem.
Returns The fully parsed fitting problem
Return type
[fitbenchmarking.parsing.fitting_problem.FittingProblem](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem)
######### fitbenchmarking.parsing.parser_factory module[](#module-fitbenchmarking.parsing.parser_factory)
This file contains a factory implementation for the parsers.
This is used to manage the imports and reduce effort in adding new parsers.
*class* fitbenchmarking.parsing.parser_factory.ParserFactory[](#fitbenchmarking.parsing.parser_factory.ParserFactory)
Bases: `object`
A factory for creating parsers.
This has the capability to select the correct parser, import it, and generate an instance of it.
Parsers generated from this must be a subclass of base_parser.Parser
*static* create_parser(*filename*)[](#fitbenchmarking.parsing.parser_factory.ParserFactory.create_parser)
Inspect the input and create a parser that matches the required file.
Parameters
**filename** (*string*) – The path to the file to be parsed
Returns Parser for the problem
Return type fitbenchmarking.parsing.base_parser.Parser subclass
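The factory pattern described above can be sketched as extension-based dispatch. The class names and the extension map here are hypothetical stand-ins, not the library's actual dispatch logic (which inspects the file contents as well).

```python
import os

class Parser:
    """Minimal stand-in for base_parser.Parser."""
    def __init__(self, filename):
        self.filename = filename

class NISTParser(Parser): ...
class IVPParser(Parser): ...

# Hypothetical extension -> parser mapping
_PARSERS = {".dat": NISTParser, ".txt": IVPParser}

def create_parser(filename):
    """Select, construct, and return a parser matching the file."""
    ext = os.path.splitext(filename)[1]
    try:
        return _PARSERS[ext](filename)
    except KeyError:
        raise ValueError(f"No parser registered for '{ext}' files")
```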
fitbenchmarking.parsing.parser_factory.parse_problem_file(*prob_file*, *options*)[](#fitbenchmarking.parsing.parser_factory.parse_problem_file)
Loads the problem file into a fitting problem using the correct parser.
Parameters
* **prob_file** (*string*) – path to the problem file
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – all the information specified by the user
Returns problem object with fitting information
Return type
[fitbenchmarking.parsing.fitting_problem.FittingProblem](index.html#fitbenchmarking.parsing.fitting_problem.FittingProblem)
######### fitbenchmarking.parsing.sasview_parser module[](#fitbenchmarking-parsing-sasview-parser-module)
######## Module contents[](#module-fitbenchmarking.parsing)
####### fitbenchmarking.results_processing package[](#fitbenchmarking-results-processing-package)
######## Submodules[](#submodules)
######### fitbenchmarking.results_processing.acc_table module[](#module-fitbenchmarking.results_processing.acc_table)
Accuracy table
*class* fitbenchmarking.results_processing.acc_table.AccTable(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)[](#fitbenchmarking.results_processing.acc_table.AccTable)
Bases: [`fitbenchmarking.results_processing.base_table.Table`](index.html#fitbenchmarking.results_processing.base_table.Table)
The accuracy results are calculated by evaluating the cost function with the fitted parameters.
get_value(*result*)[](#fitbenchmarking.results_processing.acc_table.AccTable.get_value)
Gets the main value to be reported in the tables for a given result
Note that the first value (relative accuracy) will be used in the default colour handling.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to generate the values for.
Returns The normalised chi sq with respect to the smallest accuracy value and absolute accuracy for the result.
Return type tuple(float, float)
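The relative/absolute pair returned above amounts to normalising each accuracy by the best (smallest) value in its row, which can be sketched as:

```python
def acc_values(accuracies):
    """Return (relative, absolute) pairs, relative to the best accuracy."""
    best = min(accuracies)
    return [(a / best, a) for a in accuracies]

acc_values([2.0, 1.0, 4.0])
# -> [(2.0, 2.0), (1.0, 1.0), (4.0, 4.0)]
```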
######### fitbenchmarking.results_processing.base_table module[](#module-fitbenchmarking.results_processing.base_table)
Implements the base class for the tables.
*class* fitbenchmarking.results_processing.base_table.Table(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)[](#fitbenchmarking.results_processing.base_table.Table)
Bases: `object`
Base class for the FitBenchmarking HTML and text output tables.
When inheriting from this, it may be useful to override the following functions as required:
* get_value
* display_str
* get_error_str
* get_link_str
create_pandas_data_frame(*html=False*)[](#fitbenchmarking.results_processing.base_table.Table.create_pandas_data_frame)
Creates a pandas data frame of results
Parameters
**html** (*bool. defaults to False*) – Whether to make the dataframe for html or plain text
Returns DataFrame with string representations of results
Return type pandas.DataFrame
create_results_dict()[](#fitbenchmarking.results_processing.base_table.Table.create_results_dict)
Generate a dictionary of results lists with rows and columns as the key and list elements respectively.
This is used to create HTML and csv tables.
This is stored in self.sorted_results
display_str(*value*)[](#fitbenchmarking.results_processing.base_table.Table.display_str)
Converts a value generated by
[`get_value()`](#fitbenchmarking.results_processing.base_table.Table.get_value)
into a string representation to be used in the tables.
Base class implementation takes the relative and absolute values and uses `self.output_string_type`
as a template for the string format. This can be overridden to adequately display the results.
Parameters
**value** (*tuple*) – Relative and absolute values
Returns string representation of the value for display in the table.
Return type str
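The templated formatting described above can be sketched as below. The template mirrors the "abs (rel)" style seen in FitBenchmarking tables, but the actual `output_string_type` values are set by the options, so this string is illustrative only.

```python
def display_str(value, template="{1:.4g} ({0:.4g})"):
    """Format a (relative, absolute) value pair for table display."""
    rel, abs_val = value
    return template.format(rel, abs_val)

display_str((1.5, 0.0123))   # -> '0.0123 (1.5)'
```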
*property* file_path[](#fitbenchmarking.results_processing.base_table.Table.file_path)
Getter function for the path to the table
Returns path to table
Return type str
get_colour_df(*like_df=None*)[](#fitbenchmarking.results_processing.base_table.Table.get_colour_df)
Generate a dataframe of colours to add to the html rendering.
If like_df is passed this will use the column and row indexes of that dataframe.
Parameters
**like_df** (*pandas.DataFrame*) – The dataframe to copy headings from. Defaults to None.
Returns A dataframe with colourings as strings
Return type pandas.DataFrame
get_colours_for_row(*results*)[](#fitbenchmarking.results_processing.base_table.Table.get_colours_for_row)
Get the colours as strings for the given results in the table.
The base class implementation, for example,
uses the first value from self.get_value and
`colour_map`, `colour_ulim` and `cmap_range` within
[`Options`](index.html#fitbenchmarking.utils.options.Options).
Parameters
**results** (*list**[*[*fitbenchmarking.utils.fitbm_result.FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]*) – Results to get the colours for.
Returns The colour to use for each cell in the list and Foreground colours for the text as html rgb strings e.g. ‘rgb(255, 255, 255)’
Return type tuple[list[str], list[str]]
get_description()[](#fitbenchmarking.results_processing.base_table.Table.get_description)
Generates table description from class docstrings and converts them into html
Returns Dictionary containing table descriptions
Return type dict
*static* get_error_str(*result*, *error_template='[{}]'*)[](#fitbenchmarking.results_processing.base_table.Table.get_error_str)
Get the error string for a result based on error_template This can be overridden if tables require different error formatting.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to get the error string for
Returns A string representation of the error
Return type str
get_hyperlink(*result*, *val_str*, *text_col*)[](#fitbenchmarking.results_processing.base_table.Table.get_hyperlink)
Generates the hyperlink for a given result
Parameters
* **result** (*fitbenchmarking.utils.fitbm_result.FittingResult*) – The result to generate a string for
* **val_str** (*str*) – Preprocessed val_str to display
* **text_col** (*str*) – Foreground colour for the text as html rgb strings e.g. ‘rgb(255, 255, 255)’
Returns The hyperlink representation.
Return type str
get_link_str(*result*)[](#fitbenchmarking.results_processing.base_table.Table.get_link_str)
Get the link as a string for the result.
This can be overridden if tables require different links.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to get the link for
Returns The link to go to when the cell is selected
Return type string
get_str_dict(*html=False*)[](#fitbenchmarking.results_processing.base_table.Table.get_str_dict)
Create a dictionary with the table values as strings for display.
Returns The dictionary of strings for the table
Return type dict[list[str]]
get_str_result(*result*, *text_col=None*, *html=False*)[](#fitbenchmarking.results_processing.base_table.Table.get_str_result)
Given a single result, generate the string to display in this table.
The html flag can be used to switch between a plain text and html format.
This is intended to be easily extensible by overriding the following functions:
* get_value
* display_str
* get_error_str
* get_link_str
If you find yourself overriding this, please consider if changes could be made to allow future tables to benefit.
Parameters
* **result** (*fitbenchmarking.utils.fitbm_result.FittingResult*) – The result to generate a string for
* **text_col** (*list**[**str**]*) – Foreground colours for the text as html rgb strings e.g. ‘rgb(255, 255, 255)’
* **html** (*bool*) – Flag to control whether to generate a html string or plain text. Defaults to False.
Returns The string representation.
Return type str
*abstract* get_value(*result*)[](#fitbenchmarking.results_processing.base_table.Table.get_value)
Gets the main value to be reported in the tables for a given result
If more than one value is returned please note that the first value will be used in the default colour handling.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to generate the values for.
Returns The value to convert to a string for the tables
Return type tuple(float)
minimizer_dropdown_html() → str[](#fitbenchmarking.results_processing.base_table.Table.minimizer_dropdown_html)
Generates the HTML for a dropdown checklist of minimizers.
Returns HTML for a dropdown checklist of minimizers.
Return type str
problem_dropdown_html() → str[](#fitbenchmarking.results_processing.base_table.Table.problem_dropdown_html)
Generates the HTML for a dropdown checklist of problem sets.
Returns HTML for a dropdown checklist of problem sets.
Return type str
save_colourbar(*fig_dir*, *n_divs=100*, *sz_in=(3, 0.8)*) → str[](#fitbenchmarking.results_processing.base_table.Table.save_colourbar)
Generates a png of a labelled colourbar using matplotlib.
Parameters
* **fig_dir** (*str*) – path to figures directory
* **n_divs** (*int*) – number of divisions of shading in colourbar
* **sz_in** (*list**[**float**]* *- 2 elements*) – dimensions of png in inches [width, height]
Returns The relative path to the colourbar image.
Return type str
*property* table_title[](#fitbenchmarking.results_processing.base_table.Table.table_title)
Getter function for table name if self._table_title is None
Returns name of table
Return type str
to_csv_file()[](#fitbenchmarking.results_processing.base_table.Table.to_csv_file)
Generate a plain text version of the table
Returns Plain text table output
Return type str
to_html()[](#fitbenchmarking.results_processing.base_table.Table.to_html)
Generate a html version of the table.
Returns HTML table output
Return type str
*static* vals_to_colour(*vals*, *cmap*, *cmap_range*, *log_ulim*)[](#fitbenchmarking.results_processing.base_table.Table.vals_to_colour)
Converts an array of values to a list of hexadecimal colour strings using logarithmic sampling from a matplotlib colourmap according to relative value.
Parameters
* **vals** (*list**[**float**]*) – values in the range [0, 1] to convert to colour strings
* **cmap** (*matplotlib colourmap object*) – matplotlib colourmap
* **cmap_range** (*list**[**float**]**,* *2 elements*) – values in range [0, 1] for colourmap cropping
* **log_ulim** (*float*) – log10 of worst shading cutoff value
Returns Colours as hex strings for each input value and Foreground colours for the text as html rgb strings e.g. ‘rgb(255, 255, 255)’
Return type tuple[list[str], list[str]]
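A stdlib-only sketch of the logarithmic sampling described above: each value in [0, 1] is mapped to a position via log10, clipped, rescaled into `cmap_range`, then blended between two endpoint colours standing in for a matplotlib colourmap. The endpoint-blend is an assumption made to avoid the matplotlib dependency.

```python
import math

def vals_to_colour_sketch(vals, lo_rgb, hi_rgb, cmap_range, log_ulim):
    """Map values in [0, 1] to hex colours by logarithmic position.

    log_ulim is log10 of the worst-case cutoff (a negative number),
    so t == 0 at v == 1 and t == 1 at the cutoff.
    """
    colours = []
    for v in vals:
        t = math.log10(max(v, 1e-12)) / log_ulim
        t = min(max(t, 0.0), 1.0)
        t = cmap_range[0] + t * (cmap_range[1] - cmap_range[0])
        rgb = [round(a + t * (b - a)) for a, b in zip(lo_rgb, hi_rgb)]
        colours.append("#%02x%02x%02x" % tuple(rgb))
    return colours
```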
fitbenchmarking.results_processing.base_table.background_to_text(*background_col*, *contrast_threshold*)[](#fitbenchmarking.results_processing.base_table.background_to_text)
Determines the foreground colour for the table elements. The optimum colour is selected from two options, white and black. White is selected if its contrast ratio is greater than the contrast threshold.
If the contrast ratio of white text does not meet the requirement, the text colour which provides the greater contrast ratio is selected.
Parameters
* **background_col** – a list of r,g,b values [0, 255]
* **contrast_threshold** (*float*) – the threshold value [0, 21]
Returns Foreground colour for the text as html rgb strings e.g. ‘rgb(255, 255, 255)’
Return type str
fitbenchmarking.results_processing.base_table.calculate_contrast(*background*, *foreground*)[](#fitbenchmarking.results_processing.base_table.calculate_contrast)
Calculates the contrast ratio between the background and foreground colors. Visit link for more info:
<https://www.w3.org/WAI/WCAG21/Understanding/contrast-minimum.html#dfn-contrast-ratio>
Parameters
* **background** – list of r, g, b values representing background [0, 255]
* **foreground** (*list**[**int**]*) – list of r, g, b values representing foreground [0, 255]
Returns the contrast ratio [0, 21]
Return type float
fitbenchmarking.results_processing.base_table.calculate_luminance(*rgb*)[](#fitbenchmarking.results_processing.base_table.calculate_luminance)
Calculates the relative luminance. Visit link for more info:
<https://www.w3.org/WAI/WCAG21/Understanding/contrast-minimum.html#dfn-relative-luminance>
Parameters
**rgb** – a list containing r, g, b values [0, 255]
Returns the luminance [0, 1]
Return type float
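The two functions above implement the WCAG 2.1 formulas, which can be written out as a self-contained sketch:

```python
def luminance(rgb):
    """WCAG 2.1 relative luminance of an [r, g, b] colour in [0, 255]."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(background, foreground):
    """WCAG 2.1 contrast ratio, in [1, 21]."""
    l1, l2 = sorted((luminance(background), luminance(foreground)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

contrast([255, 255, 255], [0, 0, 0])   # -> 21.0
```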
######### fitbenchmarking.results_processing.compare_table module[](#module-fitbenchmarking.results_processing.compare_table)
compare table
*class* fitbenchmarking.results_processing.compare_table.CompareTable(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)[](#fitbenchmarking.results_processing.compare_table.CompareTable)
Bases: [`fitbenchmarking.results_processing.base_table.Table`](index.html#fitbenchmarking.results_processing.base_table.Table)
The combined results show the accuracy in the first line of the cell and the runtime on the second line of the cell.
display_str(*value*)[](#fitbenchmarking.results_processing.compare_table.CompareTable.display_str)
Combine the accuracy and runtime values into a string representation.
Parameters
**value** (*list**[**list**[**float**]**]*) – Relative and absolute values for accuracy and runtime
[[acc_rel, runtime_rel], [acc_abs, runtime_abs]]
Returns string representation of the value for display in the table.
Return type str
get_hyperlink(*result*, *val_str*, *text_col*)[](#fitbenchmarking.results_processing.compare_table.CompareTable.get_hyperlink)
Generates the hyperlink for a given result
Parameters
* **result** (*fitbenchmarking.utils.fitbm_result.FittingResult*) – The result to generate a string for
* **val_str** (*str*) – Preprocessed val_str to display
* **text_col** (*str*) – Foreground colour for the text as html rgb strings e.g. ‘rgb(255, 255, 255)’
Returns The hyperlink representation.
Return type str
get_value(*result*)[](#fitbenchmarking.results_processing.compare_table.CompareTable.get_value)
Gets the main value to be reported in the tables for a given result
Note that the first value (relative chi_sq) will be used in the default colour handling.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to generate the values for.
Returns The normalised chi sq and runtime with respect to the smallest chi_sq and runtime respectively as well as the absolute values for both chi_sq and runtime for the result.
[[acc_rel, runtime_rel], [acc_abs, runtime_abs]]
Return type list[list[float]]
vals_to_colour(*vals*, **args*)[](#fitbenchmarking.results_processing.compare_table.CompareTable.vals_to_colour)
Override vals_to_colour to allow it to run for both accuracy and runtime.
Parameters
**vals** (*list**[**list**[**float**,* *float**]**]*) – The relative values to get the colours for
Returns The background colours for the acc and runtime values and The text colours for the acc and runtime values
Return type tuple[zip[list[str], list[str]], zip[list[str], list[str]]]
######### fitbenchmarking.results_processing.emissions_table module[](#module-fitbenchmarking.results_processing.emissions_table)
Emissions table
*class* fitbenchmarking.results_processing.emissions_table.EmissionsTable(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)[](#fitbenchmarking.results_processing.emissions_table.EmissionsTable)
Bases: [`fitbenchmarking.results_processing.base_table.Table`](index.html#fitbenchmarking.results_processing.base_table.Table)
The emissions (kg CO2eq) results are calculated from an average (over num_runs) using the
[codecarbon](https://mlco2.github.io/codecarbon/index.html) module.
num_runs is set in [FitBenchmarking Options](index.html#options).
Configuration for codecarbon is set in `.codecarbon.config`.
Please note that for tracking CPU power usage on Windows or Mac,
`Intel Power Gadget` should also be installed. For more information,
see the Methodology section of the [codecarbon docs](https://mlco2.github.io/codecarbon/methodology.html#cpu).
get_value(*result*)[](#fitbenchmarking.results_processing.emissions_table.EmissionsTable.get_value)
Gets the main value to be reported in the tables for a given result
Note that the first value (relative emissions) will be used in the default colour handling.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to generate the values for.
Returns The normalised emissions with respect to the smallest emissions value and absolute emissions for the result.
Return type tuple(float, float)
######### fitbenchmarking.results_processing.fitting_report module[](#module-fitbenchmarking.results_processing.fitting_report)
Set up and build the fitting reports for various types of problems.
fitbenchmarking.results_processing.fitting_report.create(*results*, *support_pages_dir*, *options*)[](#fitbenchmarking.results_processing.fitting_report.create)
Iterate through problem results and create a fitting report html page for each.
Parameters
* **results** (*list**[*[*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]*) – results object
* **support_pages_dir** (*str*) – directory in which the results are saved
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The options used in the fitting problem and plotting
fitbenchmarking.results_processing.fitting_report.create_prob_group(*result*, *support_pages_dir*, *options*)[](#fitbenchmarking.results_processing.fitting_report.create_prob_group)
Creates a fitting report containing figures and other details about the fit for a problem.
A link to the fitting report is stored in the results object.
Parameters
* **result** ([*fitbenchmarking.utils.fitbm_result.FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result for a specific benchmark problem-minimizer-etc combination
* **support_pages_dir** (*str*) – directory to store the support pages in
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The options used in the fitting problem and plotting
fitbenchmarking.results_processing.fitting_report.get_figure_paths(*result*)[](#fitbenchmarking.results_processing.fitting_report.get_figure_paths)
Get the paths to the figures used in the support page.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to get the figures for
Returns the paths to the required figures
Return type tuple(str, str)
######### fitbenchmarking.results_processing.local_min_table module[](#module-fitbenchmarking.results_processing.local_min_table)
Local minimum table
*class* fitbenchmarking.results_processing.local_min_table.LocalMinTable(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)[](#fitbenchmarking.results_processing.local_min_table.LocalMinTable)
Bases: [`fitbenchmarking.results_processing.base_table.Table`](index.html#fitbenchmarking.results_processing.base_table.Table)
The local min results shows a `True` or `False` value together with
\(\frac{|| J^T r||}{||r||}\). The `True` or `False` indicates whether the software finds a minimum with respect to the following criteria:
* \(||r|| \leq\) RES_TOL,
* \(|| J^T r|| \leq\) GRAD_TOL,
* \(\frac{|| J^T r||}{||r||} \leq\) GRAD_TOL,
where \(J\) and \(r\) are the Jacobian and residual of
\(f(x, p)\), respectively. The tolerances can be found in the results object.
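The three convergence tests listed above can be sketched with plain-list linear algebra. The tolerance values below are placeholders for RES_TOL and GRAD_TOL; the real values are stored in the results object.

```python
import math

def is_local_min(r, J, res_tol=1e-8, grad_tol=1e-8):
    """Check the residual/gradient criteria; return (found, ||J^T r|| / ||r||)."""
    norm_r = math.sqrt(sum(ri * ri for ri in r))
    # grad = J^T r
    grad = [sum(J[i][j] * r[i] for i in range(len(r))) for j in range(len(J[0]))]
    norm_g = math.sqrt(sum(gi * gi for gi in grad))
    ratio = norm_g / norm_r if norm_r > 0 else 0.0
    found = norm_r <= res_tol or norm_g <= grad_tol or ratio <= grad_tol
    return found, ratio
```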
display_str(*value*)[](#fitbenchmarking.results_processing.local_min_table.LocalMinTable.display_str)
Combine the boolean value from variable local_min with the normalised residual
Parameters
**value** (*bool**,* *float*) – Whether the minimizer found a local minimizer and the
\(\frac{|| J^T r||}{||r||}\) value
Returns string representation of the value for display in the table.
Return type str
get_description()[](#fitbenchmarking.results_processing.local_min_table.LocalMinTable.get_description)
Generates table description from class docstrings and converts them into html
Returns Dictionary containing table descriptions
Return type dict
*classmethod* get_error_str(*result*, **args*, ***kwargs*)[](#fitbenchmarking.results_processing.local_min_table.LocalMinTable.get_error_str)
Get the error string for a result based on error_template This can be overridden if tables require different error formatting.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to get the error string for
Returns A string representation of the error
Return type str
get_value(*result*)[](#fitbenchmarking.results_processing.local_min_table.LocalMinTable.get_value)
Gets the main value to be reported in the tables for a given result
Note that the first value (relative accuracy) will be used in the default colour handling.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to generate the values for.
Returns Whether the minimizer found a local minimizer (under the tests specified above) and \(\frac{|| J^T r||}{||r||}\)
Return type bool, float
save_colourbar(*fig_dir*, *n_divs=2*, *sz_in=None*) → str[](#fitbenchmarking.results_processing.local_min_table.LocalMinTable.save_colourbar)
Override default save_colourbar as there are only 2 possible divisions of the colour map (true or false).
Parameters
* **fig_dir** (*str*) – path to figures directory
* **n_divs** (*int**,* *Fixed to 2*) – **Unused** number of divisions of shading in colourbar
* **sz_in** (*list**[**float**]* *- 2 elements*) – dimensions of png in inches [width, height]
Returns The relative path to the colourbar image.
Return type str
*static* vals_to_colour(*vals*, *cmap*, *cmap_range*, *log_ulim*)[](#fitbenchmarking.results_processing.local_min_table.LocalMinTable.vals_to_colour)
Converts an array of values to a list of hexadecimal colour strings using sampling from a matplotlib colourmap according to whether a minimum was found.
Set to the bottom of the range if minimum was found, otherwise set to the top of the range.
Parameters
* **vals** (*list**[**float**]*) – values in the range [0, 1] to convert to colour strings
* **cmap** (*matplotlib colourmap object*) – matplotlib colourmap
* **cmap_range** (*list**[**float**]**,* *2 elements*) – values in range [0, 1] for colourmap cropping
* **log_ulim** (*float*) – **Unused** log10 of worst shading cutoff value
Returns Colours as hex strings for each input value and Foreground colours for the text as html rgb strings e.g. ‘rgb(255, 255, 255)’
Return type tuple[list[str], list[str]]
######### fitbenchmarking.results_processing.performance_profiler module[](#module-fitbenchmarking.results_processing.performance_profiler)
Set up performance profiles for both accuracy and runtime tables
fitbenchmarking.results_processing.performance_profiler.create_plot(*ax*, *step_values: list[np.ndarray]*, *solvers: list[str]*)[](#fitbenchmarking.results_processing.performance_profiler.create_plot)
Function to draw the profile on a matplotlib axis
Parameters
* **ax** (*matplotlib.axes.Axes*) – A matplotlib axis to be filled
* **step_values** (*list of np.array**[**float**]*) – a sorted list of the values of the metric being profiled
* **solvers** (*list of strings*) – A list of the labels for the different solvers
fitbenchmarking.results_processing.performance_profiler.plot(*acc*, *runtime*, *fig_dir*)[](#fitbenchmarking.results_processing.performance_profiler.plot)
Function that generates profiler plots
Parameters
* **acc** (*dict*) – acc dictionary containing number of occurrences
* **runtime** (*dict*) – runtime dictionary containing number of occurrences
* **fig_dir** (*str*) – path to directory containing the figures
Returns path to acc and runtime profile graphs
Return type tuple(str, str)
fitbenchmarking.results_processing.performance_profiler.prepare_profile_data(*results*)[](#fitbenchmarking.results_processing.performance_profiler.prepare_profile_data)
Helper function which generates acc and runtime dictionaries which contain the values for each minimizer.
Parameters
**results** (*dict**[**str**,* *dict**[**str**,* *list**[*[*utils.fitbm_result.FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]**]**]*) – The sorted results grouped by row and category
Returns dictionary containing number of occurrences
Return type tuple(dict, dict)
fitbenchmarking.results_processing.performance_profiler.profile(*results*, *fig_dir*)[](#fitbenchmarking.results_processing.performance_profiler.profile)
Function that generates profiler plots
Parameters
* **results** (*dict**[**str**,* *dict**[**str**,* *list**[*[*utils.fitbm_result.FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]**]**]*) – The sorted results grouped by row and category
* **fig_dir** (*str*) – path to directory containing the figures
Returns path to acc and runtime profile graphs
Return type tuple(str, str)
######### fitbenchmarking.results_processing.plots module[](#module-fitbenchmarking.results_processing.plots)
Higher-level functions used for plotting the fit and starting-guess plots.
*class* fitbenchmarking.results_processing.plots.Plot(*best_result*, *options*, *figures_dir*)[](#fitbenchmarking.results_processing.plots.Plot)
Bases: `object`
Class providing plotting functionality.
best_fit_plot_options *= {'color': '#6699ff', 'linestyle': ':', 'linewidth': 3, 'marker': '', 'zorder': 3}*[](#fitbenchmarking.results_processing.plots.Plot.best_fit_plot_options)
data_plot_options *= {'color': 'black', 'label': 'Data', 'linestyle': '', 'linewidth': 1, 'marker': 'x', 'zorder': 0}*[](#fitbenchmarking.results_processing.plots.Plot.data_plot_options)
fit_plot_options *= {'color': '#99ff66', 'linestyle': '-', 'linewidth': 3, 'marker': '', 'zorder': 2}*[](#fitbenchmarking.results_processing.plots.Plot.fit_plot_options)
format_plot()[](#fitbenchmarking.results_processing.plots.Plot.format_plot)
Performs post plot processing to annotate the plot correctly
ini_guess_plot_options *= {'color': '#ff6699', 'label': 'Starting Guess', 'linestyle': '-', 'linewidth': 3, 'marker': '', 'zorder': 1}*[](#fitbenchmarking.results_processing.plots.Plot.ini_guess_plot_options)
plot_best(*result*)[](#fitbenchmarking.results_processing.plots.Plot.plot_best)
Plots the fit along with the data using the “best_fit” style and saves to a file
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to plot
Returns path to the saved file
Return type str
plot_data(*errors*, *plot_options*, *x=None*, *y=None*)[](#fitbenchmarking.results_processing.plots.Plot.plot_data)
Plots the data given
Parameters
* **errors** (*bool*) – whether fit minimizer uses errors
* **plot_options** (*dict*) – Values for style of the data to plot,
for example color and zorder
* **x** (*np.array*) – x values to be plotted
* **y** (*np.array*) – y values to be plotted
plot_fit(*result*)[](#fitbenchmarking.results_processing.plots.Plot.plot_fit)
Updates self.line to show the fit using the passed in params.
If self.line is empty it will create a new line.
Stores the plot in a file
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to plot
Returns path to the saved file
Return type str
plot_initial_guess()[](#fitbenchmarking.results_processing.plots.Plot.plot_initial_guess)
Plots the initial guess along with the data and stores in a file
Returns path to the saved file
Return type str
*classmethod* plot_summary(*categories*, *title*, *options*, *figures_dir*)[](#fitbenchmarking.results_processing.plots.Plot.plot_summary)
Create a comparison plot showing all fits from the results with the best for each category highlighted.
Parameters
* **categories** (*dict**[**str**,* *list**[**FittingResults**]**]*) – The results to plot sorted into colour groups
* **title** (*str*) – A title for the graph
* **options** ([*utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The options for the run
* **figures_dir** (*str*) – The directory to save the figures in
Returns The path to the new plot
Return type str
summary_best_plot_options *= {'linestyle': '-', 'linewidth': 2, 'marker': '', 'zorder': 2}*[](#fitbenchmarking.results_processing.plots.Plot.summary_best_plot_options)
summary_plot_options *= {'alpha': 0.5, 'linestyle': '-', 'linewidth': 1, 'marker': '', 'zorder': 1}*[](#fitbenchmarking.results_processing.plots.Plot.summary_plot_options)
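The class attributes above are plain style dictionaries; a sketch of how a summary plot might combine them with a per-category colour (the `summary_style` helper is hypothetical, the dictionaries mirror those documented above):

```python
# Class-level style dictionaries as documented on Plot above.
summary_plot_options = {"alpha": 0.5, "linestyle": "-", "linewidth": 1,
                        "marker": "", "zorder": 1}
summary_best_plot_options = {"linestyle": "-", "linewidth": 2,
                             "marker": "", "zorder": 2}

def summary_style(is_best, colour):
    """Merge the shared style with a per-category colour; best fits get
    the heavier line and higher zorder so they stand out."""
    base = summary_best_plot_options if is_best else summary_plot_options
    return {**base, "color": colour}

print(summary_style(True, "#ff6699")["linewidth"])  # 2
```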
######### fitbenchmarking.results_processing.problem_summary_page module[](#module-fitbenchmarking.results_processing.problem_summary_page)
Create the summary pages for the best minimizers.
fitbenchmarking.results_processing.problem_summary_page.create(*results*, *best_results*, *support_pages_dir*, *figures_dir*, *options*)[](#fitbenchmarking.results_processing.problem_summary_page.create)
Create the problem summary pages.
Parameters
* **results** (*dict**[**str**,* *dict**[**str**,* *list**[*[*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]**]**]*) – The results to create summary pages for
* **best_results** (*dict**[**str**,* *dict**[**str**,* [*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]**]*) – The best result from each row and category
* **support_pages_dir** (*str*) – directory in which the results are to be saved
* **figures_dir** (*str*) – The directory where figures are stored.
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The options used in the fitting problem and plotting
######### fitbenchmarking.results_processing.runtime_table module[](#module-fitbenchmarking.results_processing.runtime_table)
Runtime table
*class* fitbenchmarking.results_processing.runtime_table.RuntimeTable(*results*, *best_results*, *options*, *group_dir*, *pp_locations*, *table_name*)[](#fitbenchmarking.results_processing.runtime_table.RuntimeTable)
Bases: [`fitbenchmarking.results_processing.base_table.Table`](index.html#fitbenchmarking.results_processing.base_table.Table)
The timing results are calculated from an average (over num_runs) using the
[timeit](https://docs.python.org/3/library/timeit.html) module in python. num_runs is set in [FitBenchmarking Options](index.html#options).
get_value(*result*)[](#fitbenchmarking.results_processing.runtime_table.RuntimeTable.get_value)
Gets the main value to be reported in the tables for a given result
Note that the first value (relative runtime) will be used in the default colour handling.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to generate the values for.
Returns The normalised runtime with respect to the smallest runtime and absolute runtime for the result.
Return type tuple(float, float)
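A minimal sketch of the normalisation `get_value` describes, using a hypothetical helper that returns (normalised, absolute) pairs relative to the fastest run:

```python
def normalised_runtimes(runtimes):
    """Pair each runtime with its value relative to the fastest run,
    mirroring the (normalised, absolute) tuple get_value returns."""
    best = min(runtimes)
    return [(t / best, t) for t in runtimes]

print(normalised_runtimes([2, 1, 4]))  # [(2.0, 2), (1.0, 1), (4.0, 4)]
```

The normalised value (first element) is what the default colour handling uses, so the fastest minimizer in a row always normalises to 1.0.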
######### fitbenchmarking.results_processing.tables module[](#module-fitbenchmarking.results_processing.tables)
Set up and build the results tables.
fitbenchmarking.results_processing.tables.create_results_tables(*options*, *results*, *best_results*, *group_dir*, *fig_dir*, *pp_locations*, *failed_problems*, *unselected_minimzers*)[](#fitbenchmarking.results_processing.tables.create_results_tables)
Saves the results of the fitting to html/csv tables.
Parameters
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The options used in the fitting problem and plotting
* **results** (*dict**[**str**,* *dict**[**str**,* *list**[*[*utils.fitbm_result.FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]**]**]*) – Results grouped by row and category (for colouring)
* **best_results** (*dict**[**str**,* *dict**[**str**,* [*utils.fitbm_result.FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]**]*) – The best results from each row/category
* **group_dir** (*str*) – path to the directory where group results should be stored
* **fig_dir** (*str*) – path to the directory where figures should be stored
* **pp_locations** (*tuple**(**str**,**str**)*) – tuple containing the locations of the performance profiles (acc then runtime)
* **failed_problems** (*list*) – list of failed problems to be reported in the html output
* **unselected_minimzers** (*dict*) – Dictionary containing unselected minimizers based on the algorithm_type option
Returns filepaths to each table e.g {‘acc’: <acc-table-filename>, ‘runtime’: …}
and dictionary of table descriptions
Return type tuple(dict, dict)
fitbenchmarking.results_processing.tables.generate_table(*results*, *best_results*, *options*, *group_dir*, *fig_dir*, *pp_locations*, *table_name*, *suffix*)[](#fitbenchmarking.results_processing.tables.generate_table)
Generate html/csv tables.
Parameters
* **results** (*dict**[**str**,* *dict**[**str**,* *list**[*[*utils.fitbm_result.FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]**]**]*) – Results grouped by row and category (for colouring)
* **best_results** (*dict**[**str**,* *dict**[**str**,* [*utils.fitbm_result.FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*]**]*) – The best results from each row/category
* **options** ([*fitbenchmarking.utils.options.Options*](index.html#fitbenchmarking.utils.options.Options)) – The options used in the fitting problem and plotting
* **group_dir** (*str*) – path to the directory where group results should be stored
* **fig_dir** (*str*) – path to the directory where figures should be stored
* **pp_locations** (*tuple**(**str**,**str**)*) – tuple containing the locations of the performance profiles (acc then runtime)
* **table_name** (*str*) – name of the table
* **suffix** (*str*) – table suffix
Returns (Table object, dict of HTML strings for table and dropdowns, text string of table, path to colourbar)
Return type tuple(Table object, dict{str: str}, str, str)
fitbenchmarking.results_processing.tables.load_table(*table*)[](#fitbenchmarking.results_processing.tables.load_table)
Create and return table object.
Parameters
**table** (*string*) – The name of the table to create a table for
Returns Table class for the problem
Return type fitbenchmarking/results_processing/tables.Table subclass
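A registry-based sketch of the name-to-class lookup; the real `load_table` may resolve table classes differently (e.g. via module discovery), so the mechanism here is illustrative only:

```python
# Hypothetical registry mapping table names to Table subclasses.
class Table: pass
class AccTable(Table): pass
class RuntimeTable(Table): pass

_TABLES = {"acc": AccTable, "runtime": RuntimeTable}

def load_table(name):
    """Return the Table subclass registered under the given name."""
    try:
        return _TABLES[name]
    except KeyError:
        raise ValueError(f"Unknown table type: {name!r}") from None

print(load_table("runtime").__name__)  # RuntimeTable
```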
######## Module contents[](#module-fitbenchmarking.results_processing)
####### fitbenchmarking.utils package[](#fitbenchmarking-utils-package)
######## Submodules[](#submodules)
######### fitbenchmarking.utils.checkpoint module[](#module-fitbenchmarking.utils.checkpoint)
This file implements a checkpoint class to save results as they are generated.
This is also used to read in checkpointed data.
*class* fitbenchmarking.utils.checkpoint.Checkpoint(*options: [Options](index.html#fitbenchmarking.utils.options.Options)*)[](#fitbenchmarking.utils.checkpoint.Checkpoint)
Bases: `object`
A class to build a checkpoint file.
Each run can be added to the file as they finish.
This class must be finalised to create the checkpoint.
add_result(*result: [fitbenchmarking.utils.fitbm_result.FittingResult](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)*)[](#fitbenchmarking.utils.checkpoint.Checkpoint.add_result)
Add the result to the checkpoint file.
Parameters
**result** ([*FittingResult*](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)) – The result to add.
finalise()[](#fitbenchmarking.utils.checkpoint.Checkpoint.finalise)
Add the trailing bracket to validate the json.
finalise_group(*label='benchmark'*, *failed_problems=None*, *unselected_minimizers=None*)[](#fitbenchmarking.utils.checkpoint.Checkpoint.finalise_group)
Combine the problem and results file into the main checkpoint file.
load()[](#fitbenchmarking.utils.checkpoint.Checkpoint.load)
Load fitting results from a checkpoint file along with failed problems and unselected minimizers.
Returns Instantiated fitting results, unselected minimizers, failed problems
Return type Tuple[ dict[str, list[[FittingResult](index.html#fitbenchmarking.utils.fitbm_result.FittingResult)]],
dict,
dict[str, list[str]]]
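The add-then-finalise workflow can be sketched with plain JSON; the record fields and file layout below are assumptions for illustration, not the real checkpoint schema:

```python
import json
import os
import tempfile

class Checkpoint:
    """Sketch of incremental checkpointing: results are appended as they
    arrive, and finalise() closes the JSON so the file becomes valid."""

    def __init__(self, path):
        self.path = path
        self._first = True
        with open(path, "w") as f:
            f.write('{"results": [')

    def add_result(self, result):
        with open(self.path, "a") as f:
            if not self._first:
                f.write(", ")
            f.write(json.dumps(result))
        self._first = False

    def finalise(self):
        # The trailing brackets validate the JSON, as in Checkpoint.finalise.
        with open(self.path, "a") as f:
            f.write("]}")

path = os.path.join(tempfile.mkdtemp(), "checkpoint.json")
cp = Checkpoint(path)
cp.add_result({"problem": "p1", "accuracy": 0.01})
cp.add_result({"problem": "p2", "accuracy": 0.05})
cp.finalise()
with open(path) as f:
    data = json.load(f)
print(len(data["results"]))  # 2
```

Writing incrementally means a crashed run still leaves a recoverable (if unterminated) record of every completed fit.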
######### fitbenchmarking.utils.create_dirs module[](#module-fitbenchmarking.utils.create_dirs)
Utility functions for creating/deleting directories.
fitbenchmarking.utils.create_dirs.css(*results_dir*)[](#fitbenchmarking.utils.create_dirs.css)
Creates a local css directory inside the results directory.
Parameters
**results_dir** (*str*) – path to the results directory
Returns path to the local css directory
Return type str
fitbenchmarking.utils.create_dirs.del_contents_of_dir(*directory*)[](#fitbenchmarking.utils.create_dirs.del_contents_of_dir)
Delete contents of a directory, including other directories.
Parameters
**directory** (*str*) – the target directory
fitbenchmarking.utils.create_dirs.figures(*support_pages_dir*)[](#fitbenchmarking.utils.create_dirs.figures)
Creates the figures directory inside the support_pages directory.
Parameters
**support_pages_dir** (*str*) – path to the support pages directory
Returns path to the figures directory
Return type str
fitbenchmarking.utils.create_dirs.group_results(*results_dir*, *group_name*)[](#fitbenchmarking.utils.create_dirs.group_results)
Creates the results directory for a specific group.
e.g. fitbenchmarking/results/Neutron/
Parameters
* **results_dir** (*str*) – path to directory that holds all the results
* **group_name** (*str*) – name of the problem group
Returns path to the group-specific results directory
Return type str
fitbenchmarking.utils.create_dirs.results(*results_dir*)[](#fitbenchmarking.utils.create_dirs.results)
Creates the results folder in the working directory.
Parameters
**results_dir** (*str*) – path to the results directory
Returns proper path to the results directory
Return type str
fitbenchmarking.utils.create_dirs.support_pages(*group_results_dir*)[](#fitbenchmarking.utils.create_dirs.support_pages)
Creates the support_pages directory in the group results directory.
Parameters
**group_results_dir** (*str*) – path to the group results directory
Returns path to the support_pages directory
Return type str
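These helpers follow a common create-if-missing pattern; a sketch using `os.makedirs` (the `group_results` signature mirrors the function above, but the body is illustrative):

```python
import os
import tempfile

def group_results(results_dir, group_name):
    """Create (if needed) and return the per-group results directory,
    e.g. <results_dir>/Neutron/."""
    path = os.path.join(results_dir, group_name)
    os.makedirs(path, exist_ok=True)  # idempotent across repeated runs
    return path

root = tempfile.mkdtemp()
group = group_results(root, "Neutron")
print(os.path.isdir(group))  # True
```

`exist_ok=True` keeps repeated benchmark runs from failing when the directory tree already exists.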
######### fitbenchmarking.utils.debug module[](#module-fitbenchmarking.utils.debug)
Functions used for debugging and printing information in a readable format.
fitbenchmarking.utils.debug.get_printable_table(*class_name: str*, *class_info: dict*) → str[](#fitbenchmarking.utils.debug.get_printable_table)
Creates and returns a string displaying the class info in a format that is easily read in a table.
Parameters
* **class_name** (*str*) – The name of a class owning the data.
* **class_info** (*dict{str: variant}*) – The data in the class to display.
Returns The class info in a readable format.
Return type str
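A sketch of the kind of output this produces; the exact layout of the real helper may differ:

```python
def get_printable_table(class_name, class_info):
    """Render a class name and its attributes as an aligned
    two-column block (illustrative layout)."""
    width = max(len(k) for k in class_info)
    rows = [f"{k.ljust(width)} | {v}" for k, v in class_info.items()]
    return "\n".join([class_name, "-" * len(class_name), *rows])

print(get_printable_table("Options", {"num_runs": 5, "table_type": "acc"}))
```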
######### fitbenchmarking.utils.exceptions module[](#module-fitbenchmarking.utils.exceptions)
This file holds all FitBenchmarking exceptions, organised by exception id
*exception* fitbenchmarking.utils.exceptions.CheckpointError(*message=''*)[](#fitbenchmarking.utils.exceptions.CheckpointError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates an error occurred during checkpointing.
class_message *= 'An error occured during handling the checkpoint file.'*[](#fitbenchmarking.utils.exceptions.CheckpointError.class_message)
error_code *= 28*[](#fitbenchmarking.utils.exceptions.CheckpointError.error_code)
*exception* fitbenchmarking.utils.exceptions.ControllerAttributeError(*message=''*)[](#fitbenchmarking.utils.exceptions.ControllerAttributeError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates an issue with the attributes within a controller
class_message *= 'Error in the controller attributes.'*[](#fitbenchmarking.utils.exceptions.ControllerAttributeError.class_message)
error_code *= 7*[](#fitbenchmarking.utils.exceptions.ControllerAttributeError.error_code)
*exception* fitbenchmarking.utils.exceptions.CostFuncError(*message=''*)[](#fitbenchmarking.utils.exceptions.CostFuncError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates a problem with the cost function class.
class_message *= 'FitBenchmarking ran with no results'*[](#fitbenchmarking.utils.exceptions.CostFuncError.class_message)
error_code *= 16*[](#fitbenchmarking.utils.exceptions.CostFuncError.error_code)
*exception* fitbenchmarking.utils.exceptions.FilepathTooLongError(*message=''*)[](#fitbenchmarking.utils.exceptions.FilepathTooLongError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates the filepath to save a file to is too long.
class_message *= 'The filepath for saving a file is too long.'*[](#fitbenchmarking.utils.exceptions.FilepathTooLongError.class_message)
error_code *= 25*[](#fitbenchmarking.utils.exceptions.FilepathTooLongError.error_code)
*exception* fitbenchmarking.utils.exceptions.FitBenchmarkException(*message=''*)[](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Bases: `Exception`
The base class for all FitBenchmarking exceptions
To define a new exception, inherit from this and override class_message.
class_message *= 'An unknown exception occurred.'*[](#fitbenchmarking.utils.exceptions.FitBenchmarkException.class_message)
error_code *= 1*[](#fitbenchmarking.utils.exceptions.FitBenchmarkException.error_code)
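The subclass pattern used throughout this module, sketched below; `class_message` and `error_code` mirror the documented attributes, while combining the class and instance messages in `__str__` is an assumption about behaviour:

```python
class FitBenchmarkException(Exception):
    """Base exception: subclasses override class_message and error_code."""
    class_message = "An unknown exception occurred."
    error_code = 1

    def __init__(self, message=""):
        self.message = message
        super().__init__(message)

    def __str__(self):
        # Prepend the class-level description to any instance detail.
        return f"{self.class_message} {self.message}".rstrip()

class ParsingError(FitBenchmarkException):
    class_message = "Could not parse problem."
    error_code = 3

err = ParsingError("bad token on line 4")
print(err.error_code, str(err))
```

Keeping the message and numeric code at class level means every raise site gets consistent reporting for free.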
*exception* fitbenchmarking.utils.exceptions.FittingProblemError(*message=''*)[](#fitbenchmarking.utils.exceptions.FittingProblemError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates a problem with the fitting problem.
class_message *= 'Fitting Problem raised and exception.'*[](#fitbenchmarking.utils.exceptions.FittingProblemError.class_message)
error_code *= 10*[](#fitbenchmarking.utils.exceptions.FittingProblemError.error_code)
*exception* fitbenchmarking.utils.exceptions.IncompatibleCostFunctionError(*message=''*)[](#fitbenchmarking.utils.exceptions.IncompatibleCostFunctionError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates that the cost function is not compatible with the given problem.
class_message *= 'The selected cost function cannot be used with the given problem'*[](#fitbenchmarking.utils.exceptions.IncompatibleCostFunctionError.class_message)
error_code *= 29*[](#fitbenchmarking.utils.exceptions.IncompatibleCostFunctionError.error_code)
*exception* fitbenchmarking.utils.exceptions.IncompatibleHessianError(*message=''*)[](#fitbenchmarking.utils.exceptions.IncompatibleHessianError)
Bases: [`fitbenchmarking.utils.exceptions.ValidationException`](#fitbenchmarking.utils.exceptions.ValidationException)
Indicates that the selected Hessian method is not compatible with selected options/problem set
class_message *= 'The provided Hessian method cannot be used with the selected options or problem set.'*[](#fitbenchmarking.utils.exceptions.IncompatibleHessianError.class_message)
error_code *= 26*[](#fitbenchmarking.utils.exceptions.IncompatibleHessianError.error_code)
*exception* fitbenchmarking.utils.exceptions.IncompatibleJacobianError(*message=''*)[](#fitbenchmarking.utils.exceptions.IncompatibleJacobianError)
Bases: [`fitbenchmarking.utils.exceptions.ValidationException`](#fitbenchmarking.utils.exceptions.ValidationException)
Indicates that the selected jacobian method is not compatible with selected options/problem set
class_message *= 'The provided Jacobian method cannot be used with the selected options or problem set.'*[](#fitbenchmarking.utils.exceptions.IncompatibleJacobianError.class_message)
error_code *= 24*[](#fitbenchmarking.utils.exceptions.IncompatibleJacobianError.error_code)
*exception* fitbenchmarking.utils.exceptions.IncompatibleMinimizerError(*message=''*)[](#fitbenchmarking.utils.exceptions.IncompatibleMinimizerError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates that the selected minimizer is not compatible with selected options/problem set
class_message *= 'Minimizer cannot be used with selected options/problem set'*[](#fitbenchmarking.utils.exceptions.IncompatibleMinimizerError.class_message)
error_code *= 17*[](#fitbenchmarking.utils.exceptions.IncompatibleMinimizerError.error_code)
*exception* fitbenchmarking.utils.exceptions.IncompatibleProblemError(*message=''*)[](#fitbenchmarking.utils.exceptions.IncompatibleProblemError)
Bases: [`fitbenchmarking.utils.exceptions.ValidationException`](#fitbenchmarking.utils.exceptions.ValidationException)
Indicates that the selected problem is not compatible with the selected options.
class_message *= 'The selected software can not be used with the given problem.'*[](#fitbenchmarking.utils.exceptions.IncompatibleProblemError.class_message)
error_code *= 27*[](#fitbenchmarking.utils.exceptions.IncompatibleProblemError.error_code)
*exception* fitbenchmarking.utils.exceptions.IncompatibleTableError(*message=''*)[](#fitbenchmarking.utils.exceptions.IncompatibleTableError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates that selected cost function and table are not compatible
class_message *= 'The table type selected is not compatible with the selected cost function'*[](#fitbenchmarking.utils.exceptions.IncompatibleTableError.class_message)
error_code *= 18*[](#fitbenchmarking.utils.exceptions.IncompatibleTableError.error_code)
*exception* fitbenchmarking.utils.exceptions.IncorrectBoundsError(*message=''*)[](#fitbenchmarking.utils.exceptions.IncorrectBoundsError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates that parameter_ranges have been set incorrectly
class_message *= 'Bounds for this problem are unable to be set, so this problem will be skipped.'*[](#fitbenchmarking.utils.exceptions.IncorrectBoundsError.class_message)
error_code *= 19*[](#fitbenchmarking.utils.exceptions.IncorrectBoundsError.error_code)
*exception* fitbenchmarking.utils.exceptions.MaxRuntimeError(*message=''*)[](#fitbenchmarking.utils.exceptions.MaxRuntimeError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates a minimizer has taken too long to run
class_message *= 'Minimizer runtime exceeded maximum runtime'*[](#fitbenchmarking.utils.exceptions.MaxRuntimeError.class_message)
error_code *= 23*[](#fitbenchmarking.utils.exceptions.MaxRuntimeError.error_code)
*exception* fitbenchmarking.utils.exceptions.MissingBoundsError(*message=''*)[](#fitbenchmarking.utils.exceptions.MissingBoundsError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates that parameter_ranges have not been set but are required
class_message *= 'Bounds on all parameters are required to use this software.'*[](#fitbenchmarking.utils.exceptions.MissingBoundsError.class_message)
error_code *= 20*[](#fitbenchmarking.utils.exceptions.MissingBoundsError.error_code)
*exception* fitbenchmarking.utils.exceptions.MissingSoftwareError(*message=''*)[](#fitbenchmarking.utils.exceptions.MissingSoftwareError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates that the requirements for a software package are not available.
class_message *= 'Missing dependencies for fit.'*[](#fitbenchmarking.utils.exceptions.MissingSoftwareError.class_message)
error_code *= 5*[](#fitbenchmarking.utils.exceptions.MissingSoftwareError.error_code)
*exception* fitbenchmarking.utils.exceptions.NoAnalyticJacobian(*message=''*)[](#fitbenchmarking.utils.exceptions.NoAnalyticJacobian)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates when no Jacobian data files can be found
class_message *= 'Could not find Jacobian data files'*[](#fitbenchmarking.utils.exceptions.NoAnalyticJacobian.class_message)
error_code *= 12*[](#fitbenchmarking.utils.exceptions.NoAnalyticJacobian.error_code)
*exception* fitbenchmarking.utils.exceptions.NoControllerError(*message=''*)[](#fitbenchmarking.utils.exceptions.NoControllerError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates a controller could not be found
class_message *= 'Could not find controller.'*[](#fitbenchmarking.utils.exceptions.NoControllerError.class_message)
error_code *= 6*[](#fitbenchmarking.utils.exceptions.NoControllerError.error_code)
*exception* fitbenchmarking.utils.exceptions.NoDataError(*message=''*)[](#fitbenchmarking.utils.exceptions.NoDataError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates that no data could be found.
class_message *= 'No data found.'*[](#fitbenchmarking.utils.exceptions.NoDataError.class_message)
error_code *= 8*[](#fitbenchmarking.utils.exceptions.NoDataError.error_code)
*exception* fitbenchmarking.utils.exceptions.NoHessianError(*message=''*)[](#fitbenchmarking.utils.exceptions.NoHessianError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates a problem with the Hessian import.
class_message *= 'Could not find Hessian class'*[](#fitbenchmarking.utils.exceptions.NoHessianError.class_message)
error_code *= 22*[](#fitbenchmarking.utils.exceptions.NoHessianError.error_code)
*exception* fitbenchmarking.utils.exceptions.NoJacobianError(*message=''*)[](#fitbenchmarking.utils.exceptions.NoJacobianError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates a problem with the Jacobian import.
class_message *= 'Could not find Jacobian class'*[](#fitbenchmarking.utils.exceptions.NoJacobianError.class_message)
error_code *= 11*[](#fitbenchmarking.utils.exceptions.NoJacobianError.error_code)
*exception* fitbenchmarking.utils.exceptions.NoParserError(*message=''*)[](#fitbenchmarking.utils.exceptions.NoParserError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates a parser could not be found.
class_message *= 'Could not find parser.'*[](#fitbenchmarking.utils.exceptions.NoParserError.class_message)
error_code *= 4*[](#fitbenchmarking.utils.exceptions.NoParserError.error_code)
*exception* fitbenchmarking.utils.exceptions.NoResultsError(*message=''*)[](#fitbenchmarking.utils.exceptions.NoResultsError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates a problem with the fitting problem.
class_message *= 'FitBenchmarking ran with no results'*[](#fitbenchmarking.utils.exceptions.NoResultsError.class_message)
error_code *= 14*[](#fitbenchmarking.utils.exceptions.NoResultsError.error_code)
*exception* fitbenchmarking.utils.exceptions.OptionsError(*message=''*)[](#fitbenchmarking.utils.exceptions.OptionsError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates an error during processing options.
class_message *= 'Failed to process options.'*[](#fitbenchmarking.utils.exceptions.OptionsError.class_message)
error_code *= 2*[](#fitbenchmarking.utils.exceptions.OptionsError.error_code)
*exception* fitbenchmarking.utils.exceptions.ParsingError(*message=''*)[](#fitbenchmarking.utils.exceptions.ParsingError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates an error during parsing.
class_message *= 'Could not parse problem.'*[](#fitbenchmarking.utils.exceptions.ParsingError.class_message)
error_code *= 3*[](#fitbenchmarking.utils.exceptions.ParsingError.error_code)
*exception* fitbenchmarking.utils.exceptions.PlottingError(*message=''*)[](#fitbenchmarking.utils.exceptions.PlottingError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates an error during plotting results
class_message *= 'An error occurred during plotting.'*[](#fitbenchmarking.utils.exceptions.PlottingError.class_message)
error_code *= 21*[](#fitbenchmarking.utils.exceptions.PlottingError.error_code)
*exception* fitbenchmarking.utils.exceptions.UnknownMinimizerError(*message=''*)[](#fitbenchmarking.utils.exceptions.UnknownMinimizerError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates that the controller does not support a given minimizer given the current “algorithm_type” option set.
class_message *= 'Minimizer cannot be run with Controller with current "algorithm_type" option set.'*[](#fitbenchmarking.utils.exceptions.UnknownMinimizerError.class_message)
error_code *= 9*[](#fitbenchmarking.utils.exceptions.UnknownMinimizerError.error_code)
*exception* fitbenchmarking.utils.exceptions.UnknownTableError(*message=''*)[](#fitbenchmarking.utils.exceptions.UnknownTableError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates a problem with the fitting problem.
class_message *= 'Set table option could not be found'*[](#fitbenchmarking.utils.exceptions.UnknownTableError.class_message)
error_code *= 13*[](#fitbenchmarking.utils.exceptions.UnknownTableError.error_code)
*exception* fitbenchmarking.utils.exceptions.UnsupportedMinimizerError(*message=''*)[](#fitbenchmarking.utils.exceptions.UnsupportedMinimizerError)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
Indicates that the controller does not support a given minimizer.
class_message *= 'FitBenchmarking ran with no results'*[](#fitbenchmarking.utils.exceptions.UnsupportedMinimizerError.class_message)
error_code *= 15*[](#fitbenchmarking.utils.exceptions.UnsupportedMinimizerError.error_code)
*exception* fitbenchmarking.utils.exceptions.ValidationException(*message=''*)[](#fitbenchmarking.utils.exceptions.ValidationException)
Bases: [`fitbenchmarking.utils.exceptions.FitBenchmarkException`](#fitbenchmarking.utils.exceptions.FitBenchmarkException)
This should be subclassed to indicate any validation errors that preclude a combination from running.
class_message *= 'An error occured while verifying controller.'*[](#fitbenchmarking.utils.exceptions.ValidationException.class_message)
######### fitbenchmarking.utils.fitbm_result module[](#module-fitbenchmarking.utils.fitbm_result)
FitBenchmarking results object
*class* fitbenchmarking.utils.fitbm_result.FittingResult(*controller: [Controller](index.html#fitbenchmarking.controllers.base_controller.Controller)*, *accuracy: float | list[float] = inf*, *runtimes: float | list[float] = inf*, *emissions: float = inf*, *dataset: Optional[int] = None*)[](#fitbenchmarking.utils.fitbm_result.FittingResult)
Bases: `object`
Minimal definition of a class to hold results from a fitting problem test.
init_blank()[](#fitbenchmarking.utils.fitbm_result.FittingResult.init_blank)
Initialise a new blank version of the class with the required placeholder values not set during standard initialisation.
modified_minimizer_name(*with_software: bool = False*) → str[](#fitbenchmarking.utils.fitbm_result.FittingResult.modified_minimizer_name)
Get a minimizer name which contains jacobian and hessian information.
Optionally also include the software.
Parameters
**with_software** (*bool**,* *optional*) – Add software to the name, defaults to False
Returns A name for the result combination
Return type str
*property* norm_acc[](#fitbenchmarking.utils.fitbm_result.FittingResult.norm_acc)
Getting function for norm_acc attribute
Returns normalised accuracy value
Return type float
*property* norm_emissions[](#fitbenchmarking.utils.fitbm_result.FittingResult.norm_emissions)
Getting function for norm_emissions attribute
Returns normalised emissions value
Return type float
*property* norm_runtime[](#fitbenchmarking.utils.fitbm_result.FittingResult.norm_runtime)
Getting function for norm_runtime attribute
Returns normalised runtime value
Return type float
sanitised_min_name(*with_software=False*)[](#fitbenchmarking.utils.fitbm_result.FittingResult.sanitised_min_name)
Sanitise the modified minimizer name into one which can be used as a filename.
Returns sanitised name
Return type str
*property* sanitised_name[](#fitbenchmarking.utils.fitbm_result.FittingResult.sanitised_name)
Sanitise the problem name into one which can be used as a filename.
Returns sanitised name
Return type str
######### fitbenchmarking.utils.log module[](#module-fitbenchmarking.utils.log)
Utility functions to support logging for the fitbenchmarking project.
fitbenchmarking.utils.log.get_logger(*name='fitbenchmarking'*)[](#fitbenchmarking.utils.log.get_logger)
Get the unique logger for the given name.
This is a straight pass-through but will be more intuitive for people who have not used Python logging.
Parameters
**name** (*str**,* *optional*) – Name of the logger to use, defaults to ‘fitbenchmarking’
Returns The named logger
Return type logging.Logger
fitbenchmarking.utils.log.setup_logger(*log_file='./fitbenchmarking.log'*, *name='fitbenchmarking'*, *append=False*, *level='INFO'*)[](#fitbenchmarking.utils.log.setup_logger)
Define the location and style of the log file.
Parameters
* **log_file** (*str**,* *optional*) – path to the log file, defaults to ‘./fitbenchmarking.log’
* **name** (*str**,* *optional*) – The name of the logger to run the setup for,
defaults to fitbenchmarking
* **append** (*bool**,* *optional*) – Whether to append to the log or create a new one,
defaults to False
* **level** (*str**,* *optional*) – The level of error to print, defaults to ‘INFO’
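Both helpers are thin wrappers around Python's standard logging module. A minimal sketch of the pattern they describe (the handler and format details here are assumptions, not fitbenchmarking's actual implementation):

```python
import logging

def setup_logger(log_file="./fitbenchmarking.log", name="fitbenchmarking",
                 append=False, level="INFO"):
    """Sketch of a setup_logger-style helper: attach a file handler to the
    named logger, either appending to or truncating the log file."""
    logger = logging.getLogger(name)
    logger.setLevel(getattr(logging, level))
    handler = logging.FileHandler(log_file, mode="a" if append else "w")
    handler.setFormatter(
        logging.Formatter("[%(asctime)s] %(levelname)s: %(message)s"))
    logger.addHandler(handler)

def get_logger(name="fitbenchmarking"):
    """Straight pass-through to logging.getLogger for the named logger."""
    return logging.getLogger(name)
```

Because `logging.getLogger` returns the same object for a given name, `get_logger("fitbenchmarking")` anywhere in the code picks up the handlers configured once by `setup_logger`.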
######### fitbenchmarking.utils.matlab_engine module[](#module-fitbenchmarking.utils.matlab_engine)
Manage the persistent variables within the MATLAB engine
fitbenchmarking.utils.matlab_engine.add_persistent_matlab_var(*name: str*) → None[](#fitbenchmarking.utils.matlab_engine.add_persistent_matlab_var)
Manage a list of variables to keep when clearing memory.
Parameters
**name** (*str*) – The variable to add to the persistent list
fitbenchmarking.utils.matlab_engine.clear_non_persistent_matlab_vars() → None[](#fitbenchmarking.utils.matlab_engine.clear_non_persistent_matlab_vars)
Clear any non-persistent variables.
That is, any variable that hasn't been added via add_persistent_matlab_var().
fitbenchmarking.utils.matlab_engine.list_persistent_matlab_vars() → list[str][](#fitbenchmarking.utils.matlab_engine.list_persistent_matlab_vars)
Return a list of all persistent variables
######### fitbenchmarking.utils.misc module[](#module-fitbenchmarking.utils.misc)
Miscellaneous functions and utilities used in fitting benchmarking.
fitbenchmarking.utils.misc.get_css(*options*, *working_directory*)[](#fitbenchmarking.utils.misc.get_css)
Returns the path of the local css folder
Parameters
**working_directory** (*string*) – location of current directory
Returns A dictionary containing relative links to the local css directory
Return type dict of strings
fitbenchmarking.utils.misc.get_js(*options*, *working_directory*)[](#fitbenchmarking.utils.misc.get_js)
Returns the path of the local js folder
Parameters
**working_directory** (*string*) – location of current directory
Returns A dictionary containing relative links to the local js directory
Return type dict of strings
fitbenchmarking.utils.misc.get_problem_files(*data_dir*)[](#fitbenchmarking.utils.misc.get_problem_files)
Gets all the problem definition files from the specified problem set directory.
Parameters
**data_dir** (*str*) – directory containing the problems
Returns a list containing paths to the problems, e.g. in NIST we would have
[low_difficulty/file1.txt, …, …]
Return type list of str
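A directory scan of this kind can be sketched with pathlib (the exact file-matching rules are not shown above, so the filter below is an assumption):

```python
from pathlib import Path

def get_problem_files(data_dir):
    """Collect paths to all problem-definition files under data_dir,
    skipping directories and hidden files (assumed filter)."""
    return sorted(str(p) for p in Path(data_dir).rglob("*")
                  if p.is_file() and not p.name.startswith("."))
```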
######### fitbenchmarking.utils.options module[](#module-fitbenchmarking.utils.options)
This file will handle all interaction with the options configuration file.
*class* fitbenchmarking.utils.options.Options(*file_name=None*, *additional_options=None*)[](#fitbenchmarking.utils.options.Options)
Bases: `object`
An options class to store and handle all options for fitbenchmarking
DEFAULTS *= {'FITTING': {'algorithm_type': ['all'], 'cost_func_type': ['weighted_nlls'], 'hes_method': ['default'], 'jac_method': ['scipy'], 'max_runtime': 600, 'num_runs': 5, 'software': ['scipy', 'scipy_ls']}, 'HESSIAN': {'analytic': ['default'], 'default': ['default'], 'numdifftools': ['central'], 'scipy': ['2-point']}, 'JACOBIAN': {'analytic': ['default'], 'default': ['default'], 'numdifftools': ['central'], 'scipy': ['2-point']}, 'LOGGING': {'append': False, 'external_output': 'log_only', 'file_name': 'fitbenchmarking.log', 'level': 'INFO'}, 'MINIMIZERS': {'bumps': ['amoeba', 'lm-bumps', 'newton', 'scipy-leastsq'], 'ceres': ['Levenberg_Marquardt', 'Dogleg', 'BFGS', 'LBFGS', 'steepest_descent', 'Fletcher_Reeves', 'Polak_Ribiere', 'Hestenes_Stiefel'], 'dfo': ['dfogn', 'dfols'], 'gofit': ['multistart'], 'gradient_free': ['HillClimbingOptimizer', 'RepulsingHillClimbingOptimizer', 'SimulatedAnnealingOptimizer', 'RandomSearchOptimizer', 'RandomRestartHillClimbingOptimizer', 'RandomAnnealingOptimizer', 'ParallelTemperingOptimizer', 'ParticleSwarmOptimizer', 'EvolutionStrategyOptimizer'], 'gsl': ['lmsder', 'lmder', 'nmsimplex', 'nmsimplex2', 'conjugate_pr', 'conjugate_fr', 'vector_bfgs', 'vector_bfgs2', 'steepest_descent'], 'horace': ['lm-lsqr'], 'levmar': ['levmar'], 'lmfit': ['powell', 'cobyla', 'slsqp', 'nelder', 'least_squares', 'leastsq', 'newton', 'tnc', 'lbfgsb', 'bfgs', 'cg', 'ampgo'], 'mantid': ['BFGS', 'Conjugate gradient (Fletcher-Reeves imp.)', 'Conjugate gradient (Polak-Ribiere imp.)', 'Damped GaussNewton', 'Levenberg-Marquardt', 'Levenberg-MarquardtMD', 'Simplex', 'SteepestDescent', 'Trust Region'], 'matlab': ['Nelder-Mead Simplex'], 'matlab_curve': ['Levenberg-Marquardt', 'Trust-Region'], 'matlab_opt': ['levenberg-marquardt', 'trust-region-reflective'], 'matlab_stats': ['Levenberg-Marquardt'], 'minuit': ['migrad', 'simplex'], 'nlopt': ['LN_BOBYQA', 'LN_NEWUOA', 'LN_NEWUOA_BOUND', 'LN_PRAXIS', 'LD_SLSQP', 'LD_VAR2', 'LD_VAR1', 'AUGLAG', 'AUGLAG_EQ', 
'LN_NELDERMEAD', 'LN_SBPLX', 'LN_COBYLA', 'LD_CCSAQ', 'LD_MMA', 'LD_TNEWTON_PRECOND_RESTART', 'LD_TNEWTON_PRECOND', 'LD_TNEWTON_RESTART', 'LD_TNEWTON', 'LD_LBFGS'], 'ralfit': ['gn', 'gn_reg', 'hybrid', 'hybrid_reg', 'newton', 'newton_reg'], 'scipy': ['Nelder-Mead', 'Powell', 'CG', 'BFGS', 'Newton-CG', 'L-BFGS-B', 'TNC', 'SLSQP', 'COBYLA'], 'scipy_go': ['differential_evolution', 'dual_annealing'], 'scipy_ls': ['lm-scipy', 'trf', 'dogbox'], 'theseus': ['Levenberg_Marquardt', 'Gauss-Newton']}, 'OUTPUT': {'checkpoint_filename': 'checkpoint.json', 'cmap_range': [0.2, 0.8], 'colour_map': 'magma_r', 'colour_ulim': 100, 'comparison_mode': 'both', 'make_plots': True, 'pbar': True, 'results_browser': True, 'results_dir': 'fitbenchmarking_results', 'run_name': '', 'table_type': ['acc', 'runtime', 'compare', 'local_min', 'emissions']}}*[](#fitbenchmarking.utils.options.Options.DEFAULTS)
DEFAULT_FITTING *= {'algorithm_type': ['all'], 'cost_func_type': ['weighted_nlls'], 'hes_method': ['default'], 'jac_method': ['scipy'], 'max_runtime': 600, 'num_runs': 5, 'software': ['scipy', 'scipy_ls']}*[](#fitbenchmarking.utils.options.Options.DEFAULT_FITTING)
DEFAULT_HESSIAN *= {'analytic': ['default'], 'default': ['default'], 'numdifftools': ['central'], 'scipy': ['2-point']}*[](#fitbenchmarking.utils.options.Options.DEFAULT_HESSIAN)
DEFAULT_JACOBIAN *= {'analytic': ['default'], 'default': ['default'], 'numdifftools': ['central'], 'scipy': ['2-point']}*[](#fitbenchmarking.utils.options.Options.DEFAULT_JACOBIAN)
DEFAULT_LOGGING *= {'append': False, 'external_output': 'log_only', 'file_name': 'fitbenchmarking.log', 'level': 'INFO'}*[](#fitbenchmarking.utils.options.Options.DEFAULT_LOGGING)
DEFAULT_MINIMZERS *= {'bumps': ['amoeba', 'lm-bumps', 'newton', 'scipy-leastsq'], 'ceres': ['Levenberg_Marquardt', 'Dogleg', 'BFGS', 'LBFGS', 'steepest_descent', 'Fletcher_Reeves', 'Polak_Ribiere', 'Hestenes_Stiefel'], 'dfo': ['dfogn', 'dfols'], 'gofit': ['multistart'], 'gradient_free': ['HillClimbingOptimizer', 'RepulsingHillClimbingOptimizer', 'SimulatedAnnealingOptimizer', 'RandomSearchOptimizer', 'RandomRestartHillClimbingOptimizer', 'RandomAnnealingOptimizer', 'ParallelTemperingOptimizer', 'ParticleSwarmOptimizer', 'EvolutionStrategyOptimizer'], 'gsl': ['lmsder', 'lmder', 'nmsimplex', 'nmsimplex2', 'conjugate_pr', 'conjugate_fr', 'vector_bfgs', 'vector_bfgs2', 'steepest_descent'], 'horace': ['lm-lsqr'], 'levmar': ['levmar'], 'lmfit': ['powell', 'cobyla', 'slsqp', 'nelder', 'least_squares', 'leastsq', 'newton', 'tnc', 'lbfgsb', 'bfgs', 'cg', 'ampgo'], 'mantid': ['BFGS', 'Conjugate gradient (Fletcher-Reeves imp.)', 'Conjugate gradient (Polak-Ribiere imp.)', 'Damped GaussNewton', 'Levenberg-Marquardt', 'Levenberg-MarquardtMD', 'Simplex', 'SteepestDescent', 'Trust Region'], 'matlab': ['Nelder-Mead Simplex'], 'matlab_curve': ['Levenberg-Marquardt', 'Trust-Region'], 'matlab_opt': ['levenberg-marquardt', 'trust-region-reflective'], 'matlab_stats': ['Levenberg-Marquardt'], 'minuit': ['migrad', 'simplex'], 'nlopt': ['LN_BOBYQA', 'LN_NEWUOA', 'LN_NEWUOA_BOUND', 'LN_PRAXIS', 'LD_SLSQP', 'LD_VAR2', 'LD_VAR1', 'AUGLAG', 'AUGLAG_EQ', 'LN_NELDERMEAD', 'LN_SBPLX', 'LN_COBYLA', 'LD_CCSAQ', 'LD_MMA', 'LD_TNEWTON_PRECOND_RESTART', 'LD_TNEWTON_PRECOND', 'LD_TNEWTON_RESTART', 'LD_TNEWTON', 'LD_LBFGS'], 'ralfit': ['gn', 'gn_reg', 'hybrid', 'hybrid_reg', 'newton', 'newton_reg'], 'scipy': ['Nelder-Mead', 'Powell', 'CG', 'BFGS', 'Newton-CG', 'L-BFGS-B', 'TNC', 'SLSQP', 'COBYLA'], 'scipy_go': ['differential_evolution', 'dual_annealing'], 'scipy_ls': ['lm-scipy', 'trf', 'dogbox'], 'theseus': ['Levenberg_Marquardt', 
'Gauss-Newton']}*[](#fitbenchmarking.utils.options.Options.DEFAULT_MINIMZERS)
DEFAULT_OUTPUT *= {'checkpoint_filename': 'checkpoint.json', 'cmap_range': [0.2, 0.8], 'colour_map': 'magma_r', 'colour_ulim': 100, 'comparison_mode': 'both', 'make_plots': True, 'pbar': True, 'results_browser': True, 'results_dir': 'fitbenchmarking_results', 'run_name': '', 'table_type': ['acc', 'runtime', 'compare', 'local_min', 'emissions']}*[](#fitbenchmarking.utils.options.Options.DEFAULT_OUTPUT)
VALID *= {'FITTING': {'algorithm_type': ['all', 'ls', 'deriv_free', 'general', 'simplex', 'trust_region', 'levenberg-marquardt', 'gauss_newton', 'bfgs', 'conjugate_gradient', 'steepest_descent', 'global_optimization'], 'cost_func_type': ['nlls', 'weighted_nlls', 'hellinger_nlls', 'poisson'], 'hes_method': ['scipy', 'analytic', 'default', 'numdifftools'], 'jac_method': ['scipy', 'analytic', 'default', 'numdifftools'], 'software': ['bumps', 'ceres', 'dfo', 'gofit', 'gradient_free', 'gsl', 'horace', 'levmar', 'lmfit', 'mantid', 'matlab', 'matlab_curve', 'matlab_opt', 'matlab_stats', 'minuit', 'nlopt', 'ralfit', 'scipy', 'scipy_ls', 'scipy_go', 'theseus']}, 'HESSIAN': {'analytic': ['default'], 'default': ['default'], 'numdifftools': ['central', 'complex', 'multicomplex', 'forward', 'backward'], 'scipy': ['2-point', '3-point', 'cs']}, 'JACOBIAN': {'analytic': ['default'], 'default': ['default'], 'numdifftools': ['central', 'complex', 'multicomplex', 'forward', 'backward'], 'scipy': ['2-point', '3-point', 'cs']}, 'LOGGING': {'append': [True, False], 'external_output': ['debug', 'display', 'log_only'], 'level': ['NOTSET', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']}, 'MINIMIZERS': {'bumps': ['amoeba', 'lm-bumps', 'newton', 'de', 'scipy-leastsq', 'dream'], 'ceres': ['Levenberg_Marquardt', 'Dogleg', 'BFGS', 'LBFGS', 'steepest_descent', 'Fletcher_Reeves', 'Polak_Ribiere', 'Hestenes_Stiefel'], 'dfo': ['dfogn', 'dfols'], 'gofit': ['alternating', 'multistart', 'regularisation'], 'gradient_free': ['HillClimbingOptimizer', 'RepulsingHillClimbingOptimizer', 'SimulatedAnnealingOptimizer', 'RandomSearchOptimizer', 'RandomRestartHillClimbingOptimizer', 'RandomAnnealingOptimizer', 'ParallelTemperingOptimizer', 'ParticleSwarmOptimizer', 'EvolutionStrategyOptimizer', 'BayesianOptimizer', 'TreeStructuredParzenEstimators', 'DecisionTreeOptimizer'], 'gsl': ['lmsder', 'lmder', 'nmsimplex', 'nmsimplex2', 'conjugate_pr', 'conjugate_fr', 'vector_bfgs', 'vector_bfgs2', 'steepest_descent'], 
'horace': ['lm-lsqr'], 'levmar': ['levmar'], 'lmfit': ['differential_evolution', 'brute', 'basinhopping', 'powell', 'cobyla', 'slsqp', 'emcee', 'nelder', 'least_squares', 'trust-ncg', 'trust-exact', 'trust-krylov', 'trust-constr', 'dogleg', 'leastsq', 'newton', 'tnc', 'lbfgsb', 'bfgs', 'cg', 'ampgo', 'shgo', 'dual_annealing'], 'mantid': ['BFGS', 'Conjugate gradient (Fletcher-Reeves imp.)', 'Conjugate gradient (Polak-Ribiere imp.)', 'Damped GaussNewton', 'Levenberg-Marquardt', 'Levenberg-MarquardtMD', 'Simplex', 'SteepestDescent', 'Trust Region', 'FABADA'], 'matlab': ['Nelder-Mead Simplex'], 'matlab_curve': ['Levenberg-Marquardt', 'Trust-Region'], 'matlab_opt': ['levenberg-marquardt', 'trust-region-reflective'], 'matlab_stats': ['Levenberg-Marquardt'], 'minuit': ['migrad', 'simplex'], 'nlopt': ['LN_BOBYQA', 'LN_NEWUOA', 'LN_NEWUOA_BOUND', 'LN_PRAXIS', 'LD_SLSQP', 'LD_VAR2', 'LD_VAR1', 'AUGLAG', 'AUGLAG_EQ', 'LN_NELDERMEAD', 'LN_SBPLX', 'LN_COBYLA', 'LD_CCSAQ', 'LD_MMA', 'LD_TNEWTON_PRECOND_RESTART', 'LD_TNEWTON_PRECOND', 'LD_TNEWTON_RESTART', 'LD_TNEWTON', 'LD_LBFGS', 'GN_DIRECT', 'GN_DIRECT_L', 'GN_DIRECT_L_RAND', 'GNL_DIRECT_NOSCAL', 'GN_DIRECT_L_NOSCAL', 'GN_DIRECT_L_RAND_NOSCAL', 'GN_ORIG_DIRECT', 'GN_ORIG_DIRECT_L', 'GN_CRS2_LM', 'G_MLSL_LDS', 'G_MLSL', 'GD_STOGO', 'GD_STOGO_RAND', 'GN_AGS', 'GN_ISRES'], 'ralfit': ['gn', 'gn_reg', 'hybrid', 'hybrid_reg', 'newton', 'newton_reg', 'newton-tensor', 'newton-tensor_reg'], 'scipy': ['Nelder-Mead', 'Powell', 'CG', 'BFGS', 'Newton-CG', 'L-BFGS-B', 'TNC', 'SLSQP', 'COBYLA', 'trust-ncg', 'trust-exact', 'trust-krylov', 'trust-constr', 'dogleg'], 'scipy_go': ['differential_evolution', 'shgo', 'dual_annealing'], 'scipy_ls': ['lm-scipy', 'trf', 'dogbox'], 'theseus': ['Levenberg_Marquardt', 'Gauss-Newton']}, 'OUTPUT': {'append': [True, False], 'colour_map': ['magma', 'inferno', 'plasma', 'viridis', 'cividis', 'twilight', 'twilight_shifted', 'turbo', 'Blues', 'BrBG', 'BuGn', 'BuPu', 'CMRmap', 'GnBu', 'Greens', 'Greys', 'OrRd', 
'Oranges', 'PRGn', 'PiYG', 'PuBu', 'PuBuGn', 'PuOr', 'PuRd', 'Purples', 'RdBu', 'RdGy', 'RdPu', 'RdYlBu', 'RdYlGn', 'Reds', 'Spectral', 'Wistia', 'YlGn', 'YlGnBu', 'YlOrBr', 'YlOrRd', 'afmhot', 'autumn', 'binary', 'bone', 'brg', 'bwr', 'cool', 'coolwarm', 'copper', 'cubehelix', 'flag', 'gist_earth', 'gist_gray', 'gist_heat', 'gist_ncar', 'gist_rainbow', 'gist_stern', 'gist_yarg', 'gnuplot', 'gnuplot2', 'gray', 'hot', 'hsv', 'jet', 'nipy_spectral', 'ocean', 'pink', 'prism', 'rainbow', 'seismic', 'spring', 'summer', 'terrain', 'winter', 'Accent', 'Dark2', 'Paired', 'Pastel1', 'Pastel2', 'Set1', 'Set2', 'Set3', 'tab10', 'tab20', 'tab20b', 'tab20c', 'magma_r', 'inferno_r', 'plasma_r', 'viridis_r', 'cividis_r', 'twilight_r', 'twilight_shifted_r', 'turbo_r', 'Blues_r', 'BrBG_r', 'BuGn_r', 'BuPu_r', 'CMRmap_r', 'GnBu_r', 'Greens_r', 'Greys_r', 'OrRd_r', 'Oranges_r', 'PRGn_r', 'PiYG_r', 'PuBu_r', 'PuBuGn_r', 'PuOr_r', 'PuRd_r', 'Purples_r', 'RdBu_r', 'RdGy_r', 'RdPu_r', 'RdYlBu_r', 'RdYlGn_r', 'Reds_r', 'Spectral_r', 'Wistia_r', 'YlGn_r', 'YlGnBu_r', 'YlOrBr_r', 'YlOrRd_r', 'afmhot_r', 'autumn_r', 'binary_r', 'bone_r', 'brg_r', 'bwr_r', 'cool_r', 'coolwarm_r', 'copper_r', 'cubehelix_r', 'flag_r', 'gist_earth_r', 'gist_gray_r', 'gist_heat_r', 'gist_ncar_r', 'gist_rainbow_r', 'gist_stern_r', 'gist_yarg_r', 'gnuplot_r', 'gnuplot2_r', 'gray_r', 'hot_r', 'hsv_r', 'jet_r', 'nipy_spectral_r', 'ocean_r', 'pink_r', 'prism_r', 'rainbow_r', 'seismic_r', 'spring_r', 'summer_r', 'terrain_r', 'winter_r', 'Accent_r', 'Dark2_r', 'Paired_r', 'Pastel1_r', 'Pastel2_r', 'Set1_r', 'Set2_r', 'Set3_r', 'tab10_r', 'tab20_r', 'tab20b_r', 'tab20c_r'], 'comparison_mode': ['abs', 'rel', 'both'], 'external_output': ['debug', 'display', 'log_only'], 'level': ['NOTSET', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'], 'make_plots': [True, False], 'pbar': [True, False], 'results_browser': [True, False], 'table_type': ['acc', 'runtime', 'compare', 'local_min', 
'emissions']}}*[](#fitbenchmarking.utils.options.Options.VALID)
VALID_FITTING *= {'algorithm_type': ['all', 'ls', 'deriv_free', 'general', 'simplex', 'trust_region', 'levenberg-marquardt', 'gauss_newton', 'bfgs', 'conjugate_gradient', 'steepest_descent', 'global_optimization'], 'cost_func_type': ['nlls', 'weighted_nlls', 'hellinger_nlls', 'poisson'], 'hes_method': ['scipy', 'analytic', 'default', 'numdifftools'], 'jac_method': ['scipy', 'analytic', 'default', 'numdifftools'], 'software': ['bumps', 'ceres', 'dfo', 'gofit', 'gradient_free', 'gsl', 'horace', 'levmar', 'lmfit', 'mantid', 'matlab', 'matlab_curve', 'matlab_opt', 'matlab_stats', 'minuit', 'nlopt', 'ralfit', 'scipy', 'scipy_ls', 'scipy_go', 'theseus']}*[](#fitbenchmarking.utils.options.Options.VALID_FITTING)
VALID_HESSIAN *= {'analytic': ['default'], 'default': ['default'], 'numdifftools': ['central', 'complex', 'multicomplex', 'forward', 'backward'], 'scipy': ['2-point', '3-point', 'cs']}*[](#fitbenchmarking.utils.options.Options.VALID_HESSIAN)
VALID_JACOBIAN *= {'analytic': ['default'], 'default': ['default'], 'numdifftools': ['central', 'complex', 'multicomplex', 'forward', 'backward'], 'scipy': ['2-point', '3-point', 'cs']}*[](#fitbenchmarking.utils.options.Options.VALID_JACOBIAN)
VALID_LOGGING *= {'append': [True, False], 'external_output': ['debug', 'display', 'log_only'], 'level': ['NOTSET', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']}*[](#fitbenchmarking.utils.options.Options.VALID_LOGGING)
VALID_MINIMIZERS *= {'bumps': ['amoeba', 'lm-bumps', 'newton', 'de', 'scipy-leastsq', 'dream'], 'ceres': ['Levenberg_Marquardt', 'Dogleg', 'BFGS', 'LBFGS', 'steepest_descent', 'Fletcher_Reeves', 'Polak_Ribiere', 'Hestenes_Stiefel'], 'dfo': ['dfogn', 'dfols'], 'gofit': ['alternating', 'multistart', 'regularisation'], 'gradient_free': ['HillClimbingOptimizer', 'RepulsingHillClimbingOptimizer', 'SimulatedAnnealingOptimizer', 'RandomSearchOptimizer', 'RandomRestartHillClimbingOptimizer', 'RandomAnnealingOptimizer', 'ParallelTemperingOptimizer', 'ParticleSwarmOptimizer', 'EvolutionStrategyOptimizer', 'BayesianOptimizer', 'TreeStructuredParzenEstimators', 'DecisionTreeOptimizer'], 'gsl': ['lmsder', 'lmder', 'nmsimplex', 'nmsimplex2', 'conjugate_pr', 'conjugate_fr', 'vector_bfgs', 'vector_bfgs2', 'steepest_descent'], 'horace': ['lm-lsqr'], 'levmar': ['levmar'], 'lmfit': ['differential_evolution', 'brute', 'basinhopping', 'powell', 'cobyla', 'slsqp', 'emcee', 'nelder', 'least_squares', 'trust-ncg', 'trust-exact', 'trust-krylov', 'trust-constr', 'dogleg', 'leastsq', 'newton', 'tnc', 'lbfgsb', 'bfgs', 'cg', 'ampgo', 'shgo', 'dual_annealing'], 'mantid': ['BFGS', 'Conjugate gradient (Fletcher-Reeves imp.)', 'Conjugate gradient (Polak-Ribiere imp.)', 'Damped GaussNewton', 'Levenberg-Marquardt', 'Levenberg-MarquardtMD', 'Simplex', 'SteepestDescent', 'Trust Region', 'FABADA'], 'matlab': ['Nelder-Mead Simplex'], 'matlab_curve': ['Levenberg-Marquardt', 'Trust-Region'], 'matlab_opt': ['levenberg-marquardt', 'trust-region-reflective'], 'matlab_stats': ['Levenberg-Marquardt'], 'minuit': ['migrad', 'simplex'], 'nlopt': ['LN_BOBYQA', 'LN_NEWUOA', 'LN_NEWUOA_BOUND', 'LN_PRAXIS', 'LD_SLSQP', 'LD_VAR2', 'LD_VAR1', 'AUGLAG', 'AUGLAG_EQ', 'LN_NELDERMEAD', 'LN_SBPLX', 'LN_COBYLA', 'LD_CCSAQ', 'LD_MMA', 'LD_TNEWTON_PRECOND_RESTART', 'LD_TNEWTON_PRECOND', 'LD_TNEWTON_RESTART', 'LD_TNEWTON', 'LD_LBFGS', 'GN_DIRECT', 'GN_DIRECT_L', 'GN_DIRECT_L_RAND', 'GNL_DIRECT_NOSCAL', 'GN_DIRECT_L_NOSCAL', 
'GN_DIRECT_L_RAND_NOSCAL', 'GN_ORIG_DIRECT', 'GN_ORIG_DIRECT_L', 'GN_CRS2_LM', 'G_MLSL_LDS', 'G_MLSL', 'GD_STOGO', 'GD_STOGO_RAND', 'GN_AGS', 'GN_ISRES'], 'ralfit': ['gn', 'gn_reg', 'hybrid', 'hybrid_reg', 'newton', 'newton_reg', 'newton-tensor', 'newton-tensor_reg'], 'scipy': ['Nelder-Mead', 'Powell', 'CG', 'BFGS', 'Newton-CG', 'L-BFGS-B', 'TNC', 'SLSQP', 'COBYLA', 'trust-ncg', 'trust-exact', 'trust-krylov', 'trust-constr', 'dogleg'], 'scipy_go': ['differential_evolution', 'shgo', 'dual_annealing'], 'scipy_ls': ['lm-scipy', 'trf', 'dogbox'], 'theseus': ['Levenberg_Marquardt', 'Gauss-Newton']}*[](#fitbenchmarking.utils.options.Options.VALID_MINIMIZERS)
VALID_OUTPUT *= {'append': [True, False], 'colour_map': ['magma', 'inferno', 'plasma', 'viridis', 'cividis', 'twilight', 'twilight_shifted', 'turbo', 'Blues', 'BrBG', 'BuGn', 'BuPu', 'CMRmap', 'GnBu', 'Greens', 'Greys', 'OrRd', 'Oranges', 'PRGn', 'PiYG', 'PuBu', 'PuBuGn', 'PuOr', 'PuRd', 'Purples', 'RdBu', 'RdGy', 'RdPu', 'RdYlBu', 'RdYlGn', 'Reds', 'Spectral', 'Wistia', 'YlGn', 'YlGnBu', 'YlOrBr', 'YlOrRd', 'afmhot', 'autumn', 'binary', 'bone', 'brg', 'bwr', 'cool', 'coolwarm', 'copper', 'cubehelix', 'flag', 'gist_earth', 'gist_gray', 'gist_heat', 'gist_ncar', 'gist_rainbow', 'gist_stern', 'gist_yarg', 'gnuplot', 'gnuplot2', 'gray', 'hot', 'hsv', 'jet', 'nipy_spectral', 'ocean', 'pink', 'prism', 'rainbow', 'seismic', 'spring', 'summer', 'terrain', 'winter', 'Accent', 'Dark2', 'Paired', 'Pastel1', 'Pastel2', 'Set1', 'Set2', 'Set3', 'tab10', 'tab20', 'tab20b', 'tab20c', 'magma_r', 'inferno_r', 'plasma_r', 'viridis_r', 'cividis_r', 'twilight_r', 'twilight_shifted_r', 'turbo_r', 'Blues_r', 'BrBG_r', 'BuGn_r', 'BuPu_r', 'CMRmap_r', 'GnBu_r', 'Greens_r', 'Greys_r', 'OrRd_r', 'Oranges_r', 'PRGn_r', 'PiYG_r', 'PuBu_r', 'PuBuGn_r', 'PuOr_r', 'PuRd_r', 'Purples_r', 'RdBu_r', 'RdGy_r', 'RdPu_r', 'RdYlBu_r', 'RdYlGn_r', 'Reds_r', 'Spectral_r', 'Wistia_r', 'YlGn_r', 'YlGnBu_r', 'YlOrBr_r', 'YlOrRd_r', 'afmhot_r', 'autumn_r', 'binary_r', 'bone_r', 'brg_r', 'bwr_r', 'cool_r', 'coolwarm_r', 'copper_r', 'cubehelix_r', 'flag_r', 'gist_earth_r', 'gist_gray_r', 'gist_heat_r', 'gist_ncar_r', 'gist_rainbow_r', 'gist_stern_r', 'gist_yarg_r', 'gnuplot_r', 'gnuplot2_r', 'gray_r', 'hot_r', 'hsv_r', 'jet_r', 'nipy_spectral_r', 'ocean_r', 'pink_r', 'prism_r', 'rainbow_r', 'seismic_r', 'spring_r', 'summer_r', 'terrain_r', 'winter_r', 'Accent_r', 'Dark2_r', 'Paired_r', 'Pastel1_r', 'Pastel2_r', 'Set1_r', 'Set2_r', 'Set3_r', 'tab10_r', 'tab20_r', 'tab20b_r', 'tab20c_r'], 'comparison_mode': ['abs', 'rel', 'both'], 'external_output': ['debug', 'display', 'log_only'], 'level': ['NOTSET', 'DEBUG', 
'INFO', 'WARNING', 'ERROR', 'CRITICAL'], 'make_plots': [True, False], 'pbar': [True, False], 'results_browser': [True, False], 'table_type': ['acc', 'runtime', 'compare', 'local_min', 'emissions']}*[](#fitbenchmarking.utils.options.Options.VALID_OUTPUT)
VALID_SECTIONS *= ['MINIMIZERS', 'FITTING', 'JACOBIAN', 'HESSIAN', 'OUTPUT', 'LOGGING']*[](#fitbenchmarking.utils.options.Options.VALID_SECTIONS)
*property* minimizers[](#fitbenchmarking.utils.options.Options.minimizers)
Returns the minimizers in a software package
read_value(*func*, *option*, *additional_options*)[](#fitbenchmarking.utils.options.Options.read_value)
Helper function which loads in the value
Parameters
* **func** (*callable*) – configparser function
* **option** (*str*) – option to be read for file
* **additional_options** (*dict*) – A dictionary of options input by the user into the command line.
Returns value of the option
Return type list/str/int/bool
*property* results_dir*: str*[](#fitbenchmarking.utils.options.Options.results_dir)
Returns the directory to store the results in.
write(*file_name*)[](#fitbenchmarking.utils.options.Options.write)
Write the contents of the options object to a new options file.
Parameters
**file_name** (*str*) – The path to the new options file
write_to_stream(*file_object*)[](#fitbenchmarking.utils.options.Options.write_to_stream)
Write the contents of the options object to a file object.
fitbenchmarking.utils.options.find_options_file(*options_file: str*, *additional_options: dict*) → [fitbenchmarking.utils.options.Options](index.html#fitbenchmarking.utils.options.Options)[](#fitbenchmarking.utils.options.find_options_file)
Attempts to find the options file and creates an Options object for it.
Wildcards are accepted in the parameters of this function.
Parameters
* **options_file** (*str*) – The path or glob pattern for an options file.
* **additional_options** (*dict*) – A dictionary of options input by the user into the command line.
Returns An Options object.
Return type
[fitbenchmarking.utils.options.Options](index.html#fitbenchmarking.utils.options.Options)
fitbenchmarking.utils.options.read_list(*s*)[](#fitbenchmarking.utils.options.read_list)
Utility function to allow lists to be read by the config parser
Parameters
**s** (*string*) – string to convert to a list
Returns list of items
Return type list of str
fitbenchmarking.utils.options.read_range(*s*)[](#fitbenchmarking.utils.options.read_range)
Utility function to allow ranges to be read by the config parser
Parameters
**s** (*string*) – string to convert to a list
Returns two-element list [lower_lim, upper_lim]
Return type list
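Both utilities plug into configparser as converters. A minimal sketch of what such converters might look like (the exact parsing rules are assumptions based on the descriptions above):

```python
def read_list(s):
    """Split a newline- or comma-separated option string into a list of items."""
    return [item.strip() for item in s.replace("\n", ",").split(",")
            if item.strip()]

def read_range(s):
    """Parse a string such as "[0.2, 0.8]" into a two-element
    [lower_lim, upper_lim] list of floats."""
    lower, upper = (float(v) for v in s.strip("[] ").split(","))
    if lower > upper:
        raise ValueError(f"Invalid range: {s}")
    return [lower, upper]
```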
######### fitbenchmarking.utils.output_grabber module[](#module-fitbenchmarking.utils.output_grabber)
This file will capture output that would be printed to the terminal
*class* fitbenchmarking.utils.output_grabber.OutputGrabber(*options*)[](#fitbenchmarking.utils.output_grabber.OutputGrabber)
Bases: `object`
Class used to grab outputs for stdout and stderr
*class* fitbenchmarking.utils.output_grabber.StreamGrabber(*stream*)[](#fitbenchmarking.utils.output_grabber.StreamGrabber)
Bases: `object`
Class used to grab an output stream.
escape_char *= '\x08'*[](#fitbenchmarking.utils.output_grabber.StreamGrabber.escape_char)
readOutput()[](#fitbenchmarking.utils.output_grabber.StreamGrabber.readOutput)
Read the stream data (one byte at a time)
and save the text in capturedtext.
start()[](#fitbenchmarking.utils.output_grabber.StreamGrabber.start)
Start capturing the stream data.
stop()[](#fitbenchmarking.utils.output_grabber.StreamGrabber.stop)
Stop capturing the stream data and save the text in capturedtext.
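The start/stop cycle can be sketched as follows. The real class reads the stream data one byte at a time and uses an escape character to know when to stop (see above); this simplified, Python-level sketch only illustrates the interface:

```python
import io
import sys

class StreamGrabber:
    """Minimal sketch of grabbing an output stream. This version only
    intercepts Python-level writes by swapping sys.stdout/sys.stderr;
    the mechanism is an assumption, not fitbenchmarking's actual code."""
    def __init__(self, stream_name):
        self.stream_name = stream_name  # "stdout" or "stderr"
        self.capturedtext = ""

    def start(self):
        # Divert the named stream into an in-memory buffer.
        self._original = getattr(sys, self.stream_name)
        self._buffer = io.StringIO()
        setattr(sys, self.stream_name, self._buffer)

    def stop(self):
        # Restore the stream and save the captured text.
        setattr(sys, self.stream_name, self._original)
        self.capturedtext = self._buffer.getvalue()
```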
######### fitbenchmarking.utils.timer module[](#module-fitbenchmarking.utils.timer)
Implements the TimerWithMaxTime class used for checking the
‘max_runtime’ is not exceeded.
*class* fitbenchmarking.utils.timer.TimerWithMaxTime(*max_runtime: float*)[](#fitbenchmarking.utils.timer.TimerWithMaxTime)
Bases: `object`
A timer class used for checking if the ‘max_runtime’ is exceeded when executing the fits.
check_elapsed_time() → None[](#fitbenchmarking.utils.timer.TimerWithMaxTime.check_elapsed_time)
Checks whether the max runtime has been exceeded. Raises a MaxRuntimeError exception if it has been exceeded. Otherwise,
it carries on.
reset() → None[](#fitbenchmarking.utils.timer.TimerWithMaxTime.reset)
Resets the timer so it can be used for timing a different fit combination.
start() → None[](#fitbenchmarking.utils.timer.TimerWithMaxTime.start)
Starts the timer by recording the current time.
stop() → None[](#fitbenchmarking.utils.timer.TimerWithMaxTime.stop)
Stops the timer if it is timing something. The elapsed time since starting the timer is added onto the total elapsed time.
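The interface above can be sketched directly: accumulate elapsed time across start/stop calls and raise a MaxRuntimeError (redefined here so the sketch is self-contained) once the budget is exceeded. The internals are assumptions; only the method names come from the documentation above.

```python
import time

class MaxRuntimeError(Exception):
    """Stand-in for fitbenchmarking.utils.exceptions.MaxRuntimeError."""

class TimerWithMaxTime:
    """Sketch of a timer that tracks total fit time against max_runtime."""
    def __init__(self, max_runtime):
        self.max_runtime = max_runtime
        self.total_elapsed_time = 0.0
        self.start_time = None

    def start(self):
        """Start the timer by recording the current time."""
        self.start_time = time.time()

    def stop(self):
        """Add the elapsed time since start onto the running total."""
        if self.start_time is not None:
            self.total_elapsed_time += time.time() - self.start_time
            self.start_time = None

    def reset(self):
        """Reset so the timer can time a different fit combination."""
        self.total_elapsed_time = 0.0
        self.start_time = None

    def check_elapsed_time(self):
        """Raise MaxRuntimeError if the max runtime has been exceeded."""
        running = time.time() - self.start_time if self.start_time else 0.0
        if self.total_elapsed_time + running > self.max_runtime:
            raise MaxRuntimeError("'max_runtime' exceeded")
```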
######### fitbenchmarking.utils.write_files module[](#module-fitbenchmarking.utils.write_files)
Utility decorator function for saving and writing files.
fitbenchmarking.utils.write_files.write_file(*function*)[](#fitbenchmarking.utils.write_files.write_file)
A decorator function used for catching exceptions which can happen specifically when writing files. It will log a useful error message if the problem is identified.
Parameters
**function** (*A callable function.*) – A callable function which writes files.
Returns A callable wrapped function which writes files.
Return type A callable function.
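A decorator with the behaviour described might look like this (the caught exception types and the post-logging return value are assumptions):

```python
import functools
import logging

LOGGER = logging.getLogger("fitbenchmarking")

def write_file(function):
    """Sketch of a decorator that catches file-writing failures and logs
    a useful message instead of letting the run crash."""
    @functools.wraps(function)
    def wrapped(*args, **kwargs):
        try:
            return function(*args, **kwargs)
        except OSError as ex:
            LOGGER.error("Failed to write file: %s", ex)
            return None  # assumption: swallow the error after logging
    return wrapped

@write_file
def save_report(path, text):
    """Hypothetical file-writing function to illustrate usage."""
    with open(path, "w") as f:
        f.write(text)
```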
######## Module contents[](#module-fitbenchmarking.utils)
###### Module contents[](#module-fitbenchmarking)
Package ‘sense’
October 14, 2022
Type Package
Title Automatic Stacked Ensemble for Regression Tasks
Version 1.0.0
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Stacked ensemble for regression tasks based on 'mlr3' framework with a pipeline for pre-processing numeric and factor features and hyper-parameter tuning using grid or random search.
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 7.1.1
Depends R (>= 4.1)
Imports mlr3 (>= 0.12.0), mlr3learners (>= 0.5.0), mlr3filters (>=
0.4.2), mlr3pipelines (>= 0.3.5-1), mlr3viz (>= 0.5.5), paradox
(>= 0.7.1), mlr3tuning (>= 0.8.0), bbotk (>= 0.3.2), tictoc (>=
1.0.1), forcats (>= 0.5.1), readr (>= 2.0.1), lubridate (>=
1.7.10), purrr (>= 0.3.4), Metrics (>= 0.1.4), data.table (>=
1.14.0), visNetwork (>= 2.0.9)
Suggests xgboost (>= 1.4.1.1), rpart (>= 4.1-15), ranger (>= 0.13.1),
kknn (>= 1.3.1), glmnet (>= 4.1-2), e1071 (>= 1.7-8), mlr3misc
(>= 0.9.3), FSelectorRcpp (>= 0.3.8), care (>= 1.1.10), praznik
(>= 8.0.0), lme4 (>= 1.1-27.1), nloptr (>= 1.2.2.2)
URL https://mlr3.mlr-org.com/
NeedsCompilation no
Repository CRAN
Date/Publication 2021-09-06 08:00:02 UTC
R topics documented:
benchmark
sense
benchmark benchmark data set
Description
A data frame for a regression task generated with mlbench's friedman1.
Usage
benchmark
Format
A data frame with 11 columns and 150 rows.
Source
mlbench, friedman1
sense sense
Description
Stacked ensemble for regression tasks based on the ’mlr3’ framework.
Usage
sense(
df,
target_feat,
benchmarking = "all",
super = "avg",
algos = c("glmnet", "ranger", "xgboost", "rpart", "kknn", "svm"),
sampling_rate = 1,
metric = "mae",
collapse_char_to = 10,
num_preproc = "scale",
fct_preproc = "one-hot",
impute_num = "sample",
missing_fusion = FALSE,
inner = "holdout",
outer = "holdout",
folds = 3,
repeats = 3,
ratio = 0.5,
selected_filter = "information_gain",
selected_n_feats = NULL,
tuning = "random_search",
budget = 30,
resolution = 5,
n_evals = 30,
minute_time = 10,
patience = 0.3,
min_improve = 0.01,
java_mem = 64,
decimals = 2,
seed = 42
)
Arguments
df A data frame with features and target.
target_feat String. Name of the numeric feature for the regression task.
benchmarking Positive integer. Number of base learners to stack. Default: "all".
super String. Super learner of choice among the available learners. Default: "avg".
algos String vector. Available learners are: "glmnet", "ranger", "xgboost", "rpart",
"kknn", "svm".
sampling_rate Positive numeric. Sampling rate before applying the stacked ensemble. Default:
1.
metric String. Evaluation metric for outer and inner cross-validation. Default: "mae".
collapse_char_to
Positive integer. Conversion of characters to factors with predefined maximum
number of levels. Default: 10.
num_preproc String. Options for scalar pre-processing: "scale" or "range". Default: "scale".
fct_preproc String. Options for factor pre-processing: "encodeimpact", "encodelmer", "one-
hot", "treatment", "poly", "sum", "helmert". Default: "one-hot".
impute_num String. Options for missing imputation in case of numeric: "sample" or "hist".
Default: "sample". For factor the default mode is Out-Of-Range.
missing_fusion String. Adding missing indicator features. Default: "FALSE".
inner String. Cross-validation inner cycle: "holdout", "cv", "repeated_cv", "subsam-
pling". Default: "holdout".
outer String. Cross-validation outer cycle: "holdout", "cv", "repeated_cv", "subsam-
pling". Default: "holdout".
folds Positive integer. Number of repetitions used in "cv" and "repeated_cv". Default:
3.
repeats Positive integer. Number of repetitions used in "subsampling" and "repeated_cv".
Default: 3.
ratio Positive numeric. Percentage value for "holdout" and "subsampling". Default:
0.5.
selected_filter
String. Filters available for regression tasks: "carscore", "cmim", "correlation", "find_correlation", "information_gain", "relief", "variance". Default: "information_gain".
selected_n_feats
Positive integer. Number of features to select through the chosen filter. Default:
NULL.
tuning String. Available options are "random_search" and "grid_search". Default:
"random_search".
budget Positive integer. Maximum number of trials during random search. Default: 30.
resolution Positive integer. Grid resolution for each hyper-parameter. Default: 5.
n_evals Positive integer. Number of evaluation for termination. Default: 30.
minute_time Positive integer. Maximum run time before termination. Default: 10.
patience Positive numeric. Percentage of stagnating evaluations before termination. Default: 0.3.
min_improve Positive numeric. Minimum error improvement required before termination.
Default: 0.01.
java_mem Positive integer. Memory allocated to Java. Default: 64.
decimals Positive integer. Decimal format of prediction. Default: 2.
seed Positive integer. Default: 42.
Value
This function returns a list including:
• benchmark_error: comparison between the base learners
• resampled_model: mlr3 standard description of the analytic pipeline.
• plot: mlr3 standard graph of the analytic pipeline.
• selected_n_feats: selected features and score according to the filtering method used.
• model_error: error measure for outer cycle of cross-validation.
• testing_frame: data set used for calculating the test metrics.
• test_metrics: metrics reported are mse, rmse, mae, mape, mdae, rae, rse, rrse, smape.
• model_predict: prediction function to apply to new data on the same scheme.
• time_log: computation time.
Author(s)
<NAME> <<EMAIL>>
See Also
Useful links:
• https://mlr3.mlr-org.com/
Examples
## Not run:
sense(benchmark, "y", algos = c("glmnet", "rpart"))
## End(Not run)
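The model_predict element of the returned list can be applied to new data on the same feature scheme; a hypothetical usage sketch extending the example above (fit, new_data and the use of test_metrics are illustrative, not from the package documentation):

```r
## Not run:
fit <- sense(benchmark, "y", algos = c("glmnet", "rpart"))
fit$test_metrics                       # mse, rmse, mae, mape, mdae, rae, rse, rrse, smape
new_preds <- fit$model_predict(new_data)  # new_data: same features as benchmark
## End(Not run)
```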
neonOS | cran | R | Package ‘neonOS’
October 13, 2022
Title Basic Data Wrangling for NEON Observational Data
Version 1.0.0
Date 2022-09-21
Description NEON observational data are provided via the NEON Data Portal <https://www.neonscience.org>
and NEON API, and can be downloaded and reformatted by the 'neonUtilities' package. NEON
observational data (human-observed measurements, and analyses derived from human-collected
samples, such as tree diameters and algal chemistry) are published in a format consisting of one or
more tabular data files. This package provides tools for performing common operations on NEON
observational data, including checking for duplicates and joining tables.
Depends R (>= 3.6)
Imports utils, data.table, httr, curl, jsonlite
Suggests testthat, neonUtilities
License AGPL-3
URL https://github.com/NEONScience/NEON-OS-data-processing
BugReports https://github.com/NEONScience/NEON-OS-data-processing/issues
Encoding UTF-8
LazyData true
RoxygenNote 7.2.1
NeedsCompilation no
Author <NAME> [aut, cre, ctb] (<https://orcid.org/0000-0001-8753-6593>),
<NAME> [aut, ctb] (<https://orcid.org/0000-0001-5923-0917>),
<NAME> [aut, ctb] (<https://orcid.org/0000-0002-1144-418X>),
NEON (National Ecological Observatory Network) [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-09-29 08:50:05 UTC
R topics documented:
brd_countdata
brd_perpoint
cfc_lignin_test_dups
cfc_lignin_variables
getAPI
getSampleTree
getTaxonList
idSampleParents
joinTableNEON
removeDups
brd_countdata Count data table from Breeding landbird point counts
(DP1.10003.001)
Description
An example set of NEON observational data. Contains the individual bird observations from Niwot
Ridge (NIWO) in 2019, as published in RELEASE-2021.
Usage
brd_countdata
Format
A data frame with 472 rows and 26 columns
uid Unique ID within NEON database; an identifier for the record
namedLocation Name of the measurement location in the NEON database
domainID Unique identifier of the NEON domain
siteID NEON site code
plotID Plot identifier (NEON site code_XXX)
plotType NEON plot type in which sampling occurred: tower, distributed or gradient
pointID Identifier for a point location
startDate The start date-time or interval during which an event occurred
eventID An identifier for the set of information associated with the event, which includes information about the place and time of the event
pointCountMinute The minute of sampling within the point count period
targetTaxaPresent Indicator of whether the sample contained individuals of the target taxa
taxonID Species code, based on one or more sources
scientificName Scientific name, associated with the taxonID. This is the name of the lowest level
taxonomic rank that can be determined
taxonRank The lowest level taxonomic rank that can be determined for the individual or specimen
vernacularName A common or vernacular name
family The scientific name of the family in which the taxon is classified
nativeStatusCode The process by which the taxon became established in the location
observerDistance Radial distance between the observer and the individual(s) being observed
detectionMethod How the individual(s) was (were) first detected by the observer
visualConfirmation Whether the individual(s) was (were) seen after the initial detection
sexOrAge Sex of individual if detectable, age of individual if individual can not be sexed
clusterSize Number of individuals in a cluster (a group of individuals of the same species)
clusterCode Alphabetic code (A-Z) linked to clusters (groups of individuals of the same species)
spanning multiple records
identifiedBy An identifier for the technician who identified the specimen
publicationDate Date of data publication on the NEON data portal
release Identifier for data release
Source
https://data.neonscience.org/api/v0/products/DP1.10003.001
brd_perpoint Per-point data table from Breeding landbird point counts
(DP1.10003.001)
Description
An example set of NEON observational data. Contains the point metadata associated with bird
observations from Niwot Ridge (NIWO) in 2019, as published in RELEASE-2021.
Usage
brd_perpoint
Format
A data frame with 54 rows and 31 columns
uid Unique ID within NEON database; an identifier for the record
namedLocation Name of the measurement location in the NEON database
domainID Unique identifier of the NEON domain
siteID NEON site code
plotID Plot identifier (NEON site code_XXX)
plotType NEON plot type in which sampling occurred: tower, distributed or gradient
pointID Identifier for a point location
nlcdClass National Land Cover Database Vegetation Type Name
decimalLatitude The geographic latitude (in decimal degrees, WGS84) of the geographic center
of the reference area
decimalLongitude The geographic longitude (in decimal degrees, WGS84) of the geographic center of the reference area
geodeticDatum Model used to measure horizontal position on the earth
coordinateUncertainty The horizontal distance (in meters) from the given decimalLatitude and
decimalLongitude describing the smallest circle containing the whole of the Location. Zero is
not a valid value for this term
elevation Elevation (in meters) above sea level
elevationUncertainty Uncertainty in elevation values (in meters)
startDate The start date-time or interval during which an event occurred
samplingImpracticalRemarks Technician notes; free text comments accompanying the sampling
impractical record
samplingImpractical Samples and/or measurements were not collected due to the indicated circumstance
eventID An identifier for the set of information associated with the event, which includes information about the place and time of the event
startCloudCoverPercentage Observer estimate of percent cloud cover at start of sampling
endCloudCoverPercentage Observer estimate of percent cloud cover at end of sampling
startRH Relative humidity as measured by handheld weather meter at the start of sampling
endRH Relative humidity as measured by handheld weather meter at the end of sampling
observedHabitat Observer assessment of dominant habitat at the sampling point at sampling time
observedAirTemp The air temperature measured with a handheld weather meter
kmPerHourObservedWindSpeed The average wind speed measured with a handheld weather
meter, in kilometers per hour
laboratoryName Name of the laboratory or facility that is processing the sample
samplingProtocolVersion The NEON document number and version where detailed information
regarding the sampling method used is available; format NEON.DOC.######vX
remarks Technician notes; free text comments accompanying the record
measuredBy An identifier for the technician who measured or collected the data
publicationDate Date of data publication on the NEON data portal
release Identifier for data release
Source
https://data.neonscience.org/api/v0/products/DP1.10003.001
cfc_lignin_test_dups Lignin data table from Plant foliar traits (DP1.10026.001)
Description
An example set of NEON observational data, containing duplicate records. NOT APPROPRIATE
FOR ANALYTICAL USE. Contains foliar lignin data from Moab and Toolik in 2017, with artificial
duplicates introduced to demonstrate the removeDups() function.
Usage
cfc_lignin_test_dups
Format
A data frame with 26 rows and 25 columns
The variable names, descriptions, and units can be found in the cfc_lignin_variables table
Source
https://data.neonscience.org/api/v0/products/DP1.10026.001
cfc_lignin_variables Variables file, subset to lignin table, from Plant foliar traits
(DP1.10026.001)
Description
The foliar lignin table’s variables file from NEON observational data. Example to illustrate use of
removeDups().
Usage
cfc_lignin_variables
Format
A data frame with 26 rows and 8 columns
table The table name of the NEON data table
fieldName Field name within the table; corresponds to column names in cfc_lignin_test_dups
description Description for each field name
dataType Type of data for each field name
units Units for each field name
downloadPkg Is the field published in the basic or expanded data package?
pubFormat Publication formatting, e.g. date format or rounding
primaryKey Fields indicated by Y, when combined, should identify a unique record. Used by
removeDups() to identify duplicate records.
Source
https://data.neonscience.org/api/v0/products/DP1.10026.001
getAPI Get the data from API
Description
Accesses the API with options to use the user-specific API token generated within neon.datascience
user accounts.
Usage
getAPI(apiURL, token = NA_character_)
Arguments
apiURL The API endpoint URL
token User specific API token (generated within neon.datascience user accounts). Optional.
Author(s)
<NAME> <<EMAIL>>
References
License: GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007
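A hypothetical call to getAPI(), using the product endpoint cited elsewhere in this manual (the token argument is optional and omitted here):

```r
## Not run:
req <- getAPI(apiURL = "https://data.neonscience.org/api/v0/products/DP1.10003.001")
## End(Not run)
```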
getSampleTree Find all relatives (parents, children, and outward) of a given sample.
Description
Find all samples in the sample tree of a given sample.
Usage
getSampleTree(
sampleNode,
idType = "tag",
sampleClass = NA_character_,
token = NA_character_
)
Arguments
sampleNode A NEON sample identifier. [character]
idType Is sampleNode a tag, barcode, or guid? Defaults to tag. [character]
sampleClass The NEON sampleClass of sampleNode. Required if sampleNode is a tag and
there are multiple valid classes. [character]
token User specific API token (generated within neon.datascience user accounts). Optional. [character]
Details
Related NEON samples can be connected to each other in a parent-child hierarchy. Parents can
have one or many children, and children can have one or many parents. Sample hierarchies can be
simple or complex - for example, particulate mass samples (dust filters) have no parents or children,
whereas water chemistry samples can be subsampled for dissolved gas, isotope, and microbial measurements. This function finds all ancestors and descendants of the focal sample (the sampleNode),
and all of their relatives, and so on recursively, to provide the entire hierarchy. See documentation
for each data product for more specific information.
Value
A table of sample identifiers, their classes, and their parent samples.
Author(s)
<NAME> <<EMAIL>>
References
License: GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007
Examples
# Find related samples for a soil nitrogen transformation sample
## Not run:
soil_samp <- getSampleTree(sampleNode="B00000123538", idType="barcode")
## End(Not run)
getTaxonList Get NEON taxon table
Description
This is a function to retrieve a taxon table from the NEON data portal for a given taxon type and
provide it in a tractable format.
Usage
getTaxonList(
taxonType = NA,
recordReturnLimit = NA,
stream = "true",
verbose = "false",
token = NA
)
Arguments
taxonType The taxonTypeCode to access. Must be one of ALGAE, BEETLE, BIRD, FISH,
HERPETOLOGY, MACROINVERTEBRATE, MOSQUITO, MOSQUITO_PATHOGENS,
SMALL_MAMMAL, PLANT, TICK [character]
recordReturnLimit
The number of items to limit the result set to. If NA (the default), will return either the first 100 records, or all records in the table, depending on the value of 'stream'. Use stream="true" to get all records. [integer]
stream True or false, obtain the results as a stream. Utilize for large requests. Note this
is lowercase true and false as character strings, not logical. [character]
verbose True or false, include all possible taxonomic parameters. Defaults to false, only
essential parameters. Note this is lowercase true and false as character strings,
not logical. [character]
token User specific API token (generated within neon.datascience.org user account)
[character]
Value
Data frame with selected NEON taxonomic data
Author(s)
<NAME> <<EMAIL>>
References
License: GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007
Examples
# taxonTypeCode must be one of
# ALGAE, BEETLE, BIRD, FISH,
# HERPETOLOGY, MACROINVERTEBRATE,
# MOSQUITO, MOSQUITO_PATHOGENS,
# SMALL_MAMMAL, PLANT, TICK
#################################
# get the first 4 fish taxa
taxa_table <- getTaxonList('FISH', recordReturnLimit = 4)
idSampleParents Reformat sample data to indicate the parents of a focal sample.
Description
Reformat table of sample identifiers to include parent sample identifiers. Used in getSampleTree().
Usage
idSampleParents(sampleUuid, token = NA_character_)
Arguments
sampleUuid A NEON sample UUID. [character]
token User specific API token (generated within neon.datascience user accounts). Optional. [character]
Value
A table of sample identifiers, their classes, and their parent samples.
Author(s)
<NAME> <<EMAIL>>
References
License: GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007
joinTableNEON Join two data tables from NEON Observational System
Description
NEON observational data are published in multiple tables, usually corresponding to activities performed
in different times or places. This function uses the fields identified in NEON Quick Start
Guides to join tables containing related data.
Usage
joinTableNEON(
table1,
table2,
name1 = NA_character_,
name2 = NA_character_,
location.fields = NA
)
Arguments
table1 A data frame containing data from a NEON observational data table [data frame]
table2 A second data frame containing data from a NEON observational data table
[data frame]
name1 The name of the first table. Defaults to the object name of table1. [character]
name2 The name of the second table. Defaults to the object name of table2. [character]
location.fields
Should standard location fields be included in the list of linking variables, to
avoid duplicating those fields? For most data products, these fields are redundant,
but there are a few exceptions. This parameter defaults to NA, in which case
the Quick Start Guide is consulted. If the QSG indicates location fields shouldn't
be included, the value is updated to FALSE, otherwise to TRUE. Enter TRUE
or FALSE to override QSG defaults. [logical]
Details
The "Table joining" section of NEON Quick Start Guides (QSGs) provides the field names of the
linking variables between related NEON data tables. This function uses the QSG information to
join tables. All tables are joined using a full join. If you need to remove duplicates as well as
joining, run removeDups() before running joinTableNEON(). Tables that don’t appear together in
QSG instructions can’t be joined here. Some tables may not be straightforwardly joinable, such
as tables of analytical standards run as unknowns. Theoretically, these data could be joined to
analytical results by a combination of laboratory and date, but in general, a table join is not the
best way to analyze this type of data. If a pair of tables is omitted from QSG instructions that you
expected to find, contact NEON.
Value
A single data frame created by joining table1 and table2 on the fields identified in the quick start
guide.
Author(s)
<NAME> <<EMAIL>>
References
License: GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007
Examples
# Join metadata from the point level to individual observations, for NEON bird data
all_bird <- joinTableNEON(table1=brd_perpoint, table2=brd_countdata)
removeDups Remove duplicates from a data table based on a provided primary key;
flag duplicates that can’t be removed.
Description
NEON observational data may contain duplicates; this function removes exact duplicates, attempts
to resolve non-exact duplicates, and flags duplicates that can’t be resolved.
Usage
removeDups(data, variables, table = NA_character_)
Arguments
data A data frame containing data from a NEON observational data table [data frame]
variables The NEON variables file containing metadata about the data table in question
[data frame]
table The name of the table. Must match one of the table names in 'variables'. [character]
Details
Duplicates are identified based on exact matches in the values of the primary key. For records with
identical keys, these steps are followed, in order: (1) If records are identical except for NA or empty
string values, the non-empty values are kept. (2) If records are identical except for uid, remarks,
and/or personnel (xxxxBy) fields, unique values are concatenated within each field, and the merged
version is kept. (3) For records that are identical following steps 1 and 2, one record is kept and
flagged with duplicateRecordQF=1. (4) Records that can’t be resolved by steps 1-3 are flagged
with duplicateRecordQF=2. Note that in a set of three or more duplicates, some records may be
resolvable and some may not; if two or more records are left after steps 1-3, all remaining records
are flagged with duplicateRecordQF=2.
Value
A modified data frame with resolvable duplicates removed and a flag field added and populated.
Author(s)
<NAME> <<EMAIL>>
References
License: GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007
Examples
# Resolve and flag duplicates in a test dataset of foliar lignin
lig_dup <- removeDups(data=cfc_lignin_test_dups,
variables=cfc_lignin_variables,
table="cfc_lignin")
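The duplicateRecordQF flag added by removeDups() separates resolved (1) from unresolved (2) duplicates, per the Details section above; a quick, illustrative way to review the outcome on the bundled test data:

```r
## Not run:
lig_dup <- removeDups(data = cfc_lignin_test_dups,
                      variables = cfc_lignin_variables,
                      table = "cfc_lignin")
table(lig_dup$duplicateRecordQF)  # count of records per flag value
## End(Not run)
```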
github.com/DATA-DOG/godog | go | Go | README
[¶](#section-readme)
---
[](https://travis-ci.org/DATA-DOG/godog)
[](https://godoc.org/github.com/DATA-DOG/godog)
[](https://codecov.io/github/DATA-DOG/godog)
### Godog

**The API is likely to change a few times before we reach 1.0.0**
Please read all the README, you may find it very useful. And do not forget to peek into the
[CHANGELOG](https://github.com/DATA-DOG/godog/blob/master/CHANGELOG.md)
from time to time.
Package godog is the official Cucumber BDD framework for Golang; it merges specification and test documentation into one cohesive whole. The author is a member of the [cucumber team](https://github.com/cucumber).
The project is inspired by [behat](http://docs.behat.org/ "Behavior driven development framework for PHP") and [cucumber](https://cucumber.io/ "Behavior driven development framework") and is based on cucumber [gherkin3 parser](https://github.com/cucumber/gherkin-go "Gherkin3 parser for GO").
**Godog** does not intervene with the standard **go test** command behavior. You can leverage both frameworks to functionally test your application while maintaining all test related source code in **_test.go**
files.
**Godog** acts similarly to the **go test** command: it uses the go compiler and linker tool to produce a test executable. Godog contexts need to be exported the same way as **Test** functions for go tests. Note that the **godog** command tool uses the `go` executable to determine the compiler and linker.
**Godog** ships gherkin parser dependency as a subpackage. This will ensure that it is always compatible with the installed version of godog.
So in general there are no vendor dependencies needed for installation.
The following about section was taken from
[cucumber](https://cucumber.io/) homepage.
#### About
###### A single source of truth
Cucumber merges specification and test documentation into one cohesive whole.
###### Living documentation
Because they're automatically tested by Cucumber, your specifications are always bang up-to-date.
###### Focus on the customer
Business and IT don't always understand each other. Cucumber's executable specifications encourage closer collaboration, helping teams keep the business goal in mind at all times.
###### Less rework
When automated testing is this much fun, teams can easily protect themselves from costly regressions.
#### Install
```
go get github.com/DATA-DOG/godog/cmd/godog
```
#### Example
The following example can be [found here](https://github.com/DATA-DOG/godog/blob/v0.7.13/examples/godogs).
##### Step 1
Given we create a new go package **$GOPATH/src/godogs**. From now on, this is our work directory `cd $GOPATH/src/godogs`.
Imagine we have a **godog cart** to serve godogs for lunch. First of all,
we describe our feature in plain text - `vim $GOPATH/src/godogs/features/godogs.feature`:
```
# file: $GOPATH/src/godogs/features/godogs.feature
Feature: eat godogs
In order to be happy
As a hungry gopher
I need to be able to eat godogs
Scenario: Eat 5 out of 12
Given there are 12 godogs
When I eat 5
Then there should be 7 remaining
```
**NOTE:** just like **go test**, godog respects package-level isolation. All your step definitions should be in your tested package root directory. In this case - `$GOPATH/src/godogs`
##### Step 2
If godog is installed in your GOPATH, we can run `godog` inside the
**$GOPATH/src/godogs** directory. You should see that the steps are undefined:

If we wish to vendor the godog dependency, we can do it as usual, using the tools you prefer:
```
git clone https://github.com/DATA-DOG/godog.git $GOPATH/src/godogs/vendor/github.com/DATA-DOG/godog
```
It gives you undefined step snippets to implement in your test context.
You may copy these snippets into your `godogs_test.go` file.
Our directory structure should now look like:

If we copy the snippets into our test file and run godog again, we should see that the step definition is now pending:

You may change **ErrPending** to **nil** and the scenario will pass successfully.
Since we need a working implementation, we may start by implementing only what is necessary.
##### Step 3
We only need a number of **godogs** for now. Let's keep it simple.
```
/* file: $GOPATH/src/godogs/godogs.go */
package main
// Godogs available to eat
var Godogs int
func main() { /* usual main func */ }
```
##### Step 4
Now let's implement our step definitions, which we can copy from the generated console output snippets, in order to test our feature requirements:
```
/* file: $GOPATH/src/godogs/godogs_test.go */
package main
import (
"fmt"
"github.com/DATA-DOG/godog"
)
func thereAreGodogs(available int) error {
Godogs = available
return nil
}
func iEat(num int) error {
if Godogs < num {
return fmt.Errorf("you cannot eat %d godogs, there are %d available", num, Godogs)
}
Godogs -= num
return nil
}
func thereShouldBeRemaining(remaining int) error {
if Godogs != remaining {
return fmt.Errorf("expected %d godogs to be remaining, but there is %d", remaining, Godogs)
}
return nil
}
func FeatureContext(s *godog.Suite) {
s.Step(`^there are (\d+) godogs$`, thereAreGodogs)
s.Step(`^I eat (\d+)$`, iEat)
s.Step(`^there should be (\d+) remaining$`, thereShouldBeRemaining)
s.BeforeScenario(func(interface{}) {
Godogs = 0 // clean the state before every scenario
})
}
```
Now when you run the `godog` again, you should see:

We have hooked to **BeforeScenario** event in order to reset application state before each scenario. You may hook into more events, like
**AfterStep** to print all state in case of an error. Or
**BeforeSuite** to prepare a database.
By now, you should have figured out how to use **godog**. Another piece of advice is to make steps orthogonal, small and simple to read for a user. Whether the user is a non-technical website user or an API developer, who may understand a little more technical context - the steps should target that user.
When steps are orthogonal and small, you can combine them just like you do with Unix tools. Look for ways to simplify or remove steps which can be composed from others.
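The step patterns registered in `FeatureContext` are plain regular expressions; godog matches each Gherkin step against them and converts the capture groups to the handler's argument types. A stdlib-only sketch of that matching idea (`matchStep` is a hypothetical helper for illustration, not godog's internal API):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// stepPattern mirrors the `^there are (\d+) godogs$` pattern registered above.
var stepPattern = regexp.MustCompile(`^there are (\d+) godogs$`)

// matchStep extracts the numeric capture group and converts it to int,
// roughly what godog does before calling thereAreGodogs(available int).
func matchStep(text string) (int, bool) {
	m := stepPattern.FindStringSubmatch(text)
	if m == nil {
		return 0, false
	}
	n, err := strconv.Atoi(m[1])
	if err != nil {
		return 0, false
	}
	return n, true
}

func main() {
	if n, ok := matchStep("there are 12 godogs"); ok {
		fmt.Println(n)
	}
}
```

Because matching is anchored (`^…$`) and each group is typed, a step either maps cleanly onto a handler or is reported as undefined.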
##### References and Tutorials
* [cucumber-html-reporter](https://github.com/gkushang/cucumber-html-reporter)
may be used in order to generate **html** reports together with
**cucumber** output formatter. See the [following docker image](https://github.com/myie/cucumber-html-reporter) for usage details.
* [how to use godog by semaphoreci](https://semaphoreci.com/community/tutorials/how-to-use-godog-for-behavior-driven-development-in-go)
* see [examples](https://github.com/DATA-DOG/godog/tree/master/examples)
* see extension [AssistDog](https://github.com/hellomd/assistdog), which may have useful **gherkin.DataTable** transformations or comparison methods for assertions.
##### Documentation
See [godoc](http://godoc.org/github.com/DATA-DOG/godog "Documentation on godoc") for general API details.
See **.travis.yml** for supported **go** versions.
See `godog -h` for general command options.
See implementation examples:
* [rest API server](https://github.com/DATA-DOG/godog/blob/v0.7.13/examples/api)
* [rest API with Database](https://github.com/DATA-DOG/godog/blob/v0.7.13/examples/db)
* [godogs](https://github.com/DATA-DOG/godog/blob/v0.7.13/examples/godogs)
#### FAQ
##### Running Godog with go test
You may integrate running **godog** in your **go test** command. You can run it using go [TestMain](https://golang.org/pkg/testing/#hdr-Main) func available since **go 1.4**. In this case it is not necessary to have
**godog** command installed. See the following examples.
The following example binds **godog** flags with specified prefix `godog`
in order to prevent flag collisions.
```
var opt = godog.Options{
Output: colors.Colored(os.Stdout),
Format: "progress", // can define default values
}
func init() {
godog.BindFlags("godog.", flag.CommandLine, &opt)
}
func TestMain(m *testing.M) {
flag.Parse()
opt.Paths = flag.Args()
status := godog.RunWithOptions("godogs", func(s *godog.Suite) {
FeatureContext(s)
}, opt)
if st := m.Run(); st > status {
status = st
}
os.Exit(status)
}
```
Then you may run tests by specifying flags in order to filter features.
```
go test -v --godog.random --godog.tags=wip
go test -v --godog.format=pretty --godog.random -race -coverprofile=coverage.txt -covermode=atomic
```
The following example does not bind godog flags, instead manually configuring needed options.
```
func TestMain(m *testing.M) {
status := godog.RunWithOptions("godog", func(s *godog.Suite) {
FeatureContext(s)
}, godog.Options{
Format: "progress",
Paths: []string{"features"},
Randomize: time.Now().UTC().UnixNano(), // randomize scenario execution order
})
if st := m.Run(); st > status {
status = st
}
os.Exit(status)
}
```
You can even go one step further and reuse **go test** flags, like
**verbose** mode in order to switch godog **format**. See the following example:
```
func TestMain(m *testing.M) {
format := "progress"
for _, arg := range os.Args[1:] {
if arg == "-test.v=true" { // go test transforms -v option
format = "pretty"
break
}
}
status := godog.RunWithOptions("godog", func(s *godog.Suite) {
godog.SuiteContext(s)
}, godog.Options{
Format: format,
Paths: []string{"features"},
})
if st := m.Run(); st > status {
status = st
}
os.Exit(status)
}
```
Now when running `go test -v` it will use **pretty** format.
##### Configure common options for godog CLI
There are no global options or configuration files. Alias your common or project based commands: `alias godog-wip="godog --format=progress --tags=@wip"`
##### Testing browser interactions
**godog** does not come with builtin packages to connect to the browser.
You may want to look at [selenium](http://www.seleniumhq.org/) and probably [phantomjs](http://phantomjs.org/). See also the following components:
1. [browsersteps](https://github.com/llonchj/browsersteps) - provides basic context steps to start selenium and navigate browser content.
2. You may wish to have [goquery](https://github.com/PuerkitoBio/goquery)
in order to work with HTML responses like with JQuery.
##### Concurrency
In order to support concurrency well, you should reset the state and isolate each scenario. Scenarios should not share any state. It is suggested to run the suite concurrently in order to make sure there is no state corruption and there are no race conditions in the application.
It is also useful to randomize the order of scenario execution, which you can now do with **--random** command option.
**NOTE:** if the suite runs with the concurrency option, it concurrently runs every feature, not scenarios across different features. This gives the flexibility to isolate state per feature. For example, using the
**BeforeFeature** hook, it is possible to spin up a costly service and shut it down only in the **AfterFeature** hook, sharing the service between all scenarios in that feature. It is not advisable though, because you risk introducing a state dependency.
#### Contributions
Feel free to open a pull request. Note, if you wish to contribute an extension to the public API (exported methods or types) -
please open an issue first to discuss whether these changes can be accepted. All backward incompatible changes are and will be treated cautiously.
#### License
**Godog** is licensed under the [three clause BSD license](http://en.wikipedia.org/wiki/BSD_licenses "The three clause BSD license")
**Gherkin** is licensed under the [MIT](https://en.wikipedia.org/wiki/MIT_License "The MIT license") and developed as a part of the [cucumber project](https://cucumber.io/ "Behavior driven development framework")
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package godog is the official Cucumber BDD framework for Golang, it merges specification and test documentation into one cohesive whole.
Godog does not intervene with the standard "go test" command or its behavior.
You can leverage both frameworks to functionally test your application while maintaining all test related source code in *_test.go files.
Godog acts similarly to the go test command. It uses the go compiler and linker tool to produce a test executable. Godog contexts need to be exported the same as Test functions for go test.
For example, imagine you’re about to create the famous UNIX ls command.
Before you begin, you describe how the feature should work, see the example below..
Example:
```
Feature: ls
In order to see the directory structure
As a UNIX user
I need to be able to list the current directory's contents
Scenario:
Given I am in a directory "test"
And I have a file named "foo"
And I have a file named "bar"
When I run ls
Then I should get output:
"""
bar
foo
"""
```
Now, wouldn’t it be cool if something could read this sentence and use it to actually run a test against the ls command? Hey, that’s exactly what this package does!
As you’ll see, Godog is easy to learn, quick to use, and will put the fun back into tests.
Godog was inspired by Behat and Cucumber; the above description is taken from their documentation.
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [func AvailableFormatters() map[string]string](#AvailableFormatters)
* [func BindFlags(prefix string, set *flag.FlagSet, opt *Options)](#BindFlags)
* [func Build(bin string) error](#Build)
* [func FlagSet(opt *Options) *flag.FlagSet](#FlagSet)
* [func Format(name, description string, f FormatterFunc)](#Format)
* [func Run(suite string, contextInitializer func(suite *Suite)) int](#Run)
* [func RunWithOptions(suite string, contextInitializer func(suite *Suite), opt Options) int](#RunWithOptions)
* [func SuiteContext(s *Suite, additionalContextInitializers ...func(suite *Suite))](#SuiteContext)
* [type Formatter](#Formatter)
* [type FormatterFunc](#FormatterFunc)
* + [func FindFmt(name string) FormatterFunc](#FindFmt)
* [type Options](#Options)
* [type StepDef](#StepDef)
* [type Steps](#Steps)
* [type Suite](#Suite)
* + [func (s *Suite) AfterFeature(fn func(*gherkin.Feature))](#Suite.AfterFeature)
+ [func (s *Suite) AfterScenario(fn func(interface{}, error))](#Suite.AfterScenario)
+ [func (s *Suite) AfterStep(fn func(*gherkin.Step, error))](#Suite.AfterStep)
+ [func (s *Suite) AfterSuite(fn func())](#Suite.AfterSuite)
+ [func (s *Suite) BeforeFeature(fn func(*gherkin.Feature))](#Suite.BeforeFeature)
+ [func (s *Suite) BeforeScenario(fn func(interface{}))](#Suite.BeforeScenario)
+ [func (s *Suite) BeforeStep(fn func(*gherkin.Step))](#Suite.BeforeStep)
+ [func (s *Suite) BeforeSuite(fn func())](#Suite.BeforeSuite)
+ [func (s *Suite) Step(expr interface{}, stepFunc interface{})](#Suite.Step)
### Constants [¶](#pkg-constants)
```
const Version = "v0.7.13"
```
Version of package, based on Semantic Versioning 2.0.0: <http://semver.org/>
### Variables [¶](#pkg-variables)
```
var ErrPending = [fmt](/fmt).[Errorf](/fmt#Errorf)("step implementation is pending")
```
ErrPending should be returned by a step definition when the step implementation is pending
```
var ErrUndefined = [fmt](/fmt).[Errorf](/fmt#Errorf)("step is undefined")
```
ErrUndefined is returned when a step definition was not found
### Functions [¶](#pkg-functions)
####
func [AvailableFormatters](https://github.com/DATA-DOG/godog/blob/v0.7.13/fmt.go#L83) [¶](#AvailableFormatters)
added in v0.6.0
```
func AvailableFormatters() map[[string](/builtin#string)][string](/builtin#string)
```
AvailableFormatters returns a map of all registered formatters, with the formatter name as key and its description as value
####
func [BindFlags](https://github.com/DATA-DOG/godog/blob/v0.7.13/flags.go#L47) [¶](#BindFlags)
added in v0.7.7
```
func BindFlags(prefix [string](/builtin#string), set *[flag](/flag).[FlagSet](/flag#FlagSet), opt *[Options](#Options))
```
BindFlags binds godog flags to the given flag set, prefixed by the given prefix, without overriding usage
####
func [Build](https://github.com/DATA-DOG/godog/blob/v0.7.13/builder_go110.go#L81) [¶](#Build)
```
func Build(bin [string](/builtin#string)) [error](/builtin#error)
```
Build creates a test package, like the go test command, at the given target path.
If there are no go files in the tested directory, it simply builds a godog executable to scan features.
If there are go test files, it first builds a test package with the standard go test command.
Finally, it generates the godog suite executable, which registers the exported godog contexts from the test files of the tested package.
Returns the path to the generated executable
####
func [FlagSet](https://github.com/DATA-DOG/godog/blob/v0.7.13/flags.go#L38) [¶](#FlagSet)
added in v0.4.3
```
func FlagSet(opt *[Options](#Options)) *[flag](/flag).[FlagSet](/flag#FlagSet)
```
FlagSet builds a flag.FlagSet with godog flags bound, allowing an external suite runner to manage them
####
func [Format](https://github.com/DATA-DOG/godog/blob/v0.7.13/fmt.go#L72) [¶](#Format)
added in v0.2.1
```
func Format(name, description [string](/builtin#string), f [FormatterFunc](#FormatterFunc))
```
Format registers a feature suite output formatter by the given name, description and FormatterFunc constructor function, used to initialize the formatter with the output recorder.
####
func [Run](https://github.com/DATA-DOG/godog/blob/v0.7.13/run.go#L217) [¶](#Run)
added in v0.2.1
```
func Run(suite [string](/builtin#string), contextInitializer func(suite *[Suite](#Suite))) [int](/builtin#int)
```
Run creates and runs the feature suite.
It reads all configuration options from flags and uses contextInitializer to register contexts.
The concurrency option allows the runner to initialize a number of suites to be run separately; only the progress formatter is supported when the concurrency level is higher than 1.
contextInitializer must be able to register the step definitions and event handlers.
The exit codes may vary from:
```
0 - success
1 - failed
2 - command line usage error
128 or higher - OS signal related error exit codes
```
If there are flag-related errors, they will be directed to os.Stderr
####
func [RunWithOptions](https://github.com/DATA-DOG/godog/blob/v0.7.13/run.go#L98) [¶](#RunWithOptions)
added in v0.6.0
```
func RunWithOptions(suite [string](/builtin#string), contextInitializer func(suite *[Suite](#Suite)), opt [Options](#Options)) [int](/builtin#int)
```
RunWithOptions is the same as the Run function, except it uses the provided Options to run the test suite without parsing flags.
This is useful if you run godog from, for example, a TestMain function together with regular go tests.
The exit codes may vary from:
```
0 - success
1 - failed
2 - command line usage error
128 or higher - OS signal related error exit codes
```
If there are flag-related errors, they will be directed to os.Stderr
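When godog runs inside TestMain alongside regular go tests, a common convention is to exit with the higher of the two statuses so a failure in either suite is reported. A stdlib-only sketch of that combination; `combineStatus` is an illustrative helper (in real code the two inputs would come from godog.RunWithOptions and (*testing.M).Run), and the 0/1 statuses below are hypothetical:

```go
package main

import "fmt"

// combineStatus mirrors the common TestMain pattern: exit with the higher
// of godog's status and go test's status so either failure is reported.
func combineStatus(godogStatus, goTestStatus int) int {
	if goTestStatus > godogStatus {
		return goTestStatus
	}
	return godogStatus
}

func main() {
	// Hypothetical results: godog suite passed (0), a go test failed (1).
	fmt.Println("exit status:", combineStatus(0, 1))
}
```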
####
func [SuiteContext](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite_context.go#L25) [¶](#SuiteContext)
added in v0.7.3
```
func SuiteContext(s *[Suite](#Suite), additionalContextInitializers ...func(suite *[Suite](#Suite)))
```
SuiteContext provides steps for godog suite execution and can be used for meta-testing of godog features/steps themselves.
Beware, steps or their definitions might change without backward compatibility guarantees. A typical user of the godog library should never need this, rather it is provided for those developing add-on libraries for godog.
For an example of how to use, see godog's own `features/` and `suite_test.go`.
### Types [¶](#pkg-types)
####
type [Formatter](https://github.com/DATA-DOG/godog/blob/v0.7.13/fmt.go#L98) [¶](#Formatter)
```
type Formatter interface {
Feature(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Feature](/github.com/DATA-DOG/[email protected]/gherkin#Feature), [string](/builtin#string), [][byte](/builtin#byte))
Node(interface{})
Defined(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Step](/github.com/DATA-DOG/[email protected]/gherkin#Step), *[StepDef](#StepDef))
Failed(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Step](/github.com/DATA-DOG/[email protected]/gherkin#Step), *[StepDef](#StepDef), [error](/builtin#error))
Passed(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Step](/github.com/DATA-DOG/[email protected]/gherkin#Step), *[StepDef](#StepDef))
Skipped(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Step](/github.com/DATA-DOG/[email protected]/gherkin#Step), *[StepDef](#StepDef))
Undefined(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Step](/github.com/DATA-DOG/[email protected]/gherkin#Step), *[StepDef](#StepDef))
Pending(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Step](/github.com/DATA-DOG/[email protected]/gherkin#Step), *[StepDef](#StepDef))
Summary()
}
```
Formatter is an interface for feature runner output summary presentation.
New formatters may be created to represent suite results in different ways. These new formatters need to be registered with a godog.Format function call
####
type [FormatterFunc](https://github.com/DATA-DOG/godog/blob/v0.7.13/fmt.go#L112) [¶](#FormatterFunc)
added in v0.6.0
```
type FormatterFunc func([string](/builtin#string), [io](/io).[Writer](/io#Writer)) [Formatter](#Formatter)
```
FormatterFunc builds a formatter with the given suite name and io.Writer to record output
####
func [FindFmt](https://github.com/DATA-DOG/godog/blob/v0.7.13/fmt.go#L59) [¶](#FindFmt)
added in v0.7.7
```
func FindFmt(name [string](/builtin#string)) [FormatterFunc](#FormatterFunc)
```
FindFmt searches the registered formatters and returns the FormatterFunc matching the given format name, or nil if none matches
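The registry behind Format and FindFmt is essentially a name-keyed map of constructors. A simplified, stdlib-only sketch; the `formatterFunc` signature here is reduced for illustration (the real FormatterFunc takes a suite name and an io.Writer), and `format`/`findFmt` are illustrative stand-ins:

```go
package main

import "fmt"

// formatterFunc is a simplified stand-in for godog.FormatterFunc:
// a constructor registered under a name.
type formatterFunc func(suite string) string

// registry maps formatter names to constructors, like godog's internal registry.
var registry = map[string]formatterFunc{}

// format registers a constructor under a name, like godog.Format.
func format(name string, f formatterFunc) { registry[name] = f }

// findFmt returns the registered constructor, or nil when the name
// is unknown, matching FindFmt's contract.
func findFmt(name string) formatterFunc { return registry[name] }

func main() {
	format("progress", func(suite string) string { return "progress: " + suite })
	fmt.Println(findFmt("progress")("godogs")) // prints: progress: godogs
	fmt.Println(findFmt("unknown") == nil)     // prints: true
}
```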
####
type [Options](https://github.com/DATA-DOG/godog/blob/v0.7.13/options.go#L14) [¶](#Options)
added in v0.6.0
```
type Options struct {
// Print step definitions found and exit
ShowStepDefinitions [bool](/builtin#bool)
// Randomize, if not `0`, will be used to run scenarios in a random order.
//
// Randomizing scenario order is especially helpful for detecting
// situations where you have state leaking between scenarios, which can
// cause flickering or fragile tests.
//
// The default value of `0` means "do not randomize".
//
// The magic value of `-1` means "pick a random seed for me", and godog will
// assign a seed on it's own during the `RunWithOptions` phase, similar to if
// you specified `--random` on the command line.
//
// Any other value will be used as the random seed for shuffling. Re-using the
// same seed will allow you to reproduce the shuffle order of a previous run
// to isolate an error condition.
Randomize [int64](/builtin#int64)
// Stops on the first failure
StopOnFailure [bool](/builtin#bool)
// Fail suite when there are pending or undefined steps
Strict [bool](/builtin#bool)
// Forces ansi color stripping
NoColors [bool](/builtin#bool)
// Various filters for scenarios parsed
// from feature files
Tags [string](/builtin#string)
// The formatter name
Format [string](/builtin#string)
// Concurrency rate, not all formatters accepts this
Concurrency [int](/builtin#int)
// All feature file paths
Paths [][string](/builtin#string)
// Where it should print formatter output
Output [io](/io).[Writer](/io#Writer)
}
```
Options are the suite run options; command line flags are mapped to these options.
They can also be used together with godog.RunWithOptions to run a test suite from Go source directly.
See the flags for more details.
####
type [StepDef](https://github.com/DATA-DOG/godog/blob/v0.7.13/stepdef.go#L42) [¶](#StepDef)
```
type StepDef struct {
Expr *[regexp](/regexp).[Regexp](/regexp#Regexp)
Handler interface{}
// contains filtered or unexported fields
}
```
StepDef is a registered step definition. It contains a StepHandler and a regexp used to match a step, along with the arguments matched by the last executed step.
This structure is passed to the formatter when a step is matched and is either failed or successful
####
type [Steps](https://github.com/DATA-DOG/godog/blob/v0.7.13/stepdef.go#L32) [¶](#Steps)
added in v0.7.0
```
type Steps [][string](/builtin#string)
```
Steps allows nesting steps: instead of returning an error, a step function may return a combined list of steps:
```
func multistep(name string) godog.Steps {
return godog.Steps{
fmt.Sprintf(`an user named "%s"`, name),
fmt.Sprintf(`user "%s" is authenticated`, name),
}
}
```
These steps will be matched and executed in sequential order. The first one that fails will result in the main step's failure.
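That sequential, fail-fast execution can be sketched with a stdlib-only model; the `runSteps` helper and the `impl` map are illustrative stand-ins for godog's internal step lookup, not godog API:

```go
package main

import (
	"errors"
	"fmt"
)

// runSteps executes sub-steps in order, stopping at the first failure,
// which is how godog treats steps returned via godog.Steps.
func runSteps(steps []string, impl map[string]func() error) error {
	for _, s := range steps {
		fn, ok := impl[s]
		if !ok {
			return errors.New("undefined step: " + s)
		}
		if err := fn(); err != nil {
			return fmt.Errorf("%s: %w", s, err)
		}
	}
	return nil
}

func main() {
	impl := map[string]func() error{
		`an user named "john"`:         func() error { return nil },
		`user "john" is authenticated`: func() error { return errors.New("bad password") },
	}
	// The second sub-step fails, so the combined step fails with its error.
	err := runSteps([]string{`an user named "john"`, `user "john" is authenticated`}, impl)
	fmt.Println(err)
}
```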
####
type [Suite](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite.go#L49) [¶](#Suite)
```
type Suite struct {
// contains filtered or unexported fields
}
```
Suite allows various contexts to register steps and event handlers.
When running a test suite, the instance of Suite is passed to all functions (contexts), which have it as a first and only argument.
Note that event hooks do not catch panic errors, in order to preserve trace information. Only step executions catch panics, since a panic there may be a context-specific error.
####
func (*Suite) [AfterFeature](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite.go#L203) [¶](#Suite.AfterFeature)
added in v0.7.4
```
func (s *[Suite](#Suite)) AfterFeature(fn func(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Feature](/github.com/DATA-DOG/[email protected]/gherkin#Feature)))
```
AfterFeature registers a function or method to be run once after a feature has executed all of its scenarios.
####
func (*Suite) [AfterScenario](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite.go#L197) [¶](#Suite.AfterScenario)
```
func (s *[Suite](#Suite)) AfterScenario(fn func(interface{}, [error](/builtin#error)))
```
AfterScenario registers a function or method to be run after every scenario or scenario outline
The interface argument may be *gherkin.Scenario or *gherkin.ScenarioOutline
####
func (*Suite) [AfterStep](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite.go#L188) [¶](#Suite.AfterStep)
```
func (s *[Suite](#Suite)) AfterStep(fn func(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Step](/github.com/DATA-DOG/[email protected]/gherkin#Step), [error](/builtin#error)))
```
AfterStep registers a function or method to be run after every step.
It may be convenient to return a different kind of error in order to print more state details, which may help in case of step failure.
In some cases, for example when running a headless browser, it is useful to take a screenshot after a failure.
####
func (*Suite) [AfterSuite](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite.go#L209) [¶](#Suite.AfterSuite)
```
func (s *[Suite](#Suite)) AfterSuite(fn func())
```
AfterSuite registers a function or method to be run once after the suite runner finishes
####
func (*Suite) [BeforeFeature](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite.go#L156) [¶](#Suite.BeforeFeature)
added in v0.7.4
```
func (s *[Suite](#Suite)) BeforeFeature(fn func(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Feature](/github.com/DATA-DOG/[email protected]/gherkin#Feature)))
```
BeforeFeature registers a function or method to be run once before every feature execution.
If godog is run with the concurrency option, every feature runs in its own goroutine, so the user may choose whether to isolate state within the feature context or per scenario.
Best practice is to avoid state dependencies between scenarios, but in some cases, for example when a VM needs to be started, restarting it for each scenario may take very long.
Use it wisely and avoid sharing state between scenarios.
####
func (*Suite) [BeforeScenario](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite.go#L169) [¶](#Suite.BeforeScenario)
```
func (s *[Suite](#Suite)) BeforeScenario(fn func(interface{}))
```
BeforeScenario registers a function or method to be run before every scenario or scenario outline.
The interface argument may be *gherkin.Scenario or *gherkin.ScenarioOutline
It is a good practice to restore the default state before every scenario so it would be isolated from any kind of state.
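A minimal sketch of that reset-per-scenario pattern; the `apiFeature` context and its fields are hypothetical, and in godog you would register `reset` via s.BeforeScenario rather than calling it in a loop:

```go
package main

import "fmt"

// apiFeature is a hypothetical shared test context; resetting it before
// each scenario keeps scenarios isolated, as BeforeScenario recommends.
type apiFeature struct {
	createdFiles []string
}

// reset restores the default state, discarding anything a scenario left behind.
func (a *apiFeature) reset() { a.createdFiles = nil }

func main() {
	api := &apiFeature{}
	for _, scenario := range []string{"first scenario", "second scenario"} {
		api.reset() // the body you would register via s.BeforeScenario
		api.createdFiles = append(api.createdFiles, "out.txt")
		// Each scenario sees exactly one file, not leftovers from the previous one.
		fmt.Println(scenario, "sees", len(api.createdFiles), "file")
	}
}
```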
####
func (*Suite) [BeforeStep](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite.go#L175) [¶](#Suite.BeforeStep)
```
func (s *[Suite](#Suite)) BeforeStep(fn func(*[gherkin](/github.com/DATA-DOG/[email protected]/gherkin).[Step](/github.com/DATA-DOG/[email protected]/gherkin#Step)))
```
BeforeStep registers a function or method to be run before every step
####
func (*Suite) [BeforeSuite](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite.go#L138) [¶](#Suite.BeforeSuite)
```
func (s *[Suite](#Suite)) BeforeSuite(fn func())
```
BeforeSuite registers a function or method to be run once before the suite runner.
Use it to prepare the test suite for a spin, for instance to connect to and prepare a database.
####
func (*Suite) [Step](https://github.com/DATA-DOG/godog/blob/v0.7.13/suite.go#L85) [¶](#Suite.Step)
```
func (s *[Suite](#Suite)) Step(expr interface{}, stepFunc interface{})
```
Step registers a *StepDef in the Godog feature suite; the definition will be applied to all steps matching the given regexp expr.
It will panic if expr is not a valid regular expression or stepFunc is not a valid step handler.
Note that if there are two definitions which may match the same step, then only the first matched handler will be applied.
If no *StepDef matches, an ErrUndefined error will be returned when running the steps.
Haxe API documentation
======================
Haxe is an open source toolkit based on a modern, high level, strictly typed programming language, a cross-compiler, a complete cross-platform standard library and ways to access each platform's native capabilities.
Getting Started With Haxe
-------------------------
* Take a look at our [introduction](https://haxe.org/documentation/introduction/)
* Read through the [Haxe Manual](https://haxe.org/manual/)
* Look at these [use cases for Haxe](https://haxe.org/use-cases/)
* Find and install [popular Haxe libraries](https://lib.haxe.org/t/all/)
* Learn by example with the [Haxe Code Cookbook](http://code.haxe.org)
Top Level
---------
| |
| --- |
| [cpp](https://api.haxe.org/cpp/index.html ".cpp") |
| [cs](https://api.haxe.org/cs/index.html ".cs") |
| [eval](https://api.haxe.org/eval/index.html ".eval") |
| [flash](https://api.haxe.org/flash/index.html ".flash") |
| [haxe](haxe/index ".haxe") |
| [hl](https://api.haxe.org/hl/index.html ".hl") |
| [java](https://api.haxe.org/java/index.html ".java") |
| [js](https://api.haxe.org/js/index.html ".js") |
| [lua](https://api.haxe.org/lua/index.html ".lua") |
| [mbedtls](mbedtls/index ".mbedtls") |
| [neko](https://api.haxe.org/neko/index.html ".neko") |
| [php](https://api.haxe.org/php/index.html ".php") |
| [python](https://api.haxe.org/python/index.html ".python") |
| [sys](https://api.haxe.org/sys/index.html ".sys") |
| [Any](any ".Any") | `[Any](any)` is a type that is compatible with any other in both ways. |
| [Array](array ".Array") | |
| [ArrayAccess](arrayaccess ".ArrayAccess") | `[ArrayAccess](arrayaccess)` is used to indicate a class that can be accessed using brackets. The type parameter represents the type of the elements stored. |
| [Bool](bool ".Bool") | The standard Boolean type, which can either be `[true](bool)` or `[false](bool)`. |
| [Class](class ".Class") | An abstract type that represents a Class. |
| [Date](date ".Date") | The Date class provides a basic structure for date and time related information. Date instances can be created by |
| [DateTools](datetools ".DateTools") | The DateTools class contains some extra functionalities for handling `[Date](date)` instances and timestamps. |
| [Dynamic](dynamic ".Dynamic") | `[Dynamic](dynamic)` is a special type which is compatible with all other types. |
| [EReg](ereg ".EReg") | The EReg class represents regular expressions. |
| [Enum](enum ".Enum") | An abstract type that represents an Enum type. |
| [EnumValue](enumvalue ".EnumValue") | An abstract type that represents any enum value. See `[Type](type)` for the Haxe Reflection API. |
| [Float](float ".Float") | The standard `[Float](float)` type, this is a double-precision IEEE 64bit float. |
| [Int](int ".Int") | The standard `[Int](int)` type. Its precision depends on the platform. |
| [IntIterator](intiterator ".IntIterator") | IntIterator is used for implementing interval iterations. |
| [Iterable](iterable ".Iterable") | An `[Iterable](iterable)` is a data structure which has an `iterator()` method. See `[Lambda](lambda)` for generic functions on iterable structures. |
| [Iterator](iterator ".Iterator") | An `[Iterator](iterator)` is a structure that permits iteration over elements of type `T`. |
| [KeyValueIterable](keyvalueiterable ".KeyValueIterable") | A `[KeyValueIterable](keyvalueiterable)` is a data structure which has a `keyValueIterator()` method to iterate over key-value-pairs. |
| [KeyValueIterator](keyvalueiterator ".KeyValueIterator") | A `[KeyValueIterator](keyvalueiterator)` is an `[Iterator](iterator)` that has a key and a value. |
| [Lambda](lambda ".Lambda") | The `[Lambda](lambda)` class is a collection of methods to support functional programming. It is ideally used with `using [Lambda](lambda)` and then acts as an extension to Iterable types. |
| [List](list ".List") | |
| [Map](map ".Map") | |
| [Math](math ".Math") | This class defines mathematical functions and constants. |
| [Null](null ".Null") | `[Null](null)<T>` is a wrapper that can be used to make the basic types `[Int](int)`, `[Float](float)` and `[Bool](bool)` nullable on static targets. |
| [Reflect](reflect ".Reflect") | The Reflect API is a way to manipulate values dynamically through an abstract interface in an untyped manner. Use with care. |
| [Single](single ".Single") | Single-precision IEEE 32bit float (4-byte). |
| [Std](std ".Std") | The Std class provides standard methods for manipulating basic types. |
| [String](string ".String") | The basic String class. |
| [StringBuf](stringbuf ".StringBuf") | A String buffer is an efficient way to build a big string by appending small elements together. |
| [StringTools](stringtools ".StringTools") | This class provides advanced methods on Strings. It is ideally used with `using [StringTools](stringtools)` and then acts as an [extension](https://haxe.org/manual/lf-static-extension.html) to the `[String](string)` class. |
| [Sys](https://api.haxe.org/Sys.html ".Sys") | This class provides access to various base functions of system platforms. Look in the `sys` package for more system APIs. |
| [SysError](https://api.haxe.org/SysError.html ".SysError") | |
| [Type](type ".Type") | The Haxe Reflection API allows retrieval of type information at runtime. |
| [UInt](uint ".UInt") | The unsigned `[Int](int)` type is only defined for Flash and C#. Simulate it for other platforms. |
| [UnicodeString](unicodestring ".UnicodeString") | This abstract provides consistent cross-target unicode support. |
| [ValueType](valuetype ".ValueType") | The different possible runtime types of a value. |
| [Void](void ".Void") | The standard `[Void](void)` type. Only `null` values can be of the type `[Void](void)`. |
| [Xml](xml ".Xml") | Cross-platform Xml API. |
| [XmlType](xmltype ".XmlType") | Xml node types. |
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/>
Any([Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."))
========================================================================================================
[no package](index)
*Available on all platforms*
`[Any](any)` is a type that is compatible with any other in both ways.
This means that a value of any type can be assigned to `[Any](any)`, and vice-versa, a value of `[Any](any)` type can be assigned to any other type.
It's a more type-safe alternative to `[Dynamic](dynamic)`, because it doesn't support field access or operators and it's bound to monomorphs. So, to work with the actual value, it needs to be explicitly promoted to another type.
<https://api.haxe.org/Any.html>

ArrayAccess<T>
===============
[no package](index)
extended by [IList_1](https://api.haxe.org/cs/system/collections/generic/IList_1.html "cs.system.collections.generic.IList_1"), [IDictionary](https://api.haxe.org/cs/system/collections/IDictionary.html "cs.system.collections.IDictionary"), [IList](https://api.haxe.org/cs/system/collections/IList.html "cs.system.collections.IList")
*Available on all platforms*
`[ArrayAccess](arrayaccess)` is used to indicate a class that can be accessed using brackets. The type parameter represents the type of the elements stored.
This interface should be used for externs only. Haxe does not support custom array access on classes. However, array access can be implemented for abstract types.
See also:
* <https://haxe.org/manual/types-abstract-array-access.html>

<https://api.haxe.org/ArrayAccess.html>

Bool
=====
[no package](index)
*Available on all platforms*
The standard Boolean type, which can either be `[true](bool)` or `[false](bool)`.
On static targets, `null` cannot be assigned to `[Bool](bool)`. If this is necessary, `[Null](null)<[Bool](bool)>` can be used instead.
See also:
* <https://haxe.org/manual/types-bool.html>
* <https://haxe.org/manual/types-nullability.html>

<https://api.haxe.org/Bool.html>

Class<T>
=========
[no package](index)
*Available on all platforms*
An abstract type that represents a Class.
See `[Type](type)` for the Haxe Reflection API.
See also:
* <https://haxe.org/manual/types-class-instance.html>

<https://api.haxe.org/Class.html>

DateTools
==========
[no package](index)
*Available on all platforms*
The DateTools class contains some extra functionalities for handling `[Date](date)` instances and timestamps.
In the context of Haxe dates, a timestamp is defined as the number of milliseconds elapsed since 1st January 1970.
Static methods
--------------
### `staticinline[days](#days)(n:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Converts a number of days to a timestamp.
### `staticinline[delta](#delta)(d:[Date](date "Date - The Date class provides a basic structure for date and time related information."), t:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Date](date "Date - The Date class provides a basic structure for date and time related information.")`
Returns the result of adding timestamp `t` to Date `d`.
This is a convenience function for calling `[Date.fromTime](date#fromTime)(d.getTime() + t)`.
### `static[format](#format)(d:[Date](date "Date - The Date class provides a basic structure for date and time related information."), f:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Format the date `d` according to the format `f`. The format is compatible with the `strftime` standard format, except that there is no support in Flash and JS for day and months names (due to lack of proper internationalization API). On Haxe/Neko/Windows, some formats are not supported.
```
var t = DateTools.format(Date.now(), "%Y-%m-%d_%H:%M:%S");
// 2016-07-08_14:44:05
var t = DateTools.format(Date.now(), "%r");
// 02:44:05 PM
var t = DateTools.format(Date.now(), "%T");
// 14:44:05
var t = DateTools.format(Date.now(), "%F");
// 2016-07-08
```
### `static[getMonthDays](#getMonthDays)(d:[Date](date "Date - The Date class provides a basic structure for date and time related information.")):[Int](int "Int - The standard Int type.")`
Returns the number of days in the month of Date `d`.
This method handles leap years.
### `staticinline[hours](#hours)(n:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Converts a number of hours to a timestamp.
### `static[make](#make)(o:{seconds:[Int](int "Int - The standard Int type."), ms:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float."), minutes:[Int](int "Int - The standard Int type."), hours:[Int](int "Int - The standard Int type."), days:[Int](int "Int - The standard Int type.")}):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Build a date-time from several components
### `static[makeUtc](#makeUtc)(year:[Int](int "Int - The standard Int type."), month:[Int](int "Int - The standard Int type."), day:[Int](int "Int - The standard Int type."), hour:[Int](int "Int - The standard Int type."), min:[Int](int "Int - The standard Int type."), sec:[Int](int "Int - The standard Int type.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
*Available on php, js, cpp, python, flash*
Retrieve Unix timestamp value from Date components. Takes same argument sequence as the Date constructor.
### `staticinline[minutes](#minutes)(n:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Converts a number of minutes to a timestamp.
### `static[parse](#parse)(t:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):{seconds:[Int](int "Int - The standard Int type."), ms:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float."), minutes:[Int](int "Int - The standard Int type."), hours:[Int](int "Int - The standard Int type."), days:[Int](int "Int - The standard Int type.")}`
Separate a date-time into several components
### `staticinline[seconds](#seconds)(n:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Converts a number of seconds to a timestamp.
<https://api.haxe.org/DateTools.html>

Int
====
[no package](index)
to [Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")
*Available on all platforms*
The standard `[Int](int)` type. Its precision depends on the platform.
On static targets, `null` cannot be assigned to `[Int](int)`. If this is necessary, `[Null](null)<[Int](int)>` can be used instead.
`[Std.int](std#int)` converts a `[Float](float)` to an `[Int](int)`, rounded towards 0. `[Std.parseInt](std#parseInt)` converts a `[String](string)` to an `[Int](int)`.
See also:
* <https://haxe.org/manual/types-basic-types.html>
* <https://haxe.org/manual/std-math-integer-math.html>
* <https://haxe.org/manual/types-nullability.html>

<https://api.haxe.org/Int.html>

Iterable<T>
============
[no package](index)
*Available on all platforms*
An `[Iterable](iterable)` is a data structure which has an `iterator()` method. See `[Lambda](lambda)` for generic functions on iterable structures.
See also:
* <https://haxe.org/manual/lf-iterators.html>

Fields
------
### `[iterator](#iterator)():[Iterator](iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<T>`
<https://api.haxe.org/Iterable.html>

Array<T>
=========
[no package](index)
extended by [RegExpMatch](https://api.haxe.org/js/lib/RegExpMatch.html "js.lib.RegExpMatch - A return value of the RegExp.")
*Available on all platforms*
Constructor
-----------
### `[new](#new)()`
Creates a new Array.
Variables
---------
### `read only[length](#length):[Int](int "Int - The standard Int type.")`
The length of `this` Array.
Methods
-------
### `[concat](#concat)(a:[Array](array "Array")<T>):[Array](array "Array")<T>`
Returns a new Array by appending the elements of `a` to the elements of `this` Array.
This operation does not modify `this` Array.
If `a` is the empty Array `[]`, a copy of `this` Array is returned.
The length of the returned Array is equal to the sum of `this.[length](#length)` and `a.length`.
If `a` is `null`, the result is unspecified.
### `[contains](#contains)(x:T):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns whether `this` Array contains `x`.
If `x` is found by checking standard equality, the function returns `[true](bool)`, otherwise the function returns `[false](bool)`.
### `[copy](#copy)():[Array](array "Array")<T>`
Returns a shallow copy of `this` Array.
The elements are not copied and retain their identity, so `a[i] == a.copy()[i]` is true for any valid `i`. However, `a == a.copy()` is always false.
### `inline[filter](#filter)(f:T ‑> [Bool](bool "Bool - The standard Boolean type, which can either be true or false.")):[Array](array "Array")<T>`
Returns an Array containing those elements of `this` for which `f` returned true.
The individual elements are not duplicated and retain their identity.
If `f` is null, the result is unspecified.
### `[indexOf](#indexOf)(x:T, ?fromIndex:[Int](int "Int - The standard Int type.")):[Int](int "Int - The standard Int type.")`
Returns position of the first occurrence of `x` in `this` Array, searching front to back.
If `x` is found by checking standard equality, the function returns its index.
If `x` is not found, the function returns -1.
If `fromIndex` is specified, it will be used as the starting index to search from, otherwise search starts with zero index. If it is negative, it will be taken as the offset from the end of `this` Array to compute the starting index. If given or computed starting index is less than 0, the whole array will be searched, if it is greater than or equal to the length of `this` Array, the function returns -1.
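A short example covering the `fromIndex` rules above:

```haxe
var a = [1, 2, 3, 2];
trace(a.indexOf(2));     // 1
trace(a.indexOf(2, 2));  // 3
trace(a.indexOf(2, -2)); // 3 (starting index is 4 - 2 = 2)
trace(a.indexOf(5));     // -1
```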
### `[insert](#insert)(pos:[Int](int "Int - The standard Int type."), x:T):[Void](void "Void - The standard Void type.")`
Inserts the element `x` at the position `pos`.
This operation modifies `this` Array in place.
The offset is calculated like so:
* If `pos` exceeds `this.[length](#length)`, the offset is `this.[length](#length)`.
* If `pos` is negative, the offset is calculated from the end of `this` Array, i.e. `this.[length](#length) + pos`. If this yields a negative value, the offset is 0.
* Otherwise, the offset is `pos`.
If the resulting offset does not exceed `this.[length](#length)`, all elements from and including that offset to the end of `this` Array are moved one index ahead.
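For illustration, the offset rules above applied to a small Array:

```haxe
var a = [1, 2, 4];
a.insert(2, 3);  // offset is 2
trace(a);        // [1,2,3,4]
a.insert(-1, 9); // offset is 4 + (-1) = 3
trace(a);        // [1,2,3,9,4]
```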
### `inline[iterator](#iterator)():[ArrayIterator](haxe/iterators/arrayiterator "haxe.iterators.ArrayIterator")<T>`
Returns an iterator of the Array values.
### `[join](#join)(sep:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Returns a string representation of `this` Array, with `sep` separating each element.
The result of this operation is equal to `[Std.string](std#string)(this[0]) + sep + [Std.string](std#string)(this[1]) + sep + ... + sep + [Std.string](std#string)(this[this.[length](#length)-1])`.
If `this` is the empty Array `[]`, the result is the empty String `""`. If `this` has exactly one element, the result is equal to a call to `[Std.string](std#string)(this[0])`.
If `sep` is null, the result is unspecified.
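A minimal example of `join`:

```haxe
var a = [1, 2, 3];
trace(a.join("-")); // 1-2-3
var e:Array<Int> = [];
trace(e.join(",")); // (the empty String "")
```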
### `inline[keyValueIterator](#keyValueIterator)():[ArrayKeyValueIterator](haxe/iterators/arraykeyvalueiterator "haxe.iterators.ArrayKeyValueIterator")<T>`
Returns an iterator of the Array indices and values.
### `[lastIndexOf](#lastIndexOf)(x:T, ?fromIndex:[Int](int "Int - The standard Int type.")):[Int](int "Int - The standard Int type.")`
Returns position of the last occurrence of `x` in `this` Array, searching back to front.
If `x` is found by checking standard equality, the function returns its index.
If `x` is not found, the function returns -1.
If `fromIndex` is specified, it will be used as the starting index to search from, otherwise search starts with the last element index. If it is negative, it will be taken as the offset from the end of `this` Array to compute the starting index. If given or computed starting index is greater than or equal to the length of `this` Array, the whole array will be searched, if it is less than 0, the function returns -1.
### `inline[map](#map)<S>(f:T ‑> S):[Array](array "Array")<S>`
Creates a new Array by applying function `f` to all elements of `this`.
The order of elements is preserved.
If `f` is null, the result is unspecified.
### `[pop](#pop)():[Null](null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
Removes the last element of `this` Array and returns it.
This operation modifies `this` Array in place.
If `this` has at least one element, `this.[length](#length)` will decrease by 1.
If `this` is the empty Array `[]`, null is returned and the length remains 0.
### `[push](#push)(x:T):[Int](int "Int - The standard Int type.")`
Adds the element `x` at the end of `this` Array and returns the new length of `this` Array.
This operation modifies `this` Array in place.
`this.[length](#length)` increases by 1.
### `[remove](#remove)(x:T):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Removes the first occurrence of `x` in `this` Array.
This operation modifies `this` Array in place.
If `x` is found by checking standard equality, it is removed from `this` Array and all following elements are reindexed accordingly. The function then returns true.
If `x` is not found, `this` Array is not changed and the function returns false.
### `[resize](#resize)(len:[Int](int "Int - The standard Int type.")):[Void](void "Void - The standard Void type.")`
Set the length of the Array.
If `len` is shorter than the array's current size, the last `length - len` elements will be removed. If `len` is longer, the Array will be extended, with new elements set to a target-specific default value:
* always null on dynamic targets
* 0, 0.0 or false for Int, Float and Bool respectively on static targets
* null for other types on static targets
### `[reverse](#reverse)():[Void](void "Void - The standard Void type.")`
Reverse the order of elements of `this` Array.
This operation modifies `this` Array in place.
If `this.[length](#length) < 2`, `this` remains unchanged.
### `[shift](#shift)():[Null](null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
Removes the first element of `this` Array and returns it.
This operation modifies `this` Array in place.
If `this` has at least one element, `this`.length and the index of each remaining element is decreased by 1.
If `this` is the empty Array `[]`, `null` is returned and the length remains 0.
### `[slice](#slice)(pos:[Int](int "Int - The standard Int type."), ?end:[Int](int "Int - The standard Int type.")):[Array](array "Array")<T>`
Creates a shallow copy of the range of `this` Array, starting at and including `pos`, up to but not including `end`.
This operation does not modify `this` Array.
The elements are not copied and retain their identity.
If `end` is omitted or exceeds `this.[length](#length)`, it defaults to the end of `this` Array.
If `pos` or `end` are negative, their offsets are calculated from the end of `this` Array by `this.[length](#length) + pos` and `this.[length](#length) + end` respectively. If this yields a negative value, 0 is used instead.
If `pos` exceeds `this.[length](#length)` or if `end` is less than or equals `pos`, the result is `[]`.
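A short example of the `pos` and `end` rules above:

```haxe
var a = [0, 1, 2, 3, 4];
trace(a.slice(1, 3)); // [1,2]
trace(a.slice(-2));   // [3,4] (pos is 5 + (-2) = 3, end defaults to the end)
trace(a.slice(3, 1)); // [] (end is less than pos)
trace(a);             // [0,1,2,3,4], unchanged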
### `[sort](#sort)(f:(T, T) ‑> [Int](int "Int - The standard Int type.")):[Void](void "Void - The standard Void type.")`
Sorts `this` Array according to the comparison function `f`, where `f(x,y)` returns 0 if x == y, a positive Int if x > y and a negative Int if x < y.
This operation modifies `this` Array in place.
The sort operation is not guaranteed to be stable, which means that the order of equal elements may not be retained. For a stable Array sorting algorithm, `[haxe.ds.ArraySort.sort](haxe/ds/arraysort#sort)()` can be used instead.
If `f` is null, the result is unspecified.
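For illustration, an ascending sort using a standard numeric comparison function:

```haxe
var a = [3, 1, 2];
a.sort((x, y) -> x - y); // negative if x < y, 0 if equal, positive if x > y
trace(a); // [1,2,3]
```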
### `[splice](#splice)(pos:[Int](int "Int - The standard Int type."), len:[Int](int "Int - The standard Int type.")):[Array](array "Array")<T>`
Removes `len` elements from `this` Array, starting at and including `pos`, and returns them.
This operation modifies `this` Array in place.
If `len` is negative or `pos` exceeds `this.[length](#length)`, an empty Array `[]` is returned and `this` Array is unchanged.
If `pos` is negative, its value is calculated from the end of `this` Array by `this.[length](#length) + pos`. If this yields a negative value, 0 is used instead.
If the sum of the resulting values for `len` and `pos` exceed `this.[length](#length)`, this operation will affect the elements from `pos` to the end of `this` Array.
The length of the returned Array is equal to the new length of `this` Array subtracted from the original length of `this` Array. In other words, each element of the original `this` Array either remains in `this` Array or becomes an element of the returned Array.
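A minimal example of the behavior described above:

```haxe
var a = [0, 1, 2, 3, 4];
var removed = a.splice(1, 2); // remove 2 elements starting at index 1
trace(removed); // [1,2]
trace(a);       // [0,3,4]
```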
### `[toString](#toString)():[String](string "String - The basic String class.")`
Returns a string representation of `this` Array.
The result will include the individual elements' String representations separated by comma. The enclosing [ ] may be missing on some platforms, use `[Std.string](std#string)()` to get a String representation that is consistent across platforms.
### `[unshift](#unshift)(x:T):[Void](void "Void - The standard Void type.")`
Adds the element `x` at the start of `this` Array.
This operation modifies `this` Array in place.
`this.[length](#length)` and the index of each Array element increases by 1.
<https://api.haxe.org/Array.html>
Date
====
[no package](index)
*Available on all platforms*
The Date class provides a basic structure for date and time related information. Date instances can be created by
* `new [Date](date)()` for a specific date,
* `[Date.now](date#now)()` to obtain information about the current time,
* `[Date.fromTime](date#fromTime)()` with a given timestamp or
* `[Date.fromString](date#fromString)()` by parsing from a String.
There are some extra functions available in the `[DateTools](datetools)` class.
In the context of Haxe dates, a timestamp is defined as the number of milliseconds elapsed since 1st January 1970 UTC.
Supported range
---------------
Due to platform limitations, only dates in the range 1970 through 2038 are supported consistently. Some targets may support dates outside this range, depending on the OS at runtime. The `[Date.fromTime](date#fromTime)` method will not work with timestamps outside the range on any target.
Static methods
--------------
### `static[fromString](#fromString)(s:[String](string "String - The basic String class.")):[Date](date "Date - The Date class provides a basic structure for date and time related information.")`
Creates a Date from the formatted string `s`. The following formats are accepted by the function:
* `"YYYY-MM-DD hh:mm:ss"`
* `"YYYY-MM-DD"`
* `"hh:mm:ss"`
The first two formats express a date in local time. The third is a time relative to the UTC epoch.
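For illustration, parsing the first format (note the zero-based month in the accessors):

```haxe
var d = Date.fromString("2020-05-01 12:30:00");
trace(d.getFullYear()); // 2020
trace(d.getMonth());    // 4 (months are zero-based)
trace(d.getDate());     // 1
```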
### `static[fromTime](#fromTime)(t:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Date](date "Date - The Date class provides a basic structure for date and time related information.")`
Creates a Date from the timestamp (in milliseconds) `t`.
### `static[now](#now)():[Date](date "Date - The Date class provides a basic structure for date and time related information.")`
Returns a Date representing the current local time.
Constructor
-----------
### `[new](#new)(year:[Int](int "Int - The standard Int type."), month:[Int](int "Int - The standard Int type."), day:[Int](int "Int - The standard Int type."), hour:[Int](int "Int - The standard Int type."), min:[Int](int "Int - The standard Int type."), sec:[Int](int "Int - The standard Int type."))`
Creates a new date object from the given arguments.
The behaviour of a Date instance is only consistent across platforms if the arguments describe a valid date.
* month: 0 to 11 (note that this is zero-based)
* day: 1 to 31
* hour: 0 to 23
* min: 0 to 59
* sec: 0 to 59
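The argument ranges above can be sketched as follows; the month argument is the one most easily gotten wrong:

```haxe
// 1st of May 2020, 12:30:00, in local time (month 4 means May)
var d = new Date(2020, 4, 1, 12, 30, 0);
trace(d.toString()); // 2020-05-01 12:30:00
```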
Methods
-------
### `[getDate](#getDate)():[Int](int "Int - The standard Int type.")`
Returns the day of `this` Date (1-31 range) in the local timezone.
### `[getDay](#getDay)():[Int](int "Int - The standard Int type.")`
Returns the day of the week of `this` Date (0-6 range, where `0` is Sunday) in the local timezone.
### `[getFullYear](#getFullYear)():[Int](int "Int - The standard Int type.")`
Returns the full year of `this` Date (4 digits) in the local timezone.
### `[getHours](#getHours)():[Int](int "Int - The standard Int type.")`
Returns the hours of `this` Date (0-23 range) in the local timezone.
### `[getMinutes](#getMinutes)():[Int](int "Int - The standard Int type.")`
Returns the minutes of `this` Date (0-59 range) in the local timezone.
### `[getMonth](#getMonth)():[Int](int "Int - The standard Int type.")`
Returns the month of `this` Date (0-11 range) in the local timezone. Note that the month number is zero-based.
### `[getSeconds](#getSeconds)():[Int](int "Int - The standard Int type.")`
Returns the seconds of `this` Date (0-59 range) in the local timezone.
### `[getTime](#getTime)():[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the timestamp (in milliseconds) of `this` date. On cpp and neko, this function only has a second resolution, so the result will always be a multiple of `1000.0`, e.g. `1454698271000.0`. To obtain the current timestamp with better precision on cpp and neko, see the `[Sys.time](https://api.haxe.org/Sys.html#time)` API.
For measuring time differences with millisecond accuracy on all platforms, see `[haxe.Timer.stamp](haxe/timer#stamp)`.
### `[getTimezoneOffset](#getTimezoneOffset)():[Int](int "Int - The standard Int type.")`
Returns the time zone difference of `this` Date in the current locale to UTC, in minutes.
Assuming the function is executed on a machine in a UTC+2 timezone, `[Date.now](date#now)().getTimezoneOffset()` will return `-120`.
### `[getUTCDate](#getUTCDate)():[Int](int "Int - The standard Int type.")`
Returns the day of `this` Date (1-31 range) in UTC.
### `[getUTCDay](#getUTCDay)():[Int](int "Int - The standard Int type.")`
Returns the day of the week of `this` Date (0-6 range, where `0` is Sunday) in UTC.
### `[getUTCFullYear](#getUTCFullYear)():[Int](int "Int - The standard Int type.")`
Returns the full year of `this` Date (4 digits) in UTC.
### `[getUTCHours](#getUTCHours)():[Int](int "Int - The standard Int type.")`
Returns the hours of `this` Date (0-23 range) in UTC.
### `[getUTCMinutes](#getUTCMinutes)():[Int](int "Int - The standard Int type.")`
Returns the minutes of `this` Date (0-59 range) in UTC.
### `[getUTCMonth](#getUTCMonth)():[Int](int "Int - The standard Int type.")`
Returns the month of `this` Date (0-11 range) in UTC. Note that the month number is zero-based.
### `[getUTCSeconds](#getUTCSeconds)():[Int](int "Int - The standard Int type.")`
Returns the seconds of `this` Date (0-59 range) in UTC.
### `[toString](#toString)():[String](string "String - The basic String class.")`
Returns a string representation of `this` Date in the local timezone using the standard format `YYYY-MM-DD HH:MM:SS`. See `[DateTools.format](datetools#format)` for other formatting rules.
<https://api.haxe.org/Date.html>
Dynamic<T>
==========
[no package](index)
*Available on all platforms*
`[Dynamic](dynamic)` is a special type which is compatible with all other types.
Use of `[Dynamic](dynamic)` should be minimized as it prevents several compiler checks and optimizations. See `[Any](any)` type for a safer alternative for representing values of any type.
See also:
* <https://haxe.org/manual/types-dynamic.html>
<https://api.haxe.org/Dynamic.html>
Enum<T>
=======
[no package](index)
*Available on all platforms*
An abstract type that represents an Enum type.
The corresponding enum instance type is `[EnumValue](enumvalue)`.
See `[Type](type)` for the Haxe Reflection API.
See also:
* <https://haxe.org/manual/types-enum-instance.html>
<https://api.haxe.org/Enum.html>
EnumValue
=========
[no package](index)
*Available on all platforms*
An abstract type that represents any enum value. See `[Type](type)` for the Haxe Reflection API.
See also:
* <https://haxe.org/manual/types-enum-instance.html>
Methods
-------
### `[match](#match)(pattern:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Matches enum instance `e` against pattern `pattern`, returning `[true](bool)` if matching succeeded and `[false](bool)` otherwise.
Example usage:
```
if (e.match(pattern)) {
// codeIfTrue
} else {
// codeIfFalse
}
```
This is equivalent to the following code:
```
switch (e) {
case pattern:
// codeIfTrue
case _:
// codeIfFalse
}
```
This method is implemented in the compiler. This definition exists only for documentation.
<https://api.haxe.org/EnumValue.html>
Type
====
[no package](index)
*Available on all platforms*
The Haxe Reflection API allows retrieval of type information at runtime.
This class complements the more lightweight Reflect class, with a focus on class and enum instances.
See also:
* <https://haxe.org/manual/types.html>
* <https://haxe.org/manual/std-reflection.html>
Static methods
--------------
### `static[allEnums](#allEnums)<T>(e:[Enum](enum "Enum - An abstract type that represents an Enum type.")<T>):[Array](array "Array")<T>`
Returns a list of all constructors of enum `e` that require no arguments.
This may return the empty Array `[]` if all constructors of `e` require arguments.
Otherwise an instance of `e` constructed through each of its non-argument constructors is returned, in the order of the constructor declaration.
If `e` is null, the result is unspecified.
### `static[createEmptyInstance](#createEmptyInstance)<T>(cl:[Class](class "Class - An abstract type that represents a Class.")<T>):T`
Creates an instance of class `cl`.
This function guarantees that the class constructor is not called.
If `cl` is null, the result is unspecified.
### `static[createEnum](#createEnum)<T>(e:[Enum](enum "Enum - An abstract type that represents an Enum type.")<T>, constr:[String](string "String - The basic String class."), ?params:[Array](array "Array")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):T`
Creates an instance of enum `e` by calling its constructor `constr` with arguments `params`.
If `e` or `constr` is null, or if enum `e` has no constructor named `constr`, or if the number of elements in `params` does not match the expected number of constructor arguments, or if any argument has an invalid type, the result is unspecified.
### `static[createEnumIndex](#createEnumIndex)<T>(e:[Enum](enum "Enum - An abstract type that represents an Enum type.")<T>, index:[Int](int "Int - The standard Int type."), ?params:[Array](array "Array")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):T`
Creates an instance of enum `e` by calling its constructor number `index` with arguments `params`.
The constructor indices are preserved from Haxe syntax, so the first declared is index 0, the next index 1 etc.
If `e` is null, or if enum `e` has no constructor with index `index`, or if the number of elements in `params` does not match the expected number of constructor arguments, or if any argument has an invalid type, the result is unspecified.
### `static[createInstance](#createInstance)<T>(cl:[Class](class "Class - An abstract type that represents a Class.")<T>, args:[Array](array "Array")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):T`
Creates an instance of class `cl`, using `args` as arguments to the class constructor.
This function guarantees that the class constructor is called.
Default values of constructor arguments are not guaranteed to be taken into account.
If `cl` or `args` are null, or if the number of elements in `args` does not match the expected number of constructor arguments, or if any argument has an invalid type, or if `cl` has no own constructor, the result is unspecified.
### `static[enumConstructor](#enumConstructor)(e:[EnumValue](enumvalue "EnumValue - An abstract type that represents any enum value.")):[String](string "String - The basic String class.")`
Returns the constructor name of enum instance `e`.
The result String does not contain any constructor arguments.
If `e` is null, the result is unspecified.
### `static[enumEq](#enumEq)<T>(a:T, b:T):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Recursively compares two enum instances `a` and `b` by value.
Unlike `a == b`, this function performs a deep equality check on the arguments of the constructors, if any exist.
If `a` or `b` are null, the result is unspecified.
### `static[enumIndex](#enumIndex)(e:[EnumValue](enumvalue "EnumValue - An abstract type that represents any enum value.")):[Int](int "Int - The standard Int type.")`
Returns the index of enum instance `e`.
This corresponds to the original syntactic position of `e`. The index of the first declared constructor is 0, the next one is 1 etc.
If `e` is null, the result is unspecified.
### `static[enumParameters](#enumParameters)(e:[EnumValue](enumvalue "EnumValue - An abstract type that represents any enum value.")):[Array](array "Array")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
Returns a list of the constructor arguments of enum instance `e`.
If `e` has no arguments, the result is [].
Otherwise the result are the values that were used as arguments to `e`, in the order of their declaration.
If `e` is null, the result is unspecified.
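The enum reflection functions above can be sketched together; `Color` is a hypothetical enum introduced only for this example:

```haxe
enum Color {
  Red;
  Rgb(r:Int, g:Int, b:Int);
}

class Main {
  static function main() {
    var c = Type.createEnum(Color, "Rgb", [255, 0, 0]);
    trace(Type.enumConstructor(c)); // Rgb
    trace(Type.enumIndex(c));       // 1 (second declared constructor)
    trace(Type.enumParameters(c));  // [255,0,0]
  }
}
```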
### `static[getClass](#getClass)<T>(o:T):[Class](class "Class - An abstract type that represents a Class.")<T>`
Returns the class of `o`, if `o` is a class instance.
If `o` is null or of a different type, null is returned.
In general, type parameter information cannot be obtained at runtime.
### `static[getClassFields](#getClassFields)(c:[Class](class "Class - An abstract type that represents a Class.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):[Array](array "Array")<[String](string "String - The basic String class.")>`
Returns a list of static fields of class `c`.
This does not include static fields of parent classes.
The order of the fields in the returned Array is unspecified.
If `c` is null, the result is unspecified.
### `static[getClassName](#getClassName)(c:[Class](class "Class - An abstract type that represents a Class.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):[String](string "String - The basic String class.")`
Returns the name of class `c`, including its path.
If `c` is inside a package, the package structure is returned dot-separated, with another dot separating the class name: `pack1.pack2.(...).packN.ClassName`. If `c` is a sub-type of a Haxe module, that module is not part of the package structure.
If `c` has no package, the class name is returned.
If `c` is null, the result is unspecified.
The class name does not include any type parameters.
### `static[getEnum](#getEnum)(o:[EnumValue](enumvalue "EnumValue - An abstract type that represents any enum value.")):[Enum](enum "Enum - An abstract type that represents an Enum type.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
Returns the enum of enum instance `o`.
An enum instance is the result of using an enum constructor. Given an `enum Color { Red; }`, `getEnum(Red)` returns `[Enum](enum)<Color>`.
If `o` is null, null is returned.
In general, type parameter information cannot be obtained at runtime.
### `static[getEnumConstructs](#getEnumConstructs)(e:[Enum](enum "Enum - An abstract type that represents an Enum type.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):[Array](array "Array")<[String](string "String - The basic String class.")>`
Returns a list of the names of all constructors of enum `e`.
The order of the constructor names in the returned Array is preserved from the original syntax.
If `e` is null, the result is unspecified.
### `static[getEnumName](#getEnumName)(e:[Enum](enum "Enum - An abstract type that represents an Enum type.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):[String](string "String - The basic String class.")`
Returns the name of enum `e`, including its path.
If `e` is inside a package, the package structure is returned dot-separated, with another dot separating the enum name: `pack1.pack2.(...).packN.EnumName`. If `e` is a sub-type of a Haxe module, that module is not part of the package structure.
If `e` has no package, the enum name is returned.
If `e` is null, the result is unspecified.
The enum name does not include any type parameters.
### `static[getInstanceFields](#getInstanceFields)(c:[Class](class "Class - An abstract type that represents a Class.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):[Array](array "Array")<[String](string "String - The basic String class.")>`
Returns a list of the instance fields of class `c`, including inherited fields.
This only includes fields which are known at compile-time. In particular, using `getInstanceFields(getClass(obj))` will not include any fields which were added to `obj` at runtime.
The order of the fields in the returned Array is unspecified.
If `c` is null, the result is unspecified.
### `static[getSuperClass](#getSuperClass)(c:[Class](class "Class - An abstract type that represents a Class.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):[Class](class "Class - An abstract type that represents a Class.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
Returns the super-class of class `c`.
If `c` has no super class, null is returned.
If `c` is null, the result is unspecified.
In general, type parameter information cannot be obtained at runtime.
### `static[resolveClass](#resolveClass)(name:[String](string "String - The basic String class.")):[Class](class "Class - An abstract type that represents a Class.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
Resolves a class by name.
If `name` is the path of an existing class, that class is returned.
Otherwise null is returned.
If `name` is null or the path to a different type, the result is unspecified.
The class name must not include any type parameters.
### `static[resolveEnum](#resolveEnum)(name:[String](string "String - The basic String class.")):[Enum](enum "Enum - An abstract type that represents an Enum type.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
Resolves an enum by name.
If `name` is the path of an existing enum, that enum is returned.
Otherwise null is returned.
If `name` is null the result is unspecified.
If `name` is the path to a different type, null is returned.
The enum name must not include any type parameters.
### `static[typeof](#typeof)(v:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[ValueType](valuetype "ValueType - The different possible runtime types of a value.")`
Returns the runtime type of value `v`.
The result corresponds to the type `v` has at runtime, which may vary per platform. Assumptions regarding this should be minimized to avoid surprises.
<https://api.haxe.org/Type.html>
Float
=====
[no package](index)
*Available on all platforms*
The standard `[Float](float)` type, this is a double-precision IEEE 64bit float.
On static targets, `null` cannot be assigned to Float. If this is necessary, `[Null](null)<[Float](float)>` can be used instead.
`[Std.int](std#int)` converts a `[Float](float)` to an `[Int](int)`, rounded towards 0. `[Std.parseFloat](std#parseFloat)` converts a `[String](string)` to a `[Float](float)`.
See also:
* <https://haxe.org/manual/types-basic-types.html>
* <https://haxe.org/manual/types-nullability.html>
<https://api.haxe.org/Float.html>
IntIterator
===========
[no package](index)
*Available on all platforms*
IntIterator is used for implementing interval iterations.
It is usually not used explicitly, but through its special syntax: `min...max`
While it is possible to assign an instance of IntIterator to a variable or field, it is worth noting that IntIterator does not reset after being used in a for-loop. Subsequent uses of the same instance will then have no effect.
See also:
* <https://haxe.org/manual/lf-iterators.html>
Constructor
-----------
### `inline[new](#new)(min:[Int](int "Int - The standard Int type."), max:[Int](int "Int - The standard Int type."))`
Iterates from `min` (inclusive) to `max` (exclusive).
If `max <= min`, the iterator will not act as a countdown.
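A short example of the interval syntax, including the no-reset behavior noted above:

```haxe
var it = 0...3; // an IntIterator from 0 (inclusive) to 3 (exclusive)
var out = [];
for (i in it) out.push(i);
trace(out); // [0,1,2]
for (i in it) out.push(i); // the same instance is now exhausted
trace(out); // still [0,1,2]
```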
Methods
-------
### `inline[hasNext](#hasNext)():[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns true if the iterator has other items, false otherwise.
### `inline[next](#next)():[Int](int "Int - The standard Int type.")`
Moves to the next item of the iterator.
If this is called while `hasNext()` is false, the result is unspecified.
<https://api.haxe.org/IntIterator.html>
Lambda
======
[no package](index)
*Available on all platforms*
The `[Lambda](lambda)` class is a collection of methods to support functional programming. It is ideally used with `using [Lambda](lambda)` and then acts as an extension to Iterable types.
On static platforms, working with the Iterable structure might be slower than performing the operations directly on known types, such as Array and List.
If the first argument to any of the methods is null, the result is unspecified.
See also:
* <https://haxe.org/manual/std-Lambda.html>
Static methods
--------------
### `static[array](#array)<A>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>):[Array](array "Array")<A>`
Creates an Array from Iterable `it`.
If `it` is an Array, this function returns a copy of it.
### `static[concat](#concat)<T>(a:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<T>, b:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<T>):[Array](array "Array")<T>`
Returns a new Array containing all elements of Iterable `a` followed by all elements of Iterable `b`.
If `a` or `b` are null, the result is unspecified.
### `static[count](#count)<A>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, ?pred:(item:A) ‑> [Bool](bool "Bool - The standard Boolean type, which can either be true or false.")):[Int](int "Int - The standard Int type.")`
Returns the number of elements in `it` for which `pred` is true, or the total number of elements in `it` if `pred` is null.
This function traverses all elements.
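For illustration, `count` used as a static extension via `using Lambda`:

```haxe
using Lambda;

class Main {
  static function main() {
    var a = [1, 2, 3, 4];
    trace(a.count());                // 4 (no predicate: total number of elements)
    trace(a.count(x -> x % 2 == 0)); // 2 (elements for which the predicate is true)
  }
}
```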
### `static[empty](#empty)<T>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<T>):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if Iterable `it` does not contain any element.
### `static[exists](#exists)<A>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, f:(item:A) ‑> [Bool](bool "Bool - The standard Boolean type, which can either be true or false.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `it` contains an element for which `f` is true.
This function returns true as soon as an element is found for which a call to `f` returns true.
If no such element is found, the result is false.
If `f` is null, the result is unspecified.
### `static[filter](#filter)<A>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, f:(item:A) ‑> [Bool](bool "Bool - The standard Boolean type, which can either be true or false.")):[Array](array "Array")<A>`
Returns an Array containing those elements of `it` for which `f` returned true. If `it` is empty, the result is the empty Array even if `f` is null. Otherwise, if `f` is null, the result is unspecified.
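A minimal sketch of `Lambda.filter`, keeping only the elements for which the predicate returns true:

```haxe
class Main {
    static function main() {
        // Keep the even numbers; the order of elements is preserved.
        var evens = Lambda.filter([1, 2, 3, 4, 5], n -> n % 2 == 0);
        trace(evens); // [2, 4]
    }
}
```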
### `static[find](#find)<T>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<T>, f:(item:T) ‑> [Bool](bool "Bool - The standard Boolean type, which can either be true or false.")):[Null](null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
Returns the first element of `it` for which `f` is true.
This function returns as soon as an element is found for which a call to `f` returns true.
If no such element is found, the result is null.
If `f` is null, the result is unspecified.
### `static[findIndex](#findIndex)<T>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<T>, f:(item:T) ‑> [Bool](bool "Bool - The standard Boolean type, which can either be true or false.")):[Int](int "Int - The standard Int type.")`
Returns the index of the first element of `it` for which `f` is true.
This function returns as soon as an element is found for which a call to `f` returns true.
If no such element is found, the result is -1.
If `f` is null, the result is unspecified.
### `staticinline[flatMap](#flatMap)<A, B>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, f:(item:A) ‑> [Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<B>):[Array](array "Array")<B>`
A composition of map and flatten. The order of elements is preserved. If `f` is null, the result is unspecified.
### `staticinline[flatten](#flatten)<A>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>>):[Array](array "Array")<A>`
Concatenate a list of iterables. The order of elements is preserved.
### `static[fold](#fold)<A, B>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, f:(item:A, result:B) ‑> B, first:B):B`
Functional fold on Iterable `it`, using function `f` with start argument `first`.
If `it` has no elements, the result is `first`.
Otherwise the first element of `it` is passed to `f` alongside `first`. The result of that call is then passed to `f` with the next element of `it`, and so on until `it` has no more elements.
If `it` or `f` are null, the result is unspecified.
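The folding behavior described above can be sketched as a simple sum, where `f` receives each element together with the accumulated result:

```haxe
class Main {
    static function main() {
        // fold(it, f, first): f is called as f(item, result),
        // starting with result == first (0 here).
        var sum = Lambda.fold([1, 2, 3, 4], (item, result) -> item + result, 0);
        trace(sum); // 10
    }
}
```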
### `static[foldi](#foldi)<A, B>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, f:(item:A, result:B, index:[Int](int "Int - The standard Int type.")) ‑> B, first:B):B`
Similar to fold, but also passes the index of each element to `f`.
If `it` or `f` are null, the result is unspecified.
### `static[foreach](#foreach)<A>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, f:(item:A) ‑> [Bool](bool "Bool - The standard Boolean type, which can either be true or false.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `f` is true for all elements of `it`.
This function returns false as soon as an element is found for which a call to `f` returns false.
If no such element is found, the result is true.
In particular, this function always returns true if `it` is empty.
If `f` is null, the result is unspecified.
### `static[has](#has)<A>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, elt:A):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `it` contains `elt`.
This function returns true as soon as an element is found which is equal to `elt` according to the `==` operator.
If no such element is found, the result is false.
### `static[indexOf](#indexOf)<T>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<T>, v:T):[Int](int "Int - The standard Int type.")`
Returns the index of the first element `v` within Iterable `it`.
This function uses operator `==` to check for equality.
If `v` does not exist in `it`, the result is -1.
### `static[iter](#iter)<A>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, f:(item:A) ‑> [Void](void "Void - The standard Void type.")):[Void](void "Void - The standard Void type.")`
Calls `f` on all elements of `it`, in order.
If `f` is null, the result is unspecified.
### `static[list](#list)<A>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>):[List](haxe/ds/list "haxe.ds.List - A linked-list of elements.")<A>`
Creates a List from Iterable `it`.
If `it` is a List, this function returns a copy of it.
### `staticinline[map](#map)<A, B>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, f:(item:A) ‑> B):[Array](array "Array")<B>`
Creates a new Array by applying function `f` to all elements of `it`. The order of elements is preserved. If `f` is null, the result is unspecified.
### `staticinline[mapi](#mapi)<A, B>(it:[Iterable](iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<A>, f:(index:[Int](int "Int - The standard Int type."), item:A) ‑> B):[Array](array "Array")<B>`
Similar to map, but also passes the index of each element to `f`. The order of elements is preserved. If `f` is null, the result is unspecified.
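A minimal sketch showing `Lambda.map` and `Lambda.mapi` side by side; note that `mapi` passes the index first:

```haxe
class Main {
    static function main() {
        var doubled = Lambda.map(["a", "b"], s -> s + s);
        trace(doubled); // ["aa", "bb"]

        // mapi also receives the element's index as the first argument.
        var indexed = Lambda.mapi(["a", "b"], (i, s) -> '$i:$s');
        trace(indexed); // ["0:a", "1:b"]
    }
}
```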
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/Lambda.html>

Iterator<T>
============
[no package](index)
*Available on all platforms*
An `[Iterator](iterator)` is a structure that permits iteration over elements of type `T`.
Any class with matching `hasNext()` and `next()` fields is considered an `[Iterator](iterator)` and can then be used e.g. in `for`-loops. This makes it easy to implement custom iterators.
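A minimal sketch of a custom iterator: any object with matching `hasNext()` and `next()` methods can be used directly in a `for`-loop.

```haxe
class Countdown {
    var n:Int;
    public function new(n:Int) this.n = n;
    public function hasNext():Bool return n > 0;
    public function next():Int return n--;
}

class Main {
    static function main() {
        // Prints 3, 2, 1.
        for (i in new Countdown(3)) trace(i);
    }
}
```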
See also:
* <https://haxe.org/manual/lf-iterators.html>

Fields
------
### `[next](#next)():T`
Returns the current item of the `Iterator` and advances to the next one. This method is not required to check `hasNext()` first. A call to this method while `hasNext()` is `false` yields unspecified behavior. On the other hand, iterators should not require a call to `hasNext()` before the first call to `next()` if an element is available.

### `[hasNext](#hasNext)():[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns `false` if the iteration is complete, `true` otherwise. Usually iteration is considered to be complete if all elements of the underlying data structure were handled through calls to `next()`. However, in custom iterators any logic may be used to determine the completion state.
KeyValueIterable<K, V>
=======================
[no package](index)
*Available on all platforms*
A `[KeyValueIterable](keyvalueiterable)` is a data structure which has a `keyValueIterator()` method to iterate over key-value-pairs.
Fields
------
### `[keyValueIterator](#keyValueIterator)():[KeyValueIterator](keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<K, V>`
EReg
=====
[no package](index)
*Available on all platforms*
The EReg class represents regular expressions.
While basic usage and patterns consistently work across platforms, some more complex operations may yield different results. This is a necessary trade-off to retain a certain level of performance.
EReg instances can be created by calling the constructor, or with the special syntax `~/pattern/modifier`
EReg instances maintain an internal state, which is affected by several of its methods.
A detailed explanation of the supported operations is available at <https://haxe.org/manual/std-regex.html>

Static methods
--------------
### `static[escape](#escape)(s:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Escape the string `s` for use as a part of regular expression.
If `s` is null, the result is unspecified.
Constructor
-----------
### `[new](#new)(r:[String](string "String - The basic String class."), opt:[String](string "String - The basic String class."))`
Creates a new regular expression with pattern `r` and modifiers `opt`.
This is equivalent to the shorthand syntax `~/r/opt`.
If `r` or `opt` are null, the result is unspecified.
Methods
-------
### `[map](#map)(s:[String](string "String - The basic String class."), f:[EReg](ereg "EReg - The EReg class represents regular expressions.") ‑> [String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Calls the function `f` for the substring of `s` which `this` EReg matches and replaces that substring with the result of that call.
The `f` function takes `this` EReg object as its first argument and should return a replacement string for the substring matched.
If `this` EReg does not match any substring, the result is `s`.
By default, this method replaces only the first matched substring. If the global g modifier is in place, all matched substrings are replaced.
If `s` or `f` are null, the result is unspecified.
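A minimal sketch of `map` with the global `g` modifier, upper-casing every matched word:

```haxe
class Main {
    static function main() {
        var r = ~/\w+/g;
        // f receives the EReg itself; matched(0) is the current match.
        trace(r.map("hello world", e -> e.matched(0).toUpperCase()));
        // "HELLO WORLD"
    }
}
```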
### `[match](#match)(s:[String](string "String - The basic String class.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `this` regular expression matches String `s`.
This method modifies the internal state.
If `s` is `null`, the result is unspecified.
### `[matchSub](#matchSub)(s:[String](string "String - The basic String class."), pos:[Int](int "Int - The standard Int type."), len:[Int](int "Int - The standard Int type.") = -1):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `this` regular expression matches a substring of String `s`.
This function expects `pos` and `len` to describe a valid substring of `s`, or else the result is unspecified. To get more robust behavior, `this.[match](#match)(s.substr(pos,len))` can be used instead.
This method modifies the internal state.
If `s` is null, the result is unspecified.
### `[matched](#matched)(n:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
Returns the matched sub-group `n` of `this` EReg.
This method should only be called after `this.[match](#match)` or `this.[matchSub](#matchSub)`, and then operates on the String of that operation.
The index `n` corresponds to the n-th set of parentheses in the pattern of `this` EReg. If no such sub-group exists, the result is unspecified.
If `n` equals 0, the whole matched substring is returned.
### `[matchedLeft](#matchedLeft)():[String](string "String - The basic String class.")`
Returns the part to the left of the last matched substring.
If the most recent call to `this.[match](#match)` or `this.[matchSub](#matchSub)` did not match anything, the result is unspecified.
If the global g modifier was in place for the matching, only the substring to the left of the leftmost match is returned.
The result does not include the matched part.
### `[matchedPos](#matchedPos)():{pos:[Int](int "Int - The standard Int type."), len:[Int](int "Int - The standard Int type.")}`
Returns the position and length of the last matched substring, within the String which was last used as argument to `this.[match](#match)` or `this.[matchSub](#matchSub)`.
If the most recent call to `this.[match](#match)` or `this.[matchSub](#matchSub)` did not match anything, the result is unspecified.
If the global g modifier was in place for the matching, the position and length of the leftmost substring is returned.
### `[matchedRight](#matchedRight)():[String](string "String - The basic String class.")`
Returns the part to the right of the last matched substring.
If the most recent call to `this.[match](#match)` or `this.[matchSub](#matchSub)` did not match anything, the result is unspecified.
If the global g modifier was in place for the matching, only the substring to the right of the leftmost match is returned.
The result does not include the matched part.
### `[replace](#replace)(s:[String](string "String - The basic String class."), by:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Replaces the first substring of `s` which `this` EReg matches with `by`.
If `this` EReg does not match any substring, the result is `s`.
By default, this method replaces only the first matched substring. If the global g modifier is in place, all matched substrings are replaced.
If `by` contains `$1` to `$9`, the digit corresponds to number of a matched sub-group and its value is used instead. If no such sub-group exists, the replacement is unspecified. The string `$$` becomes `$`.
If `s` or `by` are null, the result is unspecified.
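A minimal sketch of `replace` using `$1`/`$2` to reorder captured sub-groups:

```haxe
class Main {
    static function main() {
        var r = ~/(\w+) (\w+)/;
        // $1 and $2 refer to the first and second parenthesized groups.
        trace(r.replace("John Smith", "$2 $1")); // "Smith John"
    }
}
```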
### `[split](#split)(s:[String](string "String - The basic String class.")):[Array](array "Array")<[String](string "String - The basic String class.")>`
Splits String `s` at all substrings `this` EReg matches.
If a match is found at the start of `s`, the result contains a leading empty String "" entry.
If a match is found at the end of `s`, the result contains a trailing empty String "" entry.
If two matching substrings appear next to each other, the result contains the empty String `""` between them.
By default, this method splits `s` into two parts at the first matched substring. If the global g modifier is in place, `s` is split at each matched substring.
If `s` is null, the result is unspecified.
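A minimal sketch of `split` with the global `g` modifier, splitting on runs of whitespace:

```haxe
class Main {
    static function main() {
        var r = ~/\s+/g;
        // Without g, only the first match would split the string.
        trace(r.split("one  two   three")); // ["one", "two", "three"]
    }
}
```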
KeyValueIterator<K, V>
=======================
[no package](index)
*Available on all platforms*
A `[KeyValueIterator](keyvalueiterator)` is an `[Iterator](iterator)` that has a key and a value.
Alias
-----
*alias for* `[Iterator](iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<{value:V, key:K}>`
List<T>
========
[no package](index)
*Available on all platforms*
Alias
-----
*alias for* `[haxe.ds.List](haxe/ds/list "haxe.ds.List - A linked-list of elements.")<T>`
Map<K, V>
==========
[no package](index)
*Available on all platforms*
Alias
-----
*alias for* `[haxe.ds.Map](haxe/ds/map "haxe.ds.Map - Map allows key to value mapping for arbitrary value types, and many key types.")<K, V>`
Math
=====
[no package](index)
*Available on all platforms*
This class defines mathematical functions and constants.
See also:
* <https://haxe.org/manual/std-math.html>

Static variables
----------------
### `staticread only[NEGATIVE_INFINITY](#NEGATIVE_INFINITY):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.") = cs.system.Double.NegativeInfinity`
*Available on cs, php, neko, cpp, macro, java, python, hl, flash*
A special `[Float](float)` constant which denotes negative infinity.
For example, this is the result of `-1.0 / 0.0`.
Operations with `NEGATIVE_INFINITY` as an operand may result in `NEGATIVE_INFINITY`, `POSITIVE_INFINITY` or `NaN`.
If this constant is converted to an `[Int](int)`, e.g. through `[Std.int](std#int)()`, the result is unspecified.
### `staticread only[NEGATIVE_INFINITY](#NEGATIVE_INFINITY):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
*Available on js, lua*
### `staticread only[NaN](#NaN):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.") = cs.system.Double.NaN`
*Available on cs, php, neko, cpp, macro, java, python, hl, flash*
A special `[Float](float)` constant which denotes an invalid number.
`NaN` stands for "Not a Number". It occurs when a mathematically incorrect operation is executed, such as taking the square root of a negative number: `[Math.sqrt](math#sqrt)(-1)`.
All further operations with `NaN` as an operand will result in `NaN`.
If this constant is converted to an `[Int](int)`, e.g. through `[Std.int](std#int)()`, the result is unspecified.
In order to test if a value is `NaN`, you should use `[Math.isNaN](math#isNaN)()` function.
### `staticread only[NaN](#NaN):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
*Available on js, lua*
### `staticread only[PI](#PI):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.") = cs.system.Math.PI`
*Available on cs, php, js, neko, cpp, macro, java, python, hl, flash*
Represents the ratio of the circumference of a circle to its diameter, specified by the constant, π. `PI` is approximately `3.141592653589793`.
### `staticread only[PI](#PI):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
*Available on lua*
### `staticread only[POSITIVE_INFINITY](#POSITIVE_INFINITY):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.") = cs.system.Double.PositiveInfinity`
*Available on cs, php, neko, cpp, macro, java, python, hl, flash*
A special `[Float](float)` constant which denotes positive infinity.
For example, this is the result of `1.0 / 0.0`.
Operations with `POSITIVE_INFINITY` as an operand may result in `NEGATIVE_INFINITY`, `POSITIVE_INFINITY` or `NaN`.
If this constant is converted to an `[Int](int)`, e.g. through `[Std.int](std#int)()`, the result is unspecified.
### `staticread only[POSITIVE_INFINITY](#POSITIVE_INFINITY):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
*Available on js, lua*
Static methods
--------------
### `static[abs](#abs)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the absolute value of `v`.
* If `v` is positive or `0`, the result is unchanged. Otherwise the result is `-v`.
* If `v` is `NEGATIVE_INFINITY` or `POSITIVE_INFINITY`, the result is `POSITIVE_INFINITY`.
* If `v` is `NaN`, the result is `NaN`.
### `static[acos](#acos)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the trigonometric arc cosine of the specified angle `v`, in radians.
If `v` is `NaN` or infinite, the result is `NaN`.
### `static[asin](#asin)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the trigonometric arc sine of the specified angle `v`, in radians.
If `v` is `NaN` or infinite, the result is `NaN`.
### `static[atan](#atan)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the trigonometric arc tangent of the specified angle `v`, in radians.
If `v` is `NaN` or infinite, the result is `NaN`.
### `static[atan2](#atan2)(y:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float."), x:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the trigonometric arc tangent whose tangent is the quotient of two specified numbers, in radians.
If parameter `x` or `y` is `NaN`, `NEGATIVE_INFINITY` or `POSITIVE_INFINITY`, the result is `NaN`.
### `static[ceil](#ceil)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Int](int "Int - The standard Int type.")`
Returns the smallest integer value that is not less than `v`.
If `v` is outside of the signed `[Int32](https://api.haxe.org/cs/system/Int32.html#Int32)` range, or is `NaN`, `NEGATIVE_INFINITY` or `POSITIVE_INFINITY`, the result is unspecified.
### `static[cos](#cos)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the trigonometric cosine of the specified angle `v`, in radians.
If `v` is `NaN` or infinite, the result is `NaN`.
### `static[exp](#exp)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns Euler's number, raised to the power of `v`.
`exp(1.0)` is approximately `2.718281828459`.
* If `v` is `POSITIVE_INFINITY`, the result is `POSITIVE_INFINITY`.
* If `v` is `NEGATIVE_INFINITY`, the result is `0.0`.
* If `v` is `NaN`, the result is `NaN`.
### `static[fceil](#fceil)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the smallest integer value that is not less than `v`, as a `[Float](float)`.
If `v` is `NaN`, `NEGATIVE_INFINITY` or `POSITIVE_INFINITY`, the result is unspecified.
### `static[ffloor](#ffloor)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the largest integer value that is not greater than `v`, as a `[Float](float)`.
If `v` is `NaN`, `NEGATIVE_INFINITY` or `POSITIVE_INFINITY`, the result is unspecified.
### `static[floor](#floor)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Int](int "Int - The standard Int type.")`
Returns the largest integer value that is not greater than `v`.
If `v` is outside of the signed `[Int32](https://api.haxe.org/cs/system/Int32.html#Int32)` range, or is `NaN`, `NEGATIVE_INFINITY` or `POSITIVE_INFINITY`, the result is unspecified.
### `static[fround](#fround)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Rounds `v` to the nearest integer value, as a Float.
Ties are rounded up, so that `0.5` becomes `1` and `-0.5` becomes `0`.
If `v` is `NaN`, `NEGATIVE_INFINITY` or `POSITIVE_INFINITY`, the result is unspecified.
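A minimal sketch of the rounding helpers, including the tie rule described above:

```haxe
class Main {
    static function main() {
        trace(Math.floor(3.7));   // 3
        trace(Math.ceil(3.2));    // 4
        // Ties round up: 0.5 becomes 1, -0.5 becomes 0.
        trace(Math.fround(0.5));  // 1
        trace(Math.fround(-0.5)); // 0
    }
}
```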
### `static[isFinite](#isFinite)(f:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `f` is a finite number.
If `f` is `POSITIVE_INFINITY`, `NEGATIVE_INFINITY` or `NaN`, the result is `[false](bool)`, otherwise the result is `[true](bool)`.
### `static[isNaN](#isNaN)(f:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `f` is not a valid number.
If `f` is `NaN`, the result is `[true](bool)`, otherwise the result is `[false](bool)`. In particular, both `POSITIVE_INFINITY` and `NEGATIVE_INFINITY` are not considered `NaN`.
### `static[log](#log)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the natural logarithm of `v`.
This is the mathematical inverse operation of exp, i.e. `log(exp(v)) == v` always holds.
* If `v` is negative (including `NEGATIVE_INFINITY`) or `NaN`, the result is `NaN`.
* If `v` is `POSITIVE_INFINITY`, the result is `POSITIVE_INFINITY`.
* If `v` is `0.0`, the result is `NEGATIVE_INFINITY`.
### `static[max](#max)(a:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float."), b:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the greater of values `a` and `b`.
* If `a` or `b` are `NaN`, the result is `NaN`.
* If `a` or `b` are `POSITIVE_INFINITY`, the result is `POSITIVE_INFINITY`.
* If `a` and `b` are `NEGATIVE_INFINITY`, the result is `NEGATIVE_INFINITY`.
### `static[min](#min)(a:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float."), b:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the smaller of values `a` and `b`.
* If `a` or `b` are `NaN`, the result is `NaN`.
* If `a` or `b` are `NEGATIVE_INFINITY`, the result is `NEGATIVE_INFINITY`.
* If `a` and `b` are `POSITIVE_INFINITY`, the result is `POSITIVE_INFINITY`.
### `static[pow](#pow)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float."), exp:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns a specified base `v` raised to the specified power `exp`.
### `static[random](#random)():[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns a pseudo-random number which is greater than or equal to `0.0`, and less than `1.0`.
### `static[round](#round)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Int](int "Int - The standard Int type.")`
Rounds `v` to the nearest integer value.
Ties are rounded up, so that `0.5` becomes `1` and `-0.5` becomes `0`.
If `v` is outside of the signed `[Int32](https://api.haxe.org/cs/system/Int32.html#Int32)` range, or is `NaN`, `NEGATIVE_INFINITY` or `POSITIVE_INFINITY`, the result is unspecified.
### `static[sin](#sin)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the trigonometric sine of the specified angle `v`, in radians.
If `v` is `NaN` or infinite, the result is `NaN`.
### `static[sqrt](#sqrt)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the square root of `v`.
* If `v` is negative (including `NEGATIVE_INFINITY`) or `NaN`, the result is `NaN`.
* If `v` is `POSITIVE_INFINITY`, the result is `POSITIVE_INFINITY`.
* If `v` is `0.0`, the result is `0.0`.
### `static[tan](#tan)(v:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the trigonometric tangent of the specified angle `v`, in radians.
If `v` is `NaN` or infinite, the result is `NaN`.
Null<T>
========
[no package](index)
from T to T
*Available on all platforms*
`[Null](null)<T>` is a wrapper that can be used to make the basic types `[Int](int)`, `[Float](float)` and `[Bool](bool)` nullable on static targets.
If null safety is enabled, only types wrapped in `[Null](null)<T>` are nullable.
Otherwise, it has no effect on non-basic-types, but it can be useful as a way to document that `null` is an acceptable value for a method argument, return value or variable.
See also:
* <https://haxe.org/manual/types-nullability.html>

Reflect
========
[no package](index)
*Available on all platforms*
The Reflect API is a way to manipulate values dynamically through an abstract interface in an untyped manner. Use with care.
See also:
* <https://haxe.org/manual/std-reflection.html>

Static methods
--------------
### `static[callMethod](#callMethod)(o:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), func:[Function](haxe/function "haxe.Function - This type unifies with any function type."), args:[Array](array "Array")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
Call a method `func` with the given arguments `args`.
The object `o` is ignored in most cases. It serves as the `this`-context in the following situations:
* (neko) Allows switching the context to `o` in all cases.
* (macro) Same as neko for Haxe 3. No context switching in Haxe 4.
* (js, lua) Require the `o` argument if `func` does not, but should have a context. This can occur by accessing a function field natively, e.g. through `[Reflect.field](reflect#field)` or by using `(object : [Dynamic](dynamic)).field`. However, if `func` has a context, `o` is ignored like on other targets.
### `static[compare](#compare)<T>(a:T, b:T):[Int](int "Int - The standard Int type.")`
Compares `a` and `b`.
If `a` is less than `b`, the result is negative. If `b` is less than `a`, the result is positive. If `a` and `b` are equal, the result is 0.
This function is only defined if `a` and `b` are of the same type.
If that type is a function, the result is unspecified and `[Reflect.compareMethods](reflect#compareMethods)` should be used instead.
For all other types, the result is 0 if `a` and `b` are equal. If they are not equal, the result depends on the type and is negative if:
* Numeric types: a is less than b
* String: a is lexicographically less than b
* Other: unspecified
If `a` and `b` are null, the result is 0. If only one of them is null, the result is unspecified.
### `static[compareMethods](#compareMethods)(f1:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), f2:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Compares the functions `f1` and `f2`.
If `f1` or `f2` are null, the result is false. If `f1` or `f2` are not functions, the result is unspecified.
Otherwise the result is true if `f1` and the `f2` are physically equal, false otherwise.
If `f1` or `f2` are member method closures, the result is true if they are closures of the same method on the same object value, false otherwise.
### `static[copy](#copy)<T>(o:[Null](null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>):[Null](null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
Copies the fields of structure `o`.
This is only guaranteed to work on anonymous structures.
If `o` is null, the result is `null`.
### `static[deleteField](#deleteField)(o:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), field:[String](string "String - The basic String class.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Removes the field named `field` from structure `o`.
This method is only guaranteed to work on anonymous structures.
If `o` or `field` are null, the result is unspecified.
### `static[field](#field)(o:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), field:[String](string "String - The basic String class.")):[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
Returns the value of the field named `field` on object `o`.
If `o` is not an object or has no field named `field`, the result is null.
If the field is defined as a property, its accessors are ignored. Refer to `[Reflect.getProperty](reflect#getProperty)` for a function supporting property accessors.
If `field` is null, the result is unspecified.
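For illustration, reading fields dynamically from an anonymous structure:

```haxe
class Main {
  static function main() {
    var o = { name: "haxe", version: 4 };
    trace(Reflect.field(o, "name"));    // haxe
    trace(Reflect.field(o, "missing")); // null: no such field
  }
}
```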
### `static[fields](#fields)(o:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Array](array "Array")<[String](string "String - The basic String class.")>`
Returns the fields of structure `o`.
This method is only guaranteed to work on anonymous structures. Refer to `[Type.getInstanceFields](type#getInstanceFields)` for a function supporting class instances.
If `o` is null, the result is unspecified.
### `static[getProperty](#getProperty)(o:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), field:[String](string "String - The basic String class.")):[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
Returns the value of the field named `field` on object `o`, taking property getter functions into account.
If the field is not a property, this function behaves like `[Reflect.field](reflect#field)`, but might be slower.
If `o` or `field` are null, the result is unspecified.
### `static[hasField](#hasField)(o:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), field:[String](string "String - The basic String class.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if structure `o` has a field named `field`.
This is only guaranteed to work for anonymous structures. Refer to `[Type.getInstanceFields](type#getInstanceFields)` for a function supporting class instances.
If `o` or `field` are null, the result is unspecified.
### `static[isEnumValue](#isEnumValue)(v:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `v` is an enum value.
The result is true if `v` is of type EnumValue, i.e. an enum constructor.
Otherwise, including if `v` is null, the result is false.
### `static[isFunction](#isFunction)(f:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns true if `f` is a function, false otherwise.
If `f` is null, the result is false.
### `static[isObject](#isObject)(v:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `v` is an object.
The result is true if `v` is one of the following:
* class instance
* structure
* `[Class](class)<T>`
* `[Enum](enum)<T>`
Otherwise, including if `v` is null, the result is false.
### `static[makeVarArgs](#makeVarArgs)(f:[Array](array "Array")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")> ‑> [Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
### `static[makeVarArgs](#makeVarArgs)(f:[Array](array "Array")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")> ‑> [Void](void "Void - The standard Void type.")):[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
Transforms a function taking an array of arguments into a function that can be called with any number of arguments.
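A sketch of wrapping an array-taking function into a variadic one; the returned value is `Dynamic`, so it can be called with any number of arguments:

```haxe
class Main {
  static function main() {
    var sum = Reflect.makeVarArgs(function(args:Array<Dynamic>) {
      var total = 0;
      for (a in args) total += a; // args holds whatever was passed at the call site
      return total;
    });
    trace(sum(1, 2, 3)); // 6
  }
}
```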
### `static[setField](#setField)(o:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), field:[String](string "String - The basic String class."), value:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Void](void "Void - The standard Void type.")`
Sets the field named `field` of object `o` to value `value`.
If `o` has no field named `field`, this function is only guaranteed to work for anonymous structures.
If `o` or `field` are null, the result is unspecified.
### `static[setProperty](#setProperty)(o:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), field:[String](string "String - The basic String class."), value:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Void](void "Void - The standard Void type.")`
Sets the field named `field` of object `o` to value `value`, taking property setter functions into account.
If the field is not a property, this function behaves like `[Reflect.setField](reflect#setField)`, but might be slower.
If `field` is null, the result is unspecified.
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/Reflect.html>

Single
======
[no package](index)
from [Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.") to [Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")
*Available on cs, cpp, java, hl*
Single-precision IEEE 32bit float (4-byte).
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/Single.html>

Void
====
[no package](index)
*Available on all platforms*
The standard `[Void](void)` type. Only `null` values can be of the type `[Void](void)`.
See also:
* <https://haxe.org/manual/types-void.html>

© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/Void.html>

UnicodeString([String](string "String - The basic String class."))
===================================================================
[no package](index)
from [String](string "String - The basic String class.") to [String](string "String - The basic String class.")
*Available on all platforms*
This abstract provides consistent cross-target unicode support.
See also:
* <https://haxe.org/manual/std-UnicodeString.html>

Static methods
--------------
### `static[validate](#validate)(b:[Bytes](haxe/io/bytes "haxe.io.Bytes"), encoding:[Encoding](haxe/io/encoding "haxe.io.Encoding")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `b` is a correctly encoded UTF8 byte sequence.
Variables
---------
### `read only[length](#length):[Int](int "Int - The standard Int type.")`
*Available on cs, js, cpp, java, hl, flash*
The number of characters in `this` String.
Methods
-------
### `[charAt](#charAt)(index:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
*Available on cs, js, cpp, java, hl, flash*
Returns the character at position `index` of `this` String.
If `index` is negative or exceeds `this.[length](#length)`, the empty String `""` is returned.
### `[charCodeAt](#charCodeAt)(index:[Int](int "Int - The standard Int type.")):[Null](null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Int](int "Int - The standard Int type.")>`
*Available on cs, js, cpp, java, hl, flash*
Returns the character code at position `index` of `this` String.
If `index` is negative or exceeds `this.[length](#length)`, `null` is returned.
### `[indexOf](#indexOf)(str:[String](string "String - The basic String class."), ?startIndex:[Int](int "Int - The standard Int type.")):[Int](int "Int - The standard Int type.")`
*Available on cs, js, cpp, java, hl, flash*
Returns the position of the leftmost occurrence of `str` within `this` String.
If `startIndex` is given, the search is performed within the substring of `this` String starting from `startIndex` (if `startIndex` is positive or 0) or `max(this.[length](#length) + startIndex, 0)` (if `startIndex` is negative).
If `startIndex` exceeds `this.[length](#length)`, -1 is returned.
Otherwise the search is performed within `this` String. In either case, the returned position is relative to the beginning of `this` String.
If `str` cannot be found, -1 is returned.
### `inline[iterator](#iterator)():[StringIteratorUnicode](haxe/iterators/stringiteratorunicode "haxe.iterators.StringIteratorUnicode - This iterator can be used to iterate across strings in a cross-platform way.")`
*Available on cs, php, js, cpp, macro, java, lua, python, hl, flash*
Returns an iterator of the unicode code points.
### `inline[keyValueIterator](#keyValueIterator)():[StringKeyValueIteratorUnicode](haxe/iterators/stringkeyvalueiteratorunicode "haxe.iterators.StringKeyValueIteratorUnicode - This iterator can be used to iterate across strings in a cross-platform way.")`
*Available on cs, php, js, cpp, macro, java, lua, python, hl, flash*
Returns an iterator of the code point indices and unicode code points.
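For illustration, iterating a `UnicodeString` yields code points consistently across targets (a sketch; `length` is only available on the targets listed above):

```haxe
class Main {
  static function main() {
    var s:UnicodeString = "héllo";
    trace(s.length); // number of characters, consistent across targets
    for (code in s)  // unicode code points via iterator()
      trace(code);
  }
}
```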
### `[lastIndexOf](#lastIndexOf)(str:[String](string "String - The basic String class."), ?startIndex:[Int](int "Int - The standard Int type.")):[Int](int "Int - The standard Int type.")`
*Available on cs, js, cpp, java, hl, flash*
Returns the position of the rightmost occurrence of `str` within `this` String.
If `startIndex` is given, the search is performed within the substring of `this` String from 0 to `startIndex + str.length`. Otherwise the search is performed within `this` String. In either case, the returned position is relative to the beginning of `this` String.
If `str` cannot be found, -1 is returned.
### `[substr](#substr)(pos:[Int](int "Int - The standard Int type."), ?len:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
*Available on cs, js, cpp, java, hl, flash*
Returns `len` characters of `this` String, starting at position `pos`.
If `len` is omitted, all characters from position `pos` to the end of `this` String are included.
If `pos` is negative, its value is calculated from the end of `this` String by `this.[length](#length) + pos`. If this yields a negative value, 0 is used instead.
If the calculated position + `len` exceeds `this.[length](#length)`, the characters from that position to the end of `this` String are returned.
If `len` is negative, the result is unspecified.
### `[substring](#substring)(startIndex:[Int](int "Int - The standard Int type."), ?endIndex:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
*Available on cs, js, cpp, java, hl, flash*
Returns the part of `this` String from `startIndex` to but not including `endIndex`.
If `startIndex` or `endIndex` are negative, 0 is used instead.
If `startIndex` exceeds `endIndex`, they are swapped.
If the (possibly swapped) `endIndex` is omitted or exceeds `this.[length](#length)`, `this.[length](#length)` is used instead.
If the (possibly swapped) `startIndex` exceeds `this.[length](#length)`, the empty String `""` is returned.
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/UnicodeString.html>

ValueType
=========
[no package](index)
import [Type](type)
*Available on all platforms*
The different possible runtime types of a value.
Values
------
### `TNull`
### `TInt`
### `TFloat`
### `TBool`
### `TObject`
### `TFunction`
### `TClass(c:[Class](class "Class - An abstract type that represents a Class.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>)`
### `TEnum(e:[Enum](enum "Enum - An abstract type that represents an Enum type.")<[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>)`
### `TUnknown`
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/ValueType.html>

Std
===
[no package](index)
*Available on all platforms*
The Std class provides standard methods for manipulating basic types.
Static methods
--------------
### `static[downcast](#downcast)<T, S>(value:T, c:[Class](class "Class - An abstract type that represents a Class.")<S>):S`
Checks if object `value` is an instance of class or interface `c`.
Compiles only if the type specified by `c` can be assigned to the type of `value`.
This method checks if a downcast is possible. That is, if the runtime type of `value` is assignable to the type specified by `c`, `value` is returned. Otherwise null is returned.
This method is not guaranteed to work with core types such as `[String](string)`, `[Array](array)` and `[Date](date)`.
If `value` is null, the result is null. If `c` is null, the result is unspecified.
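A minimal sketch with hypothetical classes `Animal` and `Dog`:

```haxe
class Animal {}
class Dog extends Animal {}

class Main {
  static function main() {
    var a:Animal = new Dog();
    var d:Dog = Std.downcast(a, Dog);
    trace(d != null); // true: the runtime type of a is Dog
    var b:Animal = new Animal();
    trace(Std.downcast(b, Dog) == null); // true: downcast fails, null is returned
  }
}
```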
### `static[instance](#instance)<T, S>(value:T, c:[Class](class "Class - An abstract type that represents a Class.")<S>):S`
**Deprecated:** "Std.instance() is deprecated. Use Std.downcast() instead."
### `static[int](#int)(x:[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Int](int "Int - The standard Int type.")`
Converts a `[Float](float)` to an `[Int](int)`, rounded towards 0.
If `x` is outside of the signed Int32 range, or is `NaN`, `NEGATIVE_INFINITY` or `POSITIVE_INFINITY`, the result is unspecified.
### `static[is](#is)(v:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), t:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
DEPRECATED. Use `[Std.isOfType](std#isOfType)(v, t)` instead.
Tells if a value `v` is of the type `t`. Returns `[false](bool)` if `v` or `t` are null.
If `t` is a class or interface with `@:generic` meta, the result is `[false](bool)`.
### `static[isOfType](#isOfType)(v:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), t:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if a value `v` is of the type `t`. Returns `[false](bool)` if `v` or `t` are null.
If `t` is a class or interface with `@:generic` meta, the result is `[false](bool)`.
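For example (a sketch of the documented behavior):

```haxe
class Main {
  static function main() {
    trace(Std.isOfType("hi", String)); // true
    trace(Std.isOfType(1, Float));     // true: Int values also satisfy Float
    trace(Std.isOfType(null, String)); // false: null never matches
  }
}
```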
### `static[parseFloat](#parseFloat)(x:[String](string "String - The basic String class.")):[Float](float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Converts a `[String](string)` to a `[Float](float)`.
The parsing rules for `parseInt` apply here as well, with the exception of invalid input resulting in a `NaN` value instead of null.
Additionally, decimal notation may contain a single `.` to denote the start of the fractional part.
### `static[parseInt](#parseInt)(x:[String](string "String - The basic String class.")):[Null](null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Int](int "Int - The standard Int type.")>`
Converts a `[String](string)` to an `[Int](int)`.
Leading whitespaces are ignored.
If `x` starts with 0x or 0X, hexadecimal notation is recognized where the following digits may contain 0-9 and A-F.
Otherwise `x` is read as decimal number with 0-9 being allowed characters. `x` may also start with a - to denote a negative value.
In decimal mode, parsing continues until an invalid character is detected, in which case the result up to that point is returned. For hexadecimal notation, the effect of invalid characters is unspecified.
Leading 0s that are not part of the 0x/0X hexadecimal notation are ignored, which means octal notation is not supported.
If `x` is null, the result is unspecified. If `x` cannot be parsed as integer, the result is `null`.
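The rules above in action:

```haxe
class Main {
  static function main() {
    trace(Std.parseInt("42"));   // 42
    trace(Std.parseInt("0xFF")); // 255: hexadecimal notation
    trace(Std.parseInt(" -7"));  // -7: leading whitespace is ignored
    trace(Std.parseInt("12px")); // 12: decimal parsing stops at the first invalid character
    trace(Std.parseInt("px12")); // null: cannot be parsed as an integer
  }
}
```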
### `static[random](#random)(x:[Int](int "Int - The standard Int type.")):[Int](int "Int - The standard Int type.")`
Return a random integer between 0 included and `x` excluded.
If `x <= 1`, the result is always 0.
### `static[string](#string)(s:[Dynamic](dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[String](string "String - The basic String class.")`
Converts any value to a String.
If `s` is of `[String](string)`, `[Int](int)`, `[Float](float)` or `[Bool](bool)`, its value is returned.
If `s` is an instance of a class and that class or one of its parent classes has a `toString` method, that method is called. If no such method is present, the result is unspecified.
If `s` is an enum constructor without arguments, the constructor's name is returned. If arguments exist, the constructor's name followed by the String representations of the arguments is returned.
If `s` is a structure, the field names along with their values are returned. The field order and the operator separating field names and values are unspecified.
If `s` is null, the String `"null"` is returned.
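A sketch covering the enum and null cases (the exact formatting of constructor arguments may vary slightly per target):

```haxe
enum Color {
  Red;
  Rgb(r:Int, g:Int, b:Int);
}

class Main {
  static function main() {
    trace(Std.string(Red));          // Red: constructor without arguments
    trace(Std.string(Rgb(1, 2, 3))); // constructor name followed by its arguments
    trace(Std.string(null));         // null
  }
}
```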
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/Std.html>

String
======
[no package](index)
*Available on all platforms*
The basic String class.
A Haxe String is immutable, it is not possible to modify individual characters. No method of this class changes the state of `this` String.
Strings can be constructed using the String literal syntax `"string value"`.
String can be concatenated by using the `+` operator. If an operand is not a String, it is passed through `[Std.string](std#string)()` first.
See also:
* <https://haxe.org/manual/std-String.html>

Static methods
--------------
### `static[fromCharCode](#fromCharCode)(code:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
Returns the String corresponding to the character code `code`.
If `code` is negative or has another invalid value, the result is unspecified.
Constructor
-----------
### `[new](#new)(string:[String](string "String - The basic String class."))`
Creates a copy from a given String.
Variables
---------
### `read only[length](#length):[Int](int "Int - The standard Int type.")`
The number of characters in `this` String.
Methods
-------
### `[charAt](#charAt)(index:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
Returns the character at position `index` of `this` String.
If `index` is negative or exceeds `this.[length](#length)`, the empty String `""` is returned.
### `[charCodeAt](#charCodeAt)(index:[Int](int "Int - The standard Int type.")):[Null](null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Int](int "Int - The standard Int type.")>`
Returns the character code at position `index` of `this` String.
If `index` is negative or exceeds `this.[length](#length)`, `null` is returned.
To obtain the character code of a single character, `"x".code` can be used instead to inline the character code at compile time. Note that this only works on String literals of length 1.
### `[indexOf](#indexOf)(str:[String](string "String - The basic String class."), ?startIndex:[Int](int "Int - The standard Int type.")):[Int](int "Int - The standard Int type.")`
Returns the position of the leftmost occurrence of `str` within `this` String.
If `startIndex` is given, the search is performed within the substring of `this` String starting from `startIndex`.
If `startIndex` exceeds `this.[length](#length)`, -1 is returned.
If `startIndex` is negative, the result is unspecified.
Otherwise the search is performed within `this` String. In either case, the returned position is relative to the beginning of `this` String.
If `str` cannot be found, -1 is returned.
### `[lastIndexOf](#lastIndexOf)(str:[String](string "String - The basic String class."), ?startIndex:[Int](int "Int - The standard Int type.")):[Int](int "Int - The standard Int type.")`
Returns the position of the rightmost occurrence of `str` within `this` String.
If `startIndex` is given, the search is performed within the substring of `this` String from 0 to `startIndex + str.length`. Otherwise the search is performed within `this` String. In either case, the returned position is relative to the beginning of `this` String.
If `startIndex` is negative, the result is unspecified.
If `str` cannot be found, -1 is returned.
### `[split](#split)(delimiter:[String](string "String - The basic String class.")):[Array](array "Array")<[String](string "String - The basic String class.")>`
Splits `this` String at each occurrence of `delimiter`.
If `this` String is the empty String `""`, the result is not consistent across targets and may either be `[]` (on Js, Cpp) or `[""]`.
If `delimiter` is the empty String `""`, `this` String is split into an Array of `this.[length](#length)` elements, where the elements correspond to the characters of `this` String.
If `delimiter` is not found within `this` String, the result is an Array with one element, which equals `this` String.
If `delimiter` is null, the result is unspecified.
Otherwise, `this` String is split into parts at each occurrence of `delimiter`. If `this` String starts (or ends) with `delimiter`, the result `[Array](array)` contains a leading (or trailing) empty String `""` element. Two subsequent delimiters also result in an empty String `""` element.
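The delimiter cases described above, sketched:

```haxe
class Main {
  static function main() {
    trace("a,b,,c".split(",")); // four elements; the third is the empty String ""
    trace("abc".split(""));     // one element per character: "a", "b", "c"
    trace(",x,".split(","));    // leading and trailing empty String "" elements
    trace("abc".split("-"));    // delimiter not found: ["abc"]
  }
}
```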
### `[substr](#substr)(pos:[Int](int "Int - The standard Int type."), ?len:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
Returns `len` characters of `this` String, starting at position `pos`.
If `len` is omitted, all characters from position `pos` to the end of `this` String are included.
If `pos` is negative, its value is calculated from the end of `this` String by `this.[length](#length) + pos`. If this yields a negative value, 0 is used instead.
If the calculated position + `len` exceeds `this.[length](#length)`, the characters from that position to the end of `this` String are returned.
If `len` is negative, the result is unspecified.
### `[substring](#substring)(startIndex:[Int](int "Int - The standard Int type."), ?endIndex:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
Returns the part of `this` String from `startIndex` to but not including `endIndex`.
If `startIndex` or `endIndex` are negative, 0 is used instead.
If `startIndex` exceeds `endIndex`, they are swapped.
If the (possibly swapped) `endIndex` is omitted or exceeds `this.[length](#length)`, `this.[length](#length)` is used instead.
If the (possibly swapped) `startIndex` exceeds `this.[length](#length)`, the empty String `""` is returned.
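To illustrate the index handling of `substr` and `substring`:

```haxe
class Main {
  static function main() {
    var s = "hello";
    trace(s.substr(-3, 2));   // ll: pos computed as s.length + (-3) = 2
    trace(s.substr(3));       // lo: len omitted, runs to the end
    trace(s.substring(3, 1)); // el: startIndex and endIndex are swapped
    trace(s.substring(2));    // llo: endIndex defaults to s.length
  }
}
```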
### `[toLowerCase](#toLowerCase)():[String](string "String - The basic String class.")`
Returns a String where all characters of `this` String are lower case.
### `[toString](#toString)():[String](string "String - The basic String class.")`
Returns the String itself.
### `[toUpperCase](#toUpperCase)():[String](string "String - The basic String class.")`
Returns a String where all characters of `this` String are upper case.
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/String.html>

StringBuf
=========
[no package](index)
*Available on all platforms*
A String buffer is an efficient way to build a big string by appending small elements together.
Unlike String, an instance of StringBuf is not immutable in the sense that it can be passed as argument to functions which modify it by appending more values.
Constructor
-----------
### `[new](#new)()`
Creates a new StringBuf instance.
This may involve initialization of the internal buffer.
Variables
---------
### `read only[length](#length):[Int](int "Int - The standard Int type.")`
The length of `this` StringBuf in characters.
Methods
-------
### `[add](#add)<T>(x:T):[Void](void "Void - The standard Void type.")`
Appends the representation of `x` to `this` StringBuf.
The exact representation of `x` may vary per platform. To get more consistent behavior, call this function with `Std.string(x)` instead.
If `x` is null, the String "null" is appended.
### `[addChar](#addChar)(c:[Int](int "Int - The standard Int type.")):[Void](void "Void - The standard Void type.")`
Appends the character identified by `c` to `this` StringBuf.
If `c` is negative or has another invalid value, the result is unspecified.
### `[addSub](#addSub)(s:[String](string "String - The basic String class."), pos:[Int](int "Int - The standard Int type."), ?len:[Int](int "Int - The standard Int type.")):[Void](void "Void - The standard Void type.")`
Appends a substring of `s` to `this` StringBuf.
This function expects `pos` and `len` to describe a valid substring of `s`, or else the result is unspecified. To get more robust behavior, `this.[add](#add)(s.substr(pos,len))` can be used instead.
If `s` or `pos` are null, the result is unspecified.
If `len` is omitted or null, the substring ranges from `pos` to the end of `s`.
### `[toString](#toString)():[String](string "String - The basic String class.")`
Returns the content of `this` StringBuf as String.
The buffer is not emptied by this operation.
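A minimal usage sketch:

```haxe
class Main {
  static function main() {
    var buf = new StringBuf();
    buf.add("x = ");
    buf.add(42);           // non-String values are converted to a representation
    buf.addChar("!".code); // appends the character with code 33
    trace(buf.toString()); // x = 42!
    trace(buf.length);     // 7: the buffer is not emptied by toString()
  }
}
```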
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/StringBuf.html>

StringTools
===========
[no package](index)
*Available on all platforms*
This class provides advanced methods on Strings. It is ideally used with `using [StringTools](stringtools)` and then acts as an [extension](https://haxe.org/manual/lf-static-extension.html) to the `[String](string)` class.
If the first argument to any of the methods is null, the result is unspecified.
Static methods
--------------
### `staticinline[contains](#contains)(s:[String](string "String - The basic String class."), value:[String](string "String - The basic String class.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns `[true](bool)` if `s` contains `value` and `[false](bool)` otherwise.
When `value` is `null`, the result is unspecified.
### `static[endsWith](#endsWith)(s:[String](string "String - The basic String class."), end:[String](string "String - The basic String class.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if the string `s` ends with the string `end`.
If `end` is `null`, the result is unspecified.
If `end` is the empty String `""`, the result is true.
### `static[fastCodeAt](#fastCodeAt)(s:[String](string "String - The basic String class."), index:[Int](int "Int - The standard Int type.")):[Int](int "Int - The standard Int type.")`
Returns the character code at position `index` of String `s`, or an end-of-file indicator if `index` equals `s.length`.
This method is faster than `[String.charCodeAt](string#charCodeAt)()` on some platforms, but the result is unspecified if `index` is negative or greater than `s.length`.
End of file status can be checked by calling `[StringTools.isEof](stringtools#isEof)()` with the returned value as argument.
This operation is not guaranteed to work if `s` contains the `\0` character.
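The typical pattern combining `fastCodeAt` with `isEof` to scan a string:

```haxe
class Main {
  static function main() {
    var s = "abc";
    var i = 0;
    while (true) {
      var c = StringTools.fastCodeAt(s, i);
      if (StringTools.isEof(c)) break; // i reached s.length
      trace(c); // 97, 98, 99
      i++;
    }
  }
}
```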
### `static[hex](#hex)(n:[Int](int "Int - The standard Int type."), ?digits:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
Encodes `n` into a hexadecimal representation.
If `digits` is specified, the resulting String is padded with "0" until its `length` equals `digits`.
### `static[htmlEscape](#htmlEscape)(s:[String](string "String - The basic String class."), ?quotes:[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")):[String](string "String - The basic String class.")`
Escapes HTML special characters of the string `s`.
The following replacements are made:
* `&` becomes `&amp;`
* `<` becomes `&lt;`
* `>` becomes `&gt;`
If `quotes` is true, the following characters are also replaced:
* `"` becomes `&quot;`
* `'` becomes `&#039;`
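For example:

```haxe
class Main {
  static function main() {
    var s = 'a < b & "c"';
    trace(StringTools.htmlEscape(s));       // a &lt; b &amp; "c" (quotes untouched)
    trace(StringTools.htmlEscape(s, true)); // a &lt; b &amp; &quot;c&quot;
  }
}
```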
### `static[htmlUnescape](#htmlUnescape)(s:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Unescapes HTML special characters of the string `s`.
This is the inverse operation to `htmlEscape`, i.e. the following always holds: `htmlUnescape(htmlEscape(s)) == s`.
The following replacements are made:
* `&amp;` becomes `&`
* `&lt;` becomes `<`
* `&gt;` becomes `>`
* `&quot;` becomes `"`
* `&#039;` becomes `'`
### `staticinline[isEof](#isEof)(c:[Int](int "Int - The standard Int type.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `c` represents the end-of-file (EOF) character.
### `static[isSpace](#isSpace)(s:[String](string "String - The basic String class."), pos:[Int](int "Int - The standard Int type.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if the character in the string `s` at position `pos` is a space.
A character is considered to be a space character if its character code is 9,10,11,12,13 or 32.
If `s` is the empty String `""`, or if pos is not a valid position within `s`, the result is false.
### `staticinline[iterator](#iterator)(s:[String](string "String - The basic String class.")):[StringIterator](haxe/iterators/stringiterator "haxe.iterators.StringIterator - This iterator can be used to iterate over char codes in a string.")`
Returns an iterator of the char codes.
Note that char codes may differ across platforms because of different internal encoding of strings in different runtimes. For the consistent cross-platform UTF8 char codes see `[haxe.iterators.StringIteratorUnicode](haxe/iterators/stringiteratorunicode#StringIteratorUnicode)`.
### `staticinline[keyValueIterator](#keyValueIterator)(s:[String](string "String - The basic String class.")):[StringKeyValueIterator](haxe/iterators/stringkeyvalueiterator "haxe.iterators.StringKeyValueIterator - This iterator can be used to iterate over char indexes and char codes in a string.")`
Returns an iterator of the char indexes and codes.
Note that char codes may differ across platforms because of different internal encoding of strings in different runtimes. For the consistent cross-platform UTF8 char codes see `[haxe.iterators.StringKeyValueIteratorUnicode](haxe/iterators/stringkeyvalueiteratorunicode#StringKeyValueIteratorUnicode)`.
### `static[lpad](#lpad)(s:[String](string "String - The basic String class."), c:[String](string "String - The basic String class."), l:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
Concatenates `c` to `s` until `s.length` is at least `l`.
If `c` is the empty String `""` or if `l` does not exceed `s.length`, `s` is returned unchanged.
If `c.length` is 1, the resulting String length is exactly `l`.
Otherwise the length may exceed `l`.
If `c` is null, the result is unspecified.
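For example:

```haxe
class Main {
  static function main() {
    trace(StringTools.lpad("7", "0", 3));    // 007
    trace(StringTools.rpad("ab", ".", 5));   // ab...
    trace(StringTools.lpad("long", "0", 2)); // long: l does not exceed s.length
    trace(StringTools.lpad("a", "xy", 4));   // xyxya: length may exceed l when c.length > 1
  }
}
```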
### `static[ltrim](#ltrim)(s:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Removes leading space characters of `s`.
This function internally calls `isSpace()` to decide which characters to remove.
If `s` is the empty String `""` or consists only of space characters, the result is the empty String `""`.
### `static[replace](#replace)(s:[String](string "String - The basic String class."), sub:[String](string "String - The basic String class."), by:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Replace all occurrences of the String `sub` in the String `s` by the String `by`.
If `sub` is the empty String `""`, `by` is inserted after each character of `s` except the last one. If `by` is also the empty String `""`, `s` remains unchanged.
If `sub` or `by` are null, the result is unspecified.
### `static[rpad](#rpad)(s:[String](string "String - The basic String class."), c:[String](string "String - The basic String class."), l:[Int](int "Int - The standard Int type.")):[String](string "String - The basic String class.")`
Appends `c` to `s` until `s.length` is at least `l`.
If `c` is the empty String `""` or if `l` does not exceed `s.length`, `s` is returned unchanged.
If `c.length` is 1, the resulting String length is exactly `l`.
Otherwise the length may exceed `l`.
If `c` is null, the result is unspecified.
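As an illustrative sketch (not part of the original reference), the padding and replace helpers behave like this when `StringTools` is brought in as a static extension with `using`:

```haxe
using StringTools;

class Main {
  static function main() {
    trace("7".lpad("0", 3));          // "007" – left-padded to length 3
    trace("ab".rpad("-", 5));         // "ab---" – right-padded to length 5
    trace("ab".lpad("xy", 3));        // "xyab" – length may exceed l when c.length > 1
    trace("a.b.c".replace(".", "/")); // "a/b/c"
  }
}
```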
### `static[rtrim](#rtrim)(s:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Removes trailing space characters of `s`.
This function internally calls `isSpace()` to decide which characters to remove.
If `s` is the empty String `""` or consists only of space characters, the result is the empty String `""`.
### `static[startsWith](#startsWith)(s:[String](string "String - The basic String class."), start:[String](string "String - The basic String class.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if the string `s` starts with the string `start`.
If `start` is `null`, the result is unspecified.
If `start` is the empty String `""`, the result is true.
### `static[trim](#trim)(s:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Removes leading and trailing space characters of `s`.
This is a convenience function for `ltrim(rtrim(s))`.
### `static[urlDecode](#urlDecode)(s:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Decodes a URL using the standard format.
### `static[urlEncode](#urlEncode)(s:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Encodes a URL using the standard format.
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/StringTools.html>
UInt([Int](int "Int - The standard Int type."))
===============================================
[no package](index)
from [Int](int "Int - The standard Int type.") to [Int](int "Int - The standard Int type."),
*Available on all platforms*
The unsigned `[Int](int)` type is only defined for Flash and C#. Simulate it for other platforms.
See also:
* <https://haxe.org/manual/types-basic-types.html>
<https://api.haxe.org/UInt.html>
Xml
====
[no package](index)
*Available on all platforms*
Cross-platform Xml API.
See also:
* <https://haxe.org/manual/std-Xml.html>
Static variables
----------------
### `staticread only[CData](#CData):[XmlType](xmltype "XmlType - Xml node types.") = XmlType.CData`
XML character data type.
### `staticread only[Comment](#Comment):[XmlType](xmltype "XmlType - Xml node types.") = XmlType.Comment`
XML comment type.
### `staticread only[DocType](#DocType):[XmlType](xmltype "XmlType - Xml node types.") = XmlType.DocType`
XML doctype element type.
### `staticread only[Document](#Document):[XmlType](xmltype "XmlType - Xml node types.") = XmlType.Document`
XML document type.
### `staticread only[Element](#Element):[XmlType](xmltype "XmlType - Xml node types.") = XmlType.Element`
XML element type.
### `staticread only[PCData](#PCData):[XmlType](xmltype "XmlType - Xml node types.") = XmlType.PCData`
XML parsed character data type.
### `staticread only[ProcessingInstruction](#ProcessingInstruction):[XmlType](xmltype "XmlType - Xml node types.") = XmlType.ProcessingInstruction`
XML processing instruction type.
Static methods
--------------
### `static[createCData](#createCData)(data:[String](string "String - The basic String class.")):[Xml](xml "Xml - Cross-platform Xml API.")`
Creates a node of the given type.
### `static[createComment](#createComment)(data:[String](string "String - The basic String class.")):[Xml](xml "Xml - Cross-platform Xml API.")`
Creates a node of the given type.
### `static[createDocType](#createDocType)(data:[String](string "String - The basic String class.")):[Xml](xml "Xml - Cross-platform Xml API.")`
Creates a node of the given type.
### `static[createDocument](#createDocument)():[Xml](xml "Xml - Cross-platform Xml API.")`
Creates a node of the given type.
### `static[createElement](#createElement)(name:[String](string "String - The basic String class.")):[Xml](xml "Xml - Cross-platform Xml API.")`
Creates a node of the given type.
### `static[createPCData](#createPCData)(data:[String](string "String - The basic String class.")):[Xml](xml "Xml - Cross-platform Xml API.")`
Creates a node of the given type.
### `static[createProcessingInstruction](#createProcessingInstruction)(data:[String](string "String - The basic String class.")):[Xml](xml "Xml - Cross-platform Xml API.")`
Creates a node of the given type.
### `static[parse](#parse)(str:[String](string "String - The basic String class.")):[Xml](xml "Xml - Cross-platform Xml API.")`
Parses the String into an Xml document.
Variables
---------
### `[nodeName](#nodeName):[String](string "String - The basic String class.")`
Returns the node name of an Element.
### `read only[nodeType](#nodeType):[XmlType](xmltype "XmlType - Xml node types.")`
Returns the type of the Xml Node. This should be used before accessing other functions since some might raise an exception if the node type is not correct.
### `[nodeValue](#nodeValue):[String](string "String - The basic String class.")`
Returns the node value. Only works if the Xml node is not an Element or a Document.
### `read only[parent](#parent):[Xml](xml "Xml - Cross-platform Xml API.")`
Returns the parent object in the Xml hierarchy. The parent can be `null`, an Element or a Document.
Methods
-------
### `[addChild](#addChild)(x:[Xml](xml "Xml - Cross-platform Xml API.")):[Void](void "Void - The standard Void type.")`
Adds a child node to the Document or Element. A child node can only be inside one given parent node, which is indicated by the `parent` property. If the child is already inside this Document or Element, it will be moved to the last position among the Document or Element's children. If the child node was previously inside a different node, it will be moved to this Document or Element.
### `[attributes](#attributes)():[Iterator](iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<[String](string "String - The basic String class.")>`
Returns an `[Iterator](iterator)` on all the attribute names.
### `[elements](#elements)():[Iterator](iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<[Xml](xml "Xml - Cross-platform Xml API.")>`
Returns an iterator of all child nodes which are Elements. Only works if the current node is an Element or a Document.
### `[elementsNamed](#elementsNamed)(name:[String](string "String - The basic String class.")):[Iterator](iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<[Xml](xml "Xml - Cross-platform Xml API.")>`
Returns an iterator of all child nodes which are Elements with the given nodeName. Only works if the current node is an Element or a Document.
### `[exists](#exists)(att:[String](string "String - The basic String class.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if the Element node has a given attribute. Attributes are case-sensitive.
### `inline[firstChild](#firstChild)():[Xml](xml "Xml - Cross-platform Xml API.")`
Returns the first child node.
### `[firstElement](#firstElement)():[Xml](xml "Xml - Cross-platform Xml API.")`
Returns the first child node which is an Element.
### `[get](#get)(att:[String](string "String - The basic String class.")):[String](string "String - The basic String class.")`
Get the given attribute of an Element node. Returns `null` if not found. Attributes are case-sensitive.
### `[insertChild](#insertChild)(x:[Xml](xml "Xml - Cross-platform Xml API."), pos:[Int](int "Int - The standard Int type.")):[Void](void "Void - The standard Void type.")`
Inserts a child at the given position among the other children. A child node can only be inside one given parent node, which is indicated by the `parent` property. If the child is already inside this Document or Element, it will be moved to the new position among the Document or Element's children. If the child node was previously inside a different node, it will be moved to this Document or Element.
### `inline[iterator](#iterator)():[Iterator](iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<[Xml](xml "Xml - Cross-platform Xml API.")>`
Returns an iterator of all child nodes. Only works if the current node is an Element or a Document.
### `[remove](#remove)(att:[String](string "String - The basic String class.")):[Void](void "Void - The standard Void type.")`
Removes an attribute for an Element node. Attributes are case-sensitive.
### `[removeChild](#removeChild)(x:[Xml](xml "Xml - Cross-platform Xml API.")):[Bool](bool "Bool - The standard Boolean type, which can either be true or false.")`
Removes a child from the Document or Element. Returns true if the child was successfully removed.
### `[set](#set)(att:[String](string "String - The basic String class."), value:[String](string "String - The basic String class.")):[Void](void "Void - The standard Void type.")`
Set the given attribute value for an Element node. Attributes are case-sensitive.
### `inline[toString](#toString)():[String](string "String - The basic String class.")`
Returns a String representation of the Xml node.
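A minimal usage sketch (illustrative, with made-up element and attribute names) tying the methods above together — parsing a document, walking named child elements, and reading attributes:

```haxe
class Main {
  static function main() {
    var doc = Xml.parse('<users><user name="ana"/><user name="bob"/></users>');
    var root = doc.firstElement(); // the <users> Element inside the Document
    for (user in root.elementsNamed("user")) {
      trace(user.get("name")); // "ana", then "bob"
    }
    root.firstElement().set("name", "anna"); // attributes are case-sensitive
    trace(doc.toString()); // serialized document with the updated attribute
  }
}
```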
<https://api.haxe.org/Xml.html>
XmlType([Int](int "Int - The standard Int type."))
==================================================
[no package](index)
import [Xml](xml)
*Available on all platforms*
Xml node types.
See also:
* <https://haxe.org/manual/std-Xml.html>
Variables
---------
### `inlineread only[CData](#CData):[XmlType](xmltype "XmlType - Xml node types.") = 2`
Represents XML character data type.
### `inlineread only[Comment](#Comment):[XmlType](xmltype "XmlType - Xml node types.") = 3`
Represents an XML comment type.
### `inlineread only[DocType](#DocType):[XmlType](xmltype "XmlType - Xml node types.") = 4`
Represents an XML doctype element type.
### `inlineread only[Document](#Document):[XmlType](xmltype "XmlType - Xml node types.") = 6`
Represents an XML document type.
### `inlineread only[Element](#Element):[XmlType](xmltype "XmlType - Xml node types.") = 0`
Represents an XML element type.
### `inlineread only[PCData](#PCData):[XmlType](xmltype "XmlType - Xml node types.") = 1`
Represents XML parsed character data type.
### `inlineread only[ProcessingInstruction](#ProcessingInstruction):[XmlType](xmltype "XmlType - Xml node types.") = 5`
Represents an XML processing instruction type.
Methods
-------
### `[toString](#toString)():[String](string "String - The basic String class.")`
<https://api.haxe.org/XmlType.html>
CallStack([Array](../array "Array")<[StackItem](stackitem "haxe.StackItem - Elements return by CallStack methods.")>)
=====================================================================================================================
package [haxe](index)
from [Array](../array "Array")<[StackItem](stackitem "haxe.StackItem - Elements return by CallStack methods.")>
*Available on all platforms*
Get information about the call stack.
Static methods
--------------
### `static[callStack](#callStack)():[Array](../array "Array")<[StackItem](stackitem "haxe.StackItem - Elements return by CallStack methods.")>`
Return the call stack elements, or an empty array if not available.
### `static[exceptionStack](#exceptionStack)():[Array](../array "Array")<[StackItem](stackitem "haxe.StackItem - Elements return by CallStack methods.")>`
Return the exception stack: the stack elements between the place the last exception was thrown and the place it was caught, or an empty array if not available.
May not work if the catch type was a derivative of `[haxe.Exception](exception#Exception)`.
### `static[toString](#toString)(stack:[CallStack](callstack "haxe.CallStack - Get information about the call stack.")):[String](../string "String - The basic String class.")`
Returns a representation of the stack as a printable string.
Variables
---------
### `read only[length](#length):[Int](../int "Int - The standard Int type.")`
The length of this stack.
Methods
-------
### `inline[copy](#copy)():[CallStack](callstack "haxe.CallStack - Get information about the call stack.")`
Make a copy of the stack.
### `inline[get](#get)(index:[Int](../int "Int - The standard Int type.")):[StackItem](stackitem "haxe.StackItem - Elements return by CallStack methods.")`
### `[subtract](#subtract)(stack:[CallStack](callstack "haxe.CallStack - Get information about the call stack.")):[CallStack](callstack "haxe.CallStack - Get information about the call stack.")`
Returns a range of entries of the current stack from the beginning to the common part of this and `stack`.
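As a hedged sketch of the static methods above — how well this works depends on the target's stack support:

```haxe
import haxe.CallStack;

class Main {
  static function main() {
    try {
      throw "boom";
    } catch (e:String) {
      // Stack elements between the throw site and this catch,
      // or an empty array if the target cannot provide them.
      trace(CallStack.toString(CallStack.exceptionStack()));
    }
    // The call stack at this very point, as a printable string.
    trace(CallStack.toString(CallStack.callStack()));
  }
}
```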
<https://api.haxe.org/haxe/CallStack.html>
Constructible<T>([Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."))
=======================================================================================================================
package [haxe](index)
import [haxe.Constraints](constraints)
*Available on all platforms*
This type unifies with any instance of classes that have a constructor which
* is `public` and
* unifies with the type used for type parameter `T`.
If a type parameter `A` is assigned to a type parameter `B` which is constrained to `[Constructible](constructible#Constructible)<T>`, A must be explicitly constrained to `[Constructible](constructible#Constructible)<T>` as well.
It is intended to be used as a type parameter constraint. If used as a real type, the underlying type will be `[Dynamic](../dynamic)`.
<https://api.haxe.org/haxe/Constructible.html>
DynamicAccess<T>([Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")<T>)
==========================================================================================================================
package [haxe](index)
from [Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")<T> to [Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")<T>
*Available on all platforms*
DynamicAccess is an abstract type for working with anonymous structures that are intended to hold collections of objects by the string key.
For example, these types of structures are often created from JSON.
Basically, it wraps `[Reflect](../reflect)` calls in a `[Map](../map)`-like interface.
Methods
-------
### `inline[copy](#copy)():[DynamicAccess](dynamicaccess "haxe.DynamicAccess - DynamicAccess is an abstract type for working with anonymous structures that are intended to hold collections of objects by the string key.")<T>`
Returns a shallow copy of the structure.
### `inline[exists](#exists)(key:[String](../string "String - The basic String class.")):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if the structure contains a specified `key`.
If `key` is `null`, the result is unspecified.
### `inline[get](#get)(key:[String](../string "String - The basic String class.")):[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
Returns a value by specified `key`.
If the structure does not contain the given key, `null` is returned.
If `key` is `null`, the result is unspecified.
### `inline[iterator](#iterator)():[DynamicAccessIterator](iterators/dynamicaccessiterator "haxe.iterators.DynamicAccessIterator - This iterator can be used to iterate over the values of haxe.")<T>`
Returns an Iterator over the values of this `[DynamicAccess](dynamicaccess#DynamicAccess)`.
The order of values is undefined.
### `inline[keyValueIterator](#keyValueIterator)():[DynamicAccessKeyValueIterator](iterators/dynamicaccesskeyvalueiterator "haxe.iterators.DynamicAccessKeyValueIterator - This Key/Value iterator can be used to iterate over haxe.")<T>`
Returns an Iterator over the keys and values of this `[DynamicAccess](dynamicaccess#DynamicAccess)`.
The order of values is undefined.
### `inline[keys](#keys)():[Array](../array "Array")<[String](../string "String - The basic String class.")>`
Returns an array of `keys` in a structure.
### `inline[remove](#remove)(key:[String](../string "String - The basic String class.")):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
Removes a specified `key` from the structure.
Returns true, if `key` was present in structure, or false otherwise.
If `key` is `null`, the result is unspecified.
### `inline[set](#set)(key:[String](../string "String - The basic String class."), value:T):T`
Sets a `value` for a specified `key`.
If the structure contains the given key, its value will be overwritten.
Returns the given value.
If `key` is `null`, the result is unspecified.
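A small sketch (illustrative keys and values) of the `Map`-like interface above, using a parsed JSON object as the typical source of such a structure:

```haxe
import haxe.DynamicAccess;

class Main {
  static function main() {
    // JSON objects are a typical source of such anonymous structures.
    var scores:DynamicAccess<Int> = haxe.Json.parse('{"ana": 3, "bob": 5}');
    for (name in scores.keys()) {
      trace('$name => ${scores.get(name)}'); // order of keys is undefined
    }
    scores.set("cid", 7);
    trace(scores.exists("cid")); // true
    trace(scores.remove("bob")); // true – "bob" was present
  }
}
```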
<https://api.haxe.org/haxe/DynamicAccess.html>
EntryPoint
===========
package [haxe](index)
*Available on all platforms*
If `[haxe.MainLoop](mainloop#MainLoop)` is kept from DCE, then we will insert a `[haxe.EntryPoint.run](entrypoint#run)()` call just at the end of `main()`. This class can be redefined by custom frameworks so they can handle their own main loop logic.
Static variables
----------------
### `staticread only[threadCount](#threadCount):[Int](../int "Int - The standard Int type.") = 0`
Static methods
--------------
### `static[addThread](#addThread)(f:() ‑> [Void](../void "Void - The standard Void type.")):[Void](../void "Void - The standard Void type.")`
### `static[run](#run)():[Void](../void "Void - The standard Void type.")`
Start the main loop. Depending on the platform, this can return immediately or will only return when the application exits.
### `static[runInMainThread](#runInMainThread)(f:() ‑> [Void](../void "Void - The standard Void type.")):[Void](../void "Void - The standard Void type.")`
### `static[wakeup](#wakeup)():[Void](../void "Void - The standard Void type.")`
Wakes up a sleeping `run()`.
<https://api.haxe.org/haxe/EntryPoint.html>
MainLoop
=========
package [haxe](index)
*Available on all platforms*
Static variables
----------------
### `staticread only[threadCount](#threadCount):[Int](../int "Int - The standard Int type.")`
Static methods
--------------
### `static[add](#add)(f:() ‑> [Void](../void "Void - The standard Void type."), priority:[Int](../int "Int - The standard Int type.") = 0):[MainEvent](mainevent "haxe.MainEvent")`
Add a pending event to be run into the main loop.
### `static[addThread](#addThread)(f:() ‑> [Void](../void "Void - The standard Void type.")):[Void](../void "Void - The standard Void type.")`
### `static[hasEvents](#hasEvents)():[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `static[runInMainThread](#runInMainThread)(f:() ‑> [Void](../void "Void - The standard Void type.")):[Void](../void "Void - The standard Void type.")`
<https://api.haxe.org/haxe/MainLoop.html>
EnumFlags<T>([Int](../int "Int - The standard Int type."))
===========================================================
package [haxe](index)
*Available on all platforms*
A typed interface for bit flags. This is not a real object, only a typed interface for an actual Int. Each flag can be tested/set with the corresponding enum instance. Up to 32 flags can be stored that way.
Enum constructor indices are preserved from Haxe syntax, so the first declared is index 0, the next index 1 etc. The methods are optimized if the enum instance is passed directly, e.g. as `has(EnumCtor)`. Otherwise `[Type.enumIndex](../type#enumIndex)()` reflection is used.
Static methods
--------------
### `staticinline[ofInt](#ofInt)<T>(i:[Int](../int "Int - The standard Int type.")):[EnumFlags](enumflags "haxe.EnumFlags - A typed interface for bit flags.")<T>`
Converts an integer bitflag into a typed one (this is a no-op, it does not have any impact on speed).
Methods
-------
### `inline[has](#has)(v:T):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
Checks if the index of enum instance `v` is set.
This method is optimized if `v` is an enum instance expression such as `SomeEnum.SomeCtor`.
If `v` is `null`, the result is unspecified.
### `inline[set](#set)(v:T):[Void](../void "Void - The standard Void type.")`
Sets the index of enum instance `v`.
This method is optimized if `v` is an enum instance expression such as `SomeEnum.SomeCtor`.
If `v` is `null`, the result is unspecified.
### `inline[toInt](#toInt)():[Int](../int "Int - The standard Int type.")`
Convert the typed bitflag into the corresponding int value (this is a no-op, it doesn't have any impact on speed).
### `inline[unset](#unset)(v:T):[Void](../void "Void - The standard Void type.")`
Unsets the index of enum instance `v`.
This method is optimized if `v` is an enum instance expression such as `SomeEnum.SomeCtor`.
If `v` is `null`, the result is unspecified.
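Sketching the index-to-bit mapping described above with a made-up `Permission` enum (constructor index n maps to bit 2^n):

```haxe
import haxe.EnumFlags;

enum Permission {
  Read;  // index 0 -> bit 1
  Write; // index 1 -> bit 2
  Exec;  // index 2 -> bit 4
}

class Main {
  static function main() {
    var flags = new EnumFlags<Permission>();
    flags.set(Read);
    flags.set(Write);
    trace(flags.has(Read));  // true
    flags.unset(Write);
    trace(flags.has(Write)); // false
    trace(flags.toInt());    // 1 – only the Read bit remains
  }
}
```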
<https://api.haxe.org/haxe/EnumFlags.html>
EnumTools
==========
package [haxe](index)
*Available on all platforms*
This class provides advanced methods on enums. It is ideally used with `using [EnumTools](enumtools#EnumTools)` and then acts as an [extension](https://haxe.org/manual/lf-static-extension.html) to the `enum` types.
If the first argument to any of the methods is `null`, the result is unspecified.
Static methods
--------------
### `staticinline[createAll](#createAll)<T>(e:[Enum](../enum "Enum - An abstract type that represents an Enum type.")<T>):[Array](../array "Array")<T>`
Returns a list of all constructors of enum `e` that require no arguments.
This may return the empty Array `[]` if all constructors of `e` require arguments.
Otherwise an instance of `e` constructed through each of its non-argument constructors is returned, in the order of the constructor declaration.
If `e` is `null`, the result is unspecified.
### `staticinline[createByIndex](#createByIndex)<T>(e:[Enum](../enum "Enum - An abstract type that represents an Enum type.")<T>, index:[Int](../int "Int - The standard Int type."), ?params:[Array](../array "Array")<[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):T`
Creates an instance of enum `e` by calling its constructor number `index` with arguments `params`.
The constructor indices are preserved from Haxe syntax, so the first declared is index 0, the next index 1 etc.
If `e` or `index` is `null`, or if enum `e` has no constructor corresponding to index `index`, or if the number of elements in `params` does not match the expected number of constructor arguments, or if any argument has an invalid type, the result is unspecified.
### `staticinline[createByName](#createByName)<T>(e:[Enum](../enum "Enum - An abstract type that represents an Enum type.")<T>, constr:[String](../string "String - The basic String class."), ?params:[Array](../array "Array")<[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>):T`
Creates an instance of enum `e` by calling its constructor `constr` with arguments `params`.
If `e` or `constr` is `null`, or if enum `e` has no constructor named `constr`, or if the number of elements in `params` does not match the expected number of constructor arguments, or if any argument has an invalid type, the result is unspecified.
### `staticinline[getConstructors](#getConstructors)<T>(e:[Enum](../enum "Enum - An abstract type that represents an Enum type.")<T>):[Array](../array "Array")<[String](../string "String - The basic String class.")>`
Returns a list of the names of all constructors of enum `e`.
The order of the constructor names in the returned Array is preserved from the original syntax.
If `e` is `null`, the result is unspecified.
### `staticinline[getName](#getName)<T>(e:[Enum](../enum "Enum - An abstract type that represents an Enum type.")<T>):[String](../string "String - The basic String class.")`
Returns the name of enum `e`, including its path.
If `e` is inside a package, the package structure is returned dot-separated, with another dot separating the enum name:
```
pack1.pack2.(...).packN.EnumName
```
If `e` is a sub-type of a Haxe module, that module is not part of the package structure.
If `e` has no package, the enum name is returned.
If `e` is `null`, the result is unspecified.
The enum name does not include any type parameters.
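A usage sketch (with a hypothetical `Color` enum) of the static-extension style the class description recommends:

```haxe
using haxe.EnumTools;

enum Color {
  Red;
  Green;
  Rgb(r:Int, g:Int, b:Int);
}

class Main {
  static function main() {
    trace(Color.getConstructors()); // ["Red","Green","Rgb"], declaration order
    trace(Color.createAll());       // [Red, Green] – only non-argument constructors
    var c = Color.createByName("Rgb", [255, 0, 0]);
    trace(c);                       // Rgb(255,0,0)
    trace(Color.getName());         // "Color"
  }
}
```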
<https://api.haxe.org/haxe/EnumTools.html>
EnumValueTools
===============
package [haxe](index)
import [haxe.EnumTools](enumtools)
*Available on all platforms*
This class provides advanced methods on enum values. It is ideally used with `using [EnumValueTools](enumvaluetools#EnumValueTools)` and then acts as an [extension](https://haxe.org/manual/lf-static-extension.html) to the `[EnumValue](../enumvalue)` types.
If the first argument to any of the methods is `null`, the result is unspecified.
Static methods
--------------
### `staticinline[equals](#equals)<T>(a:T, b:T):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
Recursively compares two enum instances `a` and `b` by value.
Unlike `a == b`, this function performs a deep equality check on the arguments of the constructors (if there are any).
If `a` or `b` are `null`, the result is unspecified.
### `staticinline[getIndex](#getIndex)(e:[EnumValue](../enumvalue "EnumValue - An abstract type that represents any enum value.")):[Int](../int "Int - The standard Int type.")`
Returns the index of enum instance `e`.
This corresponds to the original syntactic position of `e`. The index of the first declared constructor is 0, the next one is 1 etc.
If `e` is `null`, the result is unspecified.
### `staticinline[getName](#getName)(e:[EnumValue](../enumvalue "EnumValue - An abstract type that represents any enum value.")):[String](../string "String - The basic String class.")`
Returns the constructor name of enum instance `e`.
The result String does not contain any constructor arguments.
If `e` is `null`, the result is unspecified.
### `staticinline[getParameters](#getParameters)(e:[EnumValue](../enumvalue "EnumValue - An abstract type that represents any enum value.")):[Array](../array "Array")<[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
Returns a list of the constructor arguments of enum instance `e`.
If `e` has no arguments, the result is `[]`.
Otherwise the result are the values that were used as arguments to `e`, in the order of their declaration.
If `e` is `null`, the result is unspecified.
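A brief sketch (with a hypothetical `Shape` enum) contrasting `equals` with `==` and exercising the other accessors above:

```haxe
using haxe.EnumTools.EnumValueTools;

enum Shape {
  Point;
  Circle(r:Float);
}

class Main {
  static function main() {
    var a = Circle(1.0);
    var b = Circle(1.0);
    trace(a.equals(b));       // true – deep comparison of constructor arguments
    trace(a.getName());       // "Circle"
    trace(a.getIndex());      // 1 – second declared constructor
    trace(a.getParameters()); // [1]
  }
}
```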
<https://api.haxe.org/haxe/EnumValueTools.html>
Exception
==========
package [haxe](index)
extends NativeException
extended by [Error](macro/error "haxe.macro.Error - This error can be used to handle or produce compilation errors in macros."), [ValueException](valueexception "haxe.ValueException - An exception containing arbitrary value.")
*Available on all platforms*
Base class for exceptions.
If this class (or derivatives) is used to catch an exception, then `[haxe.CallStack.exceptionStack](callstack#exceptionStack)()` will not return a stack for the exception caught. Use `[haxe.Exception.stack](exception#stack)` property instead:
```
try {
throwSomething();
} catch(e:Exception) {
trace(e.stack);
}
```
Custom exceptions should extend this class:
```
class MyException extends haxe.Exception {}
//...
throw new MyException('terrible exception');
```
`[haxe.Exception](exception#Exception)` is also a wildcard type to catch any exception:
```
try {
throw 'Catch me!';
} catch(e:haxe.Exception) {
trace(e.message); // Output: Catch me!
}
```
To rethrow an exception just throw it again. Haxe will try to rethrow an original native exception whenever possible.
```
try {
var a:Array<Int> = null;
a.push(1); // generates target-specific null-pointer exception
} catch(e:haxe.Exception) {
throw e; // rethrows native exception instead of haxe.Exception
}
```
Static methods
--------------
### `static[caught](#caught)(value:[Any](../any "Any - Any is a type that is compatible with any other in both ways.")):[Exception](exception "haxe.Exception - Base class for exceptions.")`
*Available on cs*
### `static[thrown](#thrown)(value:[Any](../any "Any - Any is a type that is compatible with any other in both ways.")):[Any](../any "Any - Any is a type that is compatible with any other in both ways.")`
*Available on cs*
Constructor
-----------
### `[new](#new)(message:[String](../string "String - The basic String class."), ?previous:[Exception](exception "haxe.Exception - Base class for exceptions."), ?native:[Any](../any "Any - Any is a type that is compatible with any other in both ways."))`
Create a new Exception instance.
The `previous` argument could be used for exception chaining.
The `native` argument is for internal usage only. There is no need to provide `native` argument manually and no need to keep it upon extending `[haxe.Exception](exception#Exception)` unless you know what you're doing.
Variables
---------
### `read only[message](#message):[String](../string "String - The basic String class.")`
Exception message.
### `read only[native](#native):[Any](../any "Any - Any is a type that is compatible with any other in both ways.")`
Native exception, which caused this exception.
### `read only[previous](#previous):[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Exception](exception "haxe.Exception - Base class for exceptions.")>`
Contains an exception, which was passed to `previous` constructor argument.
### `read only[stack](#stack):[CallStack](callstack "haxe.CallStack - Get information about the call stack.")`
The call stack at the moment of the exception creation.
Methods
-------
### `[details](#details)():[String](../string "String - The basic String class.")`
Detailed exception description.
Includes message, stack and the chain of previous exceptions (if set).
### `[toString](#toString)():[String](../string "String - The basic String class.")`
Returns exception message.
### `[unwrap](#unwrap)():[Any](../any "Any - Any is a type that is compatible with any other in both ways.")`
*Available on cs*
<https://api.haxe.org/haxe/Exception.html>
Log
====
package [haxe](index)
*Available on all platforms*
Log primarily provides the `trace()` method, which is invoked upon a call to `trace()` in Haxe code.
Static methods
--------------
### `staticdynamic[clear](#clear)():[Void](../void "Void - The standard Void type.")`
*Available on flash*
Clears the trace output.
### `static[formatOutput](#formatOutput)(v:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), infos:[PosInfos](posinfos "haxe.PosInfos - PosInfos is a magic type which can be used to generate position information into the output for debugging use.")):[String](../string "String - The basic String class.")`
Format the output of `trace` before printing it.
### `staticdynamic[setColor](#setColor)(rgb:[Int](../int "Int - The standard Int type.")):[Void](../void "Void - The standard Void type.")`
*Available on flash*
Sets the color of the trace output to `rgb`.
### `static dynamic[trace](#trace)(v:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), ?infos:[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[PosInfos](posinfos "haxe.PosInfos - PosInfos is a magic type which can be used to generate position information into the output for debugging use.")>):[Void](../void "Void - The standard Void type.")`
Outputs `v` in a platform-dependent way.
The second parameter `infos` is injected by the compiler and contains information about the position where the `trace()` call was made.
This method can be rebound to a custom function:
```
var oldTrace = haxe.Log.trace; // store old function
haxe.Log.trace = function(v, ?infos) {
// handle trace
}
...
haxe.Log.trace = oldTrace;
```
If it is bound to null, subsequent calls to `trace()` will cause an exception.
<https://api.haxe.org/haxe/Log.html>

Function([Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."))
================================================================================================================
package [haxe](index)
import [haxe.Constraints](constraints)
*Available on all platforms*
This type unifies with any function type.
It is intended to be used as a type parameter constraint. If used as a real type, the underlying type will be `[Dynamic](../dynamic)`.
<https://api.haxe.org/haxe/Function.html>

Http
=====
package [haxe](index)
*Available on all platforms*
#### lua, php, cs, java, cpp, neko, hl, python, macro
*alias for* `[sys.Http](https://api.haxe.org/sys/Http.html "sys.Http")`
#### flash
*alias for* `[haxe.HttpFlash](httpflash "haxe.HttpFlash")`
#### js
*alias for* `[haxe.http.HttpJs](http/httpjs "haxe.http.HttpJs")`
<https://api.haxe.org/haxe/Http.html>

HttpFlash
==========
package [haxe](index)
extends [HttpBase](http/httpbase "haxe.http.HttpBase - This class can be used to handle Http requests consistently across platforms.")
import [haxe.Http](http)
*Available on flash*
Constructor
-----------
### `[new](#new)(url:[String](../string "String - The basic String class."))`
Methods
-------
### `[cancel](#cancel)():[Void](../void "Void - The standard Void type.")`
Cancels `this` Http request if `request` has been called and a response has not yet been received.
<https://api.haxe.org/haxe/HttpFlash.html>

IMap<K, V>
===========
package [haxe](index)
import [haxe.Constraints](constraints)
extended by [BalancedTree](ds/balancedtree "haxe.ds.BalancedTree - BalancedTree allows key-value mapping with arbitrary keys, as long as they can be ordered."), [EnumValueMap](ds/enumvaluemap "haxe.ds.EnumValueMap - EnumValueMap allows mapping of enum value keys to arbitrary values."), [IntMap](ds/intmap "haxe.ds.IntMap - IntMap allows mapping of Int keys to arbitrary values."), [ObjectMap](ds/objectmap "haxe.ds.ObjectMap - ObjectMap allows mapping of object keys to arbitrary values."), [StringMap](ds/stringmap "haxe.ds.StringMap - StringMap allows mapping of String keys to arbitrary values."), [UnsafeStringMap](ds/unsafestringmap "haxe.ds.UnsafeStringMap - This is similar to StringMap excepts that it does not sanitize the keys."), [WeakMap](ds/weakmap "haxe.ds.WeakMap - WeakMap allows mapping of object keys to arbitrary values.")
*Available on all platforms*
Methods
-------
### `[clear](#clear)():[Void](../void "Void - The standard Void type.")`
### `[copy](#copy)():[IMap](imap "haxe.IMap")<K, V>`
### `[exists](#exists)(k:K):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[get](#get)(k:K):[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<V>`
### `[iterator](#iterator)():[Iterator](../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<V>`
### `[keyValueIterator](#keyValueIterator)():[KeyValueIterator](../keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<K, V>`
### `[keys](#keys)():[Iterator](../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<K>`
### `[remove](#remove)(k:K):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[set](#set)(k:K, v:V):[Void](../void "Void - The standard Void type.")`
### `[toString](#toString)():[String](../string "String - The basic String class.")`
<https://api.haxe.org/haxe/IMap.html>

Int32([Int](../int "Int - The standard Int type."))
====================================================
package [haxe](index)
from [Int](../int "Int - The standard Int type.") to [Int](../int "Int - The standard Int type.")
*Available on all platforms*
Int32 provides a 32-bit integer with consistent overflow behavior across all platforms.
Static methods
--------------
### `static[ucompare](#ucompare)(a:[Int32](int32 "haxe.Int32 - Int32 provides a 32-bit integer with consistent overflow behavior across all platforms."), b:[Int32](int32 "haxe.Int32 - Int32 provides a 32-bit integer with consistent overflow behavior across all platforms.")):[Int](../int "Int - The standard Int type.")`
Compare `a` and `b` in unsigned mode.
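For example, `-1` is `0xFFFFFFFF` when treated as unsigned, so it compares greater than any small positive value (an illustrative sketch):

```haxe
trace(haxe.Int32.ucompare(-1, 1) > 0); // true: 0xFFFFFFFF > 1 unsigned
trace(haxe.Int32.ucompare(1, 2) < 0); // true
```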
<https://api.haxe.org/haxe/Int32.html>

Int64(__Int64)
=================
package [haxe](index)
from __Int64, to __Int64
*Available on all platforms*
A cross-platform signed 64-bit integer. Int64 instances can be created from two 32-bit words using `[Int64.make](int64#make)()`.
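A sketch of constructing an Int64 from two 32-bit words and printing it:

```haxe
// high word 1, low word 0 → 1 * 2^32 = 4294967296
var x = haxe.Int64.make(1, 0);
trace(haxe.Int64.toStr(x)); // "4294967296"
```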
Static methods
--------------
### `static inline[add](#add)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns the sum of `a` and `b`.
### `static inline[and](#and)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns the bitwise AND of `a` and `b`.
### `static inline[compare](#compare)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int](../int "Int - The standard Int type.")`
Compares `a` and `b` in signed mode. Returns a negative value if `a < b`, positive if `a > b`, or 0 if `a == b`.
### `static inline[div](#div)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns the quotient of `a` divided by `b`.
### `static[divMod](#divMod)(dividend:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), divisor:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):{quotient:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), modulus:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")}`
Performs signed integer division of `dividend` by `divisor`. Returns `{ quotient : [Int64](int64#Int64), modulus : [Int64](int64#Int64) }`.
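A sketch of the returned structure:

```haxe
var r = haxe.Int64.divMod(haxe.Int64.ofInt(7), haxe.Int64.ofInt(3));
trace(haxe.Int64.toStr(r.quotient)); // "2"
trace(haxe.Int64.toStr(r.modulus)); // "1"
```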
### `static inline[eq](#eq)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns `[true](../bool)` if `a` is equal to `b`.
### `static[fromFloat](#fromFloat)(f:[Float](../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
### `static inline[getHigh](#getHigh)(x:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int32](int32 "haxe.Int32 - Int32 provides a 32-bit integer with consistent overflow behavior across all platforms.")`
Returns the high 32-bit word of `x`.
### `static inline[getLow](#getLow)(x:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int32](int32 "haxe.Int32 - Int32 provides a 32-bit integer with consistent overflow behavior across all platforms.")`
Returns the low 32-bit word of `x`.
### `static inline[is](#is)(val:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
**Deprecated:** "haxe.Int64.is() is deprecated. Use haxe.Int64.isInt64() instead"
### `static inline[isInt64](#isInt64)(val:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns whether the value `val` is of type `[haxe.Int64](int64#Int64)`
### `static inline[isNeg](#isNeg)(x:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns `[true](../bool)` if `x` is less than zero.
### `static inline[isZero](#isZero)(x:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns `[true](../bool)` if `x` is exactly zero.
### `static inline[make](#make)(high:[Int32](int32 "haxe.Int32 - Int32 provides a 32-bit integer with consistent overflow behavior across all platforms."), low:[Int32](int32 "haxe.Int32 - Int32 provides a 32-bit integer with consistent overflow behavior across all platforms.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Construct an Int64 from two 32-bit words `high` and `low`.
### `static inline[mod](#mod)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns the modulus of `a` divided by `b`.
### `static[mul](#mul)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns the product of `a` and `b`.
### `static[neg](#neg)(x:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns the negative of `x`.
### `static inline[neq](#neq)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns `[true](../bool)` if `a` is not equal to `b`.
### `static inline[ofInt](#ofInt)(x:[Int](../int "Int - The standard Int type.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns an Int64 with the value of the Int `x`. `x` is sign-extended to fill 64 bits.
### `static inline[or](#or)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns the bitwise OR of `a` and `b`.
### `static[parseString](#parseString)(sParam:[String](../string "String - The basic String class.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
### `static inline[shl](#shl)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int](../int "Int - The standard Int type.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns `a` left-shifted by `b` bits.
### `static inline[shr](#shr)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int](../int "Int - The standard Int type.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns `a` right-shifted by `b` bits in signed mode. `a` is sign-extended.
### `static inline[sub](#sub)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns `a` minus `b`.
### `static inline[toInt](#toInt)(x:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int](../int "Int - The standard Int type.")`
Returns an Int with the value of the Int64 `x`. Throws an exception if `x` cannot be represented in 32 bits.
### `static inline[toStr](#toStr)(x:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[String](../string "String - The basic String class.")`
Returns a signed decimal `[String](../string)` representation of `x`.
### `static inline[ucompare](#ucompare)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int](../int "Int - The standard Int type.")`
Compares `a` and `b` in unsigned mode. Returns a negative value if `a < b`, positive if `a > b`, or 0 if `a == b`.
### `static inline[ushr](#ushr)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int](../int "Int - The standard Int type.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns `a` right-shifted by `b` bits in unsigned mode. `a` is padded with zeroes.
### `static inline[xor](#xor)(a:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer."), b:[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns the bitwise XOR of `a` and `b`.
Variables
---------
### `read only[high](#high):[Int32](int32 "haxe.Int32 - Int32 provides a 32-bit integer with consistent overflow behavior across all platforms.")`
### `read only[low](#low):[Int32](int32 "haxe.Int32 - Int32 provides a 32-bit integer with consistent overflow behavior across all platforms.")`
Methods
-------
### `inline[copy](#copy)():[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Makes a copy of `this` Int64.
<https://api.haxe.org/haxe/Int64.html>

Int64Helper
============
package [haxe](index)
*Available on all platforms*
Helper for parsing to `[Int64](int64#Int64)` instances.
Static methods
--------------
### `static[fromFloat](#fromFloat)(f:[Float](../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Create `[Int64](int64#Int64)` from given float.
### `static[parseString](#parseString)(sParam:[String](../string "String - The basic String class.")):[Int64](int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Create `[Int64](int64#Int64)` from given string.
<https://api.haxe.org/haxe/Int64Helper.html>

Json
=====
package [haxe](index)
*Available on all platforms*
Cross-platform JSON API: it will automatically use the optimized native API if available. Use `-D haxeJSON` to force usage of the Haxe implementation even if a native API is found. This will provide extra encoding features such as enums (replaced by their index) and StringMaps.
See also:
* <https://haxe.org/manual/std-Json.html>

Static methods
--------------
### `static[parse](#parse)(text:[String](../string "String - The basic String class.")):[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
Parses given JSON-encoded `text` and returns the resulting object.
JSON objects are parsed into anonymous structures and JSON arrays are parsed into `[Array](../array)<[Dynamic](../dynamic)>`.
If given `text` is not valid JSON, an exception will be thrown.
See also:
* <https://haxe.org/manual/std-Json-parsing.html>

### `static[stringify](#stringify)(value:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), ?replacer:(key:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), value:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")) ‑> [Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), ?space:[String](../string "String - The basic String class.")):[String](../string "String - The basic String class.")`
Encodes the given `value` and returns the resulting JSON string.
If `replacer` is given and is not null, it is used to retrieve the actual object to be encoded. The `replacer` function takes two parameters, the key and the value being encoded. Initial key value is an empty string.
If `space` is given and is not null, the result will be pretty-printed. Successive levels will be indented by this string.
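A minimal round-trip sketch (field names are illustrative):

```haxe
var data = {name: "haxe", tags: ["json", "std"]};
var s = haxe.Json.stringify(data, null, "  "); // pretty-printed, two-space indent
var parsed = haxe.Json.parse(s);
trace(parsed.name); // "haxe"
```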
See also:
* <https://haxe.org/manual/std-Json-encoding.html>

<https://api.haxe.org/haxe/Json.html>

MainEvent
==========
package [haxe](index)
import [haxe.MainLoop](mainloop)
*Available on all platforms*
Variables
---------
### `[isBlocking](#isBlocking):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.") = true`
Tells if the event can lock the process from exiting (default: `true`)
### `read only[nextRun](#nextRun):[Float](../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
### `read only[priority](#priority):[Int](../int "Int - The standard Int type.")`
Methods
-------
### `inline[call](#call)():[Void](../void "Void - The standard Void type.")`
Call the event. Will do nothing if the event has been stopped.
### `[delay](#delay)(t:[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Float](../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")>):[Void](../void "Void - The standard Void type.")`
Delay the execution of the event for the given time, in seconds. If `t` is `null`, the event will be run at `tick()` time.
### `[stop](#stop)():[Void](../void "Void - The standard Void type.")`
Stop the event from firing again.
<https://api.haxe.org/haxe/MainEvent.html>

NotVoid([Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."))
===============================================================================================================
package [haxe](index)
import [haxe.Constraints](constraints)
*Available on all platforms*
This type unifies with anything but `[Void](../void)`.
It is intended to be used as a type parameter constraint. If used as a real type, the underlying type will be `[Dynamic](../dynamic)`.
<https://api.haxe.org/haxe/NotVoid.html>

Serializer
===========
package [haxe](index)
*Available on all platforms*
The Serializer class can be used to encode values and objects into a `[String](../string)`, from which the `[Unserializer](unserializer#Unserializer)` class can recreate the original representation.
This class can be used in two ways:
* create a `new [Serializer](serializer#Serializer)()` instance, call its `serialize()` method with any argument and finally retrieve the String representation from `toString()`
* call `[Serializer.run](serializer#run)()` to obtain the serialized representation of a single argument
Serialization is guaranteed to work for all haxe-defined classes, but may or may not work for instances of external/native classes.
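A round-trip sketch using the convenience methods together with `Unserializer`:

```haxe
var encoded = haxe.Serializer.run([1, 2, 3]);
var decoded:Array<Int> = haxe.Unserializer.run(encoded);
trace(decoded); // restores [1, 2, 3]
```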
The specification of the serialization format can be found here: <https://haxe.org/manual/std-serialization-format.html>

Static variables
----------------
### `static[USE_CACHE](#USE_CACHE):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.") = false`
If the values you are serializing can contain circular references or objects repetitions, you should set `USE_CACHE` to true to prevent infinite loops.
This may also reduce the size of serialization Strings at the expense of performance.
This value can be changed for individual instances of `[Serializer](serializer#Serializer)` by setting their `useCache` field.
### `static[USE_ENUM_INDEX](#USE_ENUM_INDEX):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.") = false`
Use constructor indexes for enums instead of names.
This may reduce the size of serialization Strings, but makes them less suited for long-term storage: If constructors are removed or added from the enum, the indices may no longer match.
This value can be changed for individual instances of `[Serializer](serializer#Serializer)` by setting their `useEnumIndex` field.
Static methods
--------------
### `static[run](#run)(v:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[String](../string "String - The basic String class.")`
Serializes `v` and returns the String representation.
This is a convenience function for creating a new instance of Serializer, serializing `v` into it and obtaining the result through a call to `toString()`.
Constructor
-----------
### `[new](#new)()`
Creates a new Serializer instance.
Subsequent calls to `this.[serialize](#serialize)` will append values to the internal buffer of this String. Once complete, the contents can be retrieved through a call to `this.[toString](#toString)`.
Each `[Serializer](serializer#Serializer)` instance maintains its own cache if `this.[useCache](#useCache)` is `[true](../bool)`.
Variables
---------
### `[useCache](#useCache):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
The individual cache setting for `this` Serializer instance.
See `USE_CACHE` for a complete description.
### `[useEnumIndex](#useEnumIndex):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
The individual enum index setting for `this` Serializer instance.
See `USE_ENUM_INDEX` for a complete description.
Methods
-------
### `[serialize](#serialize)(v:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Void](../void "Void - The standard Void type.")`
Serializes `v`.
All haxe-defined values and objects with the exception of functions can be serialized. Serialization of external/native objects is not guaranteed to work.
The values of `this.[useCache](#useCache)` and `this.[useEnumIndex](#useEnumIndex)` may affect serialization output.
### `[serializeException](#serializeException)(e:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Void](../void "Void - The standard Void type.")`
### `[toString](#toString)():[String](../string "String - The basic String class.")`
Return the String representation of `this` Serializer.
The exact format specification can be found here: https://haxe.org/manual/serialization/format
<https://api.haxe.org/haxe/Serializer.html>

Unserializer
=============
package [haxe](index)
*Available on all platforms*
The `[Unserializer](unserializer#Unserializer)` class is the complement to the `[Serializer](serializer#Serializer)` class. It parses a serialization `[String](../string)` and creates objects from the contained data.
This class can be used in two ways:
* create a `new [Unserializer](unserializer#Unserializer)()` instance with a given serialization String, then call its `unserialize()` method until all values are extracted
* call `[Unserializer.run](unserializer#run)()` to unserialize a single value from a given String
The specification of the serialization format can be found here: <https://haxe.org/manual/serialization/format>

Static variables
----------------
### `static[DEFAULT_RESOLVER](#DEFAULT_RESOLVER):TypeResolver = new DefaultResolver()`
This value can be set to use custom type resolvers.
A type resolver finds a `[Class](../class)` or `[Enum](../enum)` instance from a given `[String](../string)`. By default, the Haxe `[Type](../type)` Api is used.
A type resolver must provide two methods:
1. `resolveClass(name:[String](../string)):[Class](../class)<[Dynamic](../dynamic)>` is called to determine a `Class` from a class name
2. `resolveEnum(name:[String](../string)):[Enum](../enum)<[Dynamic](../dynamic)>` is called to determine an `Enum` from an enum name
This value is applied when a new `[Unserializer](unserializer#Unserializer)` instance is created. Changing it afterwards has no effect on previously created instances.
Static methods
--------------
### `static[run](#run)(v:[String](../string "String - The basic String class.")):[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
Unserializes `v` and returns the according value.
This is a convenience function for creating a new instance of Unserializer with `v` as buffer and calling its `unserialize()` method once.
Constructor
-----------
### `[new](#new)(buf:[String](../string "String - The basic String class."))`
Creates a new Unserializer instance, with its internal buffer initialized to `buf`.
This does not parse `buf` immediately. It is parsed only when calls to `this.[unserialize](#unserialize)` are made.
Each Unserializer instance maintains its own cache.
Methods
-------
### `[getResolver](#getResolver)():TypeResolver`
Gets the type resolver of `this` Unserializer instance.
See `DEFAULT_RESOLVER` for more information on type resolvers.
### `[setResolver](#setResolver)(r:TypeResolver):[Void](../void "Void - The standard Void type.")`
Sets the type resolver of `this` Unserializer instance to `r`.
If `r` is `null`, a special resolver is used which returns `null` for all input values.
See `DEFAULT_RESOLVER` for more information on type resolvers.
### `[unserialize](#unserialize)():[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
Unserializes the next part of `this` Unserializer instance and returns the according value.
This function may call `this.[resolver](#resolver).resolveClass` to determine a Class from a String, and `this.[resolver](#resolver).resolveEnum` to determine an Enum from a String.
If `this` Unserializer instance contains no more or invalid data, an exception is thrown.
This operation may fail on structurally valid data if a type cannot be resolved or if a field cannot be set. This can happen when unserializing Strings that were serialized on a different Haxe target, in which the serialization side has to make sure not to include platform-specific data.
Classes are created from `[Type.createEmptyInstance](../type#createEmptyInstance)`, which means their constructors are not called.
<https://api.haxe.org/haxe/Unserializer.html>

StackItem
==========
package [haxe](index)
import [haxe.CallStack](callstack)
*Available on all platforms*
Elements returned by `[CallStack](callstack#CallStack)` methods.
Values
------
### `CFunction`
### `Module(m:[String](../string "String - The basic String class."))`
### `FilePos(s:[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[StackItem](stackitem "haxe.StackItem - Elements return by CallStack methods.")>, file:[String](../string "String - The basic String class."), line:[Int](../int "Int - The standard Int type."), column:[Int](../int "Int - The standard Int type."))`
### `Method(classname:[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../string "String - The basic String class.")>, method:[String](../string "String - The basic String class."))`
### `LocalFunction(v:[Int](../int "Int - The standard Int type."))`
<https://api.haxe.org/haxe/StackItem.html>

SysTools
=========
package [haxe](index)
*Available on all platforms*
Static variables
----------------
### `static final read only[winMetaCharacters](#winMetaCharacters):[ReadOnlyArray](ds/readonlyarray "haxe.ds.ReadOnlyArray - ReadOnlyArray is an abstract over an ordinary Array which only exposes APIs that don't modify the instance, hence \"read-only\".")<[Int](../int "Int - The standard Int type.")> = [" ".code, "(".code, ")".code, "%".code, "!".code, "^".code, "\"".code, "<".code, ">".code, "&".code, "|".code, "\n".code, "\r".code, ",".code, ";".code]`
Character codes of the characters that will be escaped by `quoteWinArg(_, [true](../bool))`.
Static methods
--------------
### `static[quoteUnixArg](#quoteUnixArg)(argument:[String](../string "String - The basic String class.")):[String](../string "String - The basic String class.")`
Returns a String that can be used as a single command line argument on Unix. The input will be quoted, or escaped if necessary.
### `static[quoteWinArg](#quoteWinArg)(argument:[String](../string "String - The basic String class."), escapeMetaCharacters:[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")):[String](../string "String - The basic String class.")`
Returns a String that can be used as a single command line argument on Windows. The input will be quoted, or escaped if necessary, such that the output will be parsed as a single argument using the rule specified in http://msdn.microsoft.com/en-us/library/ms880421
Examples:
```
quoteWinArg("abc") == "abc";
quoteWinArg("ab c") == '"ab c"';
```
<https://api.haxe.org/haxe/SysTools.html>

Template
=========
package [haxe](index)
*Available on all platforms*
`[Template](template#Template)` provides a basic templating mechanism to replace values in a source String, and to have some basic logic.
A complete documentation of the supported syntax is available at: <https://haxe.org/manual/std-template.html>

Static variables
----------------
### `static[globals](#globals):[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.") = { }`
Global replacements which are used across all `[Template](template#Template)` instances. This has lower priority than the context argument of `execute()`.
Constructor
-----------
### `[new](#new)(str:[String](../string "String - The basic String class."))`
Creates a new `[Template](template#Template)` instance from `str`.
`str` is parsed into tokens, which are stored for internal use. This means that multiple `execute()` operations on a single `[Template](template#Template)` instance are more efficient than one `execute()` operation on multiple `[Template](template#Template)` instances.
If `str` is `null`, the result is unspecified.
Methods
-------
### `[execute](#execute)(context:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), ?macros:[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[String](../string "String - The basic String class.")`
Executes `this` `[Template](template#Template)`, taking into account `context` for replacements and `macros` for callback functions.
If `context` has a field `name`, its value replaces all occurrences of `::name::` in the `[Template](template#Template)`. Otherwise `[Template.globals](template#globals)` is checked instead. If `name` is not a field of that either, `::name::` is replaced with `null`.
If `macros` has a field `name`, all occurrences of `$$name(args)` are replaced with the result of calling that field. The first argument is always the `resolve()` method, followed by the given arguments. If `macros` has no such field, the result is unspecified.
If `context` is `null`, the result is unspecified. If `macros` is `null`, no macros are used.
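As a minimal sketch of the above, the snippet below shows a context replacement together with a `[Template.globals](template#globals)` fallback (the field names `appName` and `user` are illustrative):

```
// `appName` is resolved through Template.globals,
// `user` through the context passed to execute().
haxe.Template.globals = {appName: "Demo"};
var tpl = new haxe.Template("::appName:: says hello to ::user::.");
trace(tpl.execute({user: "World"})); // Demo says hello to World.
```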
<https://api.haxe.org/haxe/Template.html>

Timer
=====
package [haxe](index)
*Available on all platforms*
The `[Timer](timer#Timer)` class allows you to create asynchronous timers on platforms that support events.
The intended usage is to create an instance of the `[Timer](timer#Timer)` class with a given interval, set its `run()` method to a custom function to be invoked and eventually call `stop()` to stop the `[Timer](timer#Timer)`.
Note that a running `[Timer](timer#Timer)` may or may not prevent the program from exiting automatically when `main()` returns.
It is also possible to extend this class and override its `run()` method in the child class.
Static methods
--------------
### `static[delay](#delay)(f:() ‑> [Void](../void "Void - The standard Void type."), time_ms:[Int](../int "Int - The standard Int type.")):[Timer](timer "haxe.Timer - The Timer class allows you to create asynchronous timers on platforms that support events.")`
Invokes `f` after `time_ms` milliseconds.
This is a convenience function for creating a new Timer instance with `time_ms` as argument, binding its `run()` method to `f` and then stopping `this` Timer upon the first invocation.
If `f` is `null`, the result is unspecified.
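A one-shot timer can thus be sketched as follows (on sys targets without a running event loop, the delayed call may require one to be entered):

```
// Invokes the function once, roughly one second from now,
// then stops the underlying Timer automatically.
haxe.Timer.delay(function() trace("fired after ~1 second"), 1000);
```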
### `static[measure](#measure)<T>(f:() ‑> T, ?pos:[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[PosInfos](posinfos "haxe.PosInfos - PosInfos is a magic type which can be used to generate position information into the output for debugging use.")>):T`
Measures the time it takes to execute `f`, in seconds with fractions.
This is a convenience function for calculating the difference between `[Timer.stamp](timer#stamp)()` before and after the invocation of `f`.
The difference is passed as argument to `[Log.trace](log#trace)()`, with `"s"` appended to denote the unit. The optional `pos` argument is passed through.
If `f` is `null`, the result is unspecified.
### `staticinline[stamp](#stamp)():[Float](../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns a timestamp, in seconds with fractions.
The value itself might differ depending on platforms, only differences between two values make sense.
Constructor
-----------
### `[new](#new)(time_ms:[Int](../int "Int - The standard Int type."))`
Creates a new timer that will run every `time_ms` milliseconds.
After creating the Timer instance, it calls `this.[run](#run)` repeatedly, with delays of `time_ms` milliseconds, until `this.[stop](#stop)` is called.
The first invocation occurs after `time_ms` milliseconds, not immediately.
The accuracy of this may be platform-dependent.
Methods
-------
### `dynamic[run](#run)():[Void](../void "Void - The standard Void type.")`
This method is invoked repeatedly on `this` Timer.
It can be overridden in a subclass, or rebound directly to a custom function:
```
var timer = new haxe.Timer(1000); // 1000ms delay
timer.run = function() { ... }
```
Once bound, it can still be rebound to different functions until `this` Timer is stopped through a call to `this.[stop](#stop)`.
### `[stop](#stop)():[Void](../void "Void - The standard Void type.")`
Stops `this` Timer.
After calling this method, no additional invocations of `this.[run](#run)` will occur.
It is not possible to restart `this` Timer once stopped.
<https://api.haxe.org/haxe/Timer.html>

PosInfos
========
package [haxe](index)
*Available on all platforms*
`[PosInfos](posinfos#PosInfos)` is a magic type which can be used to generate position information into the output for debugging use.
If a function has a final optional argument of this type, i.e. `(..., ?pos:[haxe.PosInfos](posinfos#PosInfos))`, each call to that function which does not assign a value to that argument has its position added as call argument.
This can be used to track positions of calls in e.g. a unit testing framework.
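A sketch of the call-site mechanism; `assertTrue` is a hypothetical helper, not part of the standard library:

```
// Hypothetical assertion helper: `pos` is filled in automatically
// with the caller's position whenever the argument is omitted.
function assertTrue(cond:Bool, ?pos:haxe.PosInfos):Void {
    if (!cond)
        trace('Assertion failed in ${pos.className}.${pos.methodName} '
            + 'at ${pos.fileName}:${pos.lineNumber}');
}

assertTrue(1 + 1 == 3); // reports this call's file name and line number
```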
Fields
------
### `[methodName](#methodName):[String](../string "String - The basic String class.")`
### `[lineNumber](#lineNumber):[Int](../int "Int - The standard Int type.")`
### `[fileName](#fileName):[String](../string "String - The basic String class.")`
### `optional[customParams](#customParams):[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../array "Array")<[Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
### `[className](#className):[String](../string "String - The basic String class.")`
<https://api.haxe.org/haxe/PosInfos.html>

Resource
========
package [haxe](index)
*Available on all platforms*
Resource can be used to access resources that were added through the `--resource file@name` command line parameter.
Depending on their type they can be obtained as `[String](../string)` through `getString(name)`, or as binary data through `getBytes(name)`.
A list of all available resource names can be obtained from `listNames()`.
Static methods
--------------
### `static[getBytes](#getBytes)(name:[String](../string "String - The basic String class.")):[Bytes](io/bytes "haxe.io.Bytes")`
Retrieves the resource identified by `name` as an instance of haxe.io.Bytes.
If `name` does not match any resource name, `null` is returned.
### `static[getString](#getString)(name:[String](../string "String - The basic String class.")):[String](../string "String - The basic String class.")`
Retrieves the resource identified by `name` as a `[String](../string)`.
If `name` does not match any resource name, `null` is returned.
### `static[listNames](#listNames)():[Array](../array "Array")<[String](../string "String - The basic String class.")>`
Lists all available resource names. The resource name is the name part of the `--resource file@name` command line parameter.
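For example, assuming the program was compiled with a resource named `welcome` (the file and name below are illustrative):

```
// Assumes compilation with: --resource hello.txt@welcome
for (name in haxe.Resource.listNames())
    trace(name); // welcome

var text = haxe.Resource.getString("welcome");
if (text == null)
    trace("no resource named 'welcome' was embedded");
else
    trace(text); // contents of hello.txt
```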
<https://api.haxe.org/haxe/Resource.html>

Ucs2([String](../string "String - The basic String class."))
=============================================================
package [haxe](index)
*Available on all platforms*
Cross platform UCS2 string API.
Static methods
--------------
### `staticinline[fromCharCode](#fromCharCode)(code:[Int](../int "Int - The standard Int type.")):[Ucs2](ucs2 "haxe.Ucs2 - Cross platform UCS2 string API.")`
Returns the Ucs2 corresponding to the character code `code`.
If `code` is negative or has another invalid value, the result is unspecified.
Variables
---------
### `read only[length](#length):[Int](../int "Int - The standard Int type.")`
Methods
-------
### `inline[charAt](#charAt)(index:[Int](../int "Int - The standard Int type.")):[Ucs2](ucs2 "haxe.Ucs2 - Cross platform UCS2 string API.")`
Returns the character at position `index` of `this` Ucs2.
If `index` is negative or exceeds `this.[length](#length)`, the empty Ucs2 "" is returned.
### `inline[charCodeAt](#charCodeAt)(index:[Int](../int "Int - The standard Int type.")):[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Int](../int "Int - The standard Int type.")>`
Returns the character code at position `index` of `this` Ucs2.
If `index` is negative or exceeds `this.[length](#length)`, `null` is returned.
To obtain the character code of a single character, `"x".code` can be used instead to inline the character code at compile time. Note that this only works on Ucs2 literals of length 1.
### `inline[indexOf](#indexOf)(str:[Ucs2](ucs2 "haxe.Ucs2 - Cross platform UCS2 string API."), ?startIndex:[Int](../int "Int - The standard Int type.")):[Int](../int "Int - The standard Int type.")`
Returns the position of the leftmost occurrence of `str` within `this` Ucs2.
If `startIndex` is given, the search is performed within the substring of `this` Ucs2 starting from `startIndex`. Otherwise the search is performed within `this` Ucs2. In either case, the returned position is relative to the beginning of `this` Ucs2.
If `str` cannot be found, -1 is returned.
### `inline[lastIndexOf](#lastIndexOf)(str:[Ucs2](ucs2 "haxe.Ucs2 - Cross platform UCS2 string API."), ?startIndex:[Int](../int "Int - The standard Int type.")):[Int](../int "Int - The standard Int type.")`
Returns the position of the rightmost occurrence of `str` within `this` Ucs2.
If `startIndex` is given, the search is performed within the substring of `this` Ucs2 from 0 to `startIndex`. Otherwise the search is performed within `this` Ucs2. In either case, the returned position is relative to the beginning of `this` Ucs2.
If `str` cannot be found, -1 is returned.
### `inline[split](#split)(delimiter:[Ucs2](ucs2 "haxe.Ucs2 - Cross platform UCS2 string API.")):[Array](../array "Array")<[Ucs2](ucs2 "haxe.Ucs2 - Cross platform UCS2 string API.")>`
Splits `this` Ucs2 at each occurrence of `delimiter`.
If `this` Ucs2 is the empty Ucs2 "", the result is not consistent across targets and may either be `[]` (on Js, Cpp) or `[""]`.
If `delimiter` is the empty Ucs2 "", `this` Ucs2 is split into an Array of `this.[length](#length)` elements, where the elements correspond to the characters of `this` Ucs2.
If `delimiter` is not found within `this` Ucs2, the result is an Array with one element, which equals `this` Ucs2.
If `delimiter` is null, the result is unspecified.
Otherwise, `this` Ucs2 is split into parts at each occurrence of `delimiter`. If `this` Ucs2 starts (or ends) with `delimiter`, the result Array contains a leading (or trailing) empty Ucs2 "" element. Two subsequent delimiters also result in an empty Ucs2 "" element.
### `inline[substr](#substr)(pos:[Int](../int "Int - The standard Int type."), ?len:[Int](../int "Int - The standard Int type.")):[Ucs2](ucs2 "haxe.Ucs2 - Cross platform UCS2 string API.")`
Returns `len` characters of `this` Ucs2, starting at position `pos`.
If `len` is omitted, all characters from position `pos` to the end of `this` Ucs2 are included.
If `pos` is negative, its value is calculated from the end of `this` Ucs2 by `this.[length](#length) + pos`. If this yields a negative value, 0 is used instead.
If the calculated position + `len` exceeds `this.[length](#length)`, the characters from that position to the end of `this` Ucs2 are returned.
If `len` is negative, the result is unspecified.
### `inline[substring](#substring)(startIndex:[Int](../int "Int - The standard Int type."), ?endIndex:[Int](../int "Int - The standard Int type.")):[Ucs2](ucs2 "haxe.Ucs2 - Cross platform UCS2 string API.")`
Returns the part of `this` Ucs2 from `startIndex` to `endIndex`.
If `startIndex` or `endIndex` are negative, 0 is used instead.
If `startIndex` exceeds `endIndex`, they are swapped.
If the (possibly swapped) `endIndex` is omitted or exceeds `this.[length](#length)`, `this.[length](#length)` is used instead.
If the (possibly swapped) `startIndex` exceeds `this.[length](#length)`, the empty Ucs2 "" is returned.
### `inline[toLowerCase](#toLowerCase)():[Ucs2](ucs2 "haxe.Ucs2 - Cross platform UCS2 string API.")`
Returns a Ucs2 where all characters of `this` Ucs2 are lower case.
Affects the characters `A-Z`. Other characters remain unchanged.
### `inline[toNativeString](#toNativeString)():[String](../string "String - The basic String class.")`
Returns the native underlying String.
### `inline[toUpperCase](#toUpperCase)():[Ucs2](ucs2 "haxe.Ucs2 - Cross platform UCS2 string API.")`
Returns a Ucs2 where all characters of `this` Ucs2 are upper case.
Affects the characters `a-z`. Other characters remain unchanged.
<https://api.haxe.org/haxe/Ucs2.html>

Utf8
====
package [haxe](index)
**Deprecated:** "haxe.Utf8 is deprecated. Use UnicodeString instead."
*Available on all platforms*
Since not all platforms guarantee that `[String](../string)` always uses UTF-8 encoding, you can use this cross-platform API to perform operations on such strings.
Static methods
--------------
### `static[charCodeAt](#charCodeAt)(s:[String](../string "String - The basic String class."), index:[Int](../int "Int - The standard Int type.")):[Int](../int "Int - The standard Int type.")`
Similar to `[String.charCodeAt](../string#charCodeAt)` but uses the UTF8 character position.
### `static[compare](#compare)(a:[String](../string "String - The basic String class."), b:[String](../string "String - The basic String class.")):[Int](../int "Int - The standard Int type.")`
Compare two UTF8 strings, character by character.
### `static[decode](#decode)(s:[String](../string "String - The basic String class.")):[String](../string "String - The basic String class.")`
Decode an UTF8 string back to an ISO string. Throw an exception if a given UTF8 character is not supported by the decoder.
### `static[encode](#encode)(s:[String](../string "String - The basic String class.")):[String](../string "String - The basic String class.")`
Encode the input ISO string into the corresponding UTF8 one.
### `static[iter](#iter)(s:[String](../string "String - The basic String class."), chars:[Int](../int "Int - The standard Int type.") ‑> [Void](../void "Void - The standard Void type.")):[Void](../void "Void - The standard Void type.")`
Call the `chars` function for each UTF8 char of the string.
### `static[length](#length)(s:[String](../string "String - The basic String class.")):[Int](../int "Int - The standard Int type.")`
Returns the number of UTF8 chars of the String.
### `static[sub](#sub)(s:[String](../string "String - The basic String class."), pos:[Int](../int "Int - The standard Int type."), len:[Int](../int "Int - The standard Int type.")):[String](../string "String - The basic String class.")`
This is similar to `[String.substr](../string#substr)` but the `pos` and `len` parts are considering UTF8 characters.
### `static[validate](#validate)(s:[String](../string "String - The basic String class.")):[Bool](../bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if the String is correctly encoded as UTF8.
Constructor
-----------
### `[new](#new)(?size:[Int](../int "Int - The standard Int type."))`
Allocate a new Utf8 buffer using an optional bytes size.
Methods
-------
### `[addChar](#addChar)(c:[Int](../int "Int - The standard Int type.")):[Void](../void "Void - The standard Void type.")`
Add the given UTF8 character code to the buffer.
### `[toString](#toString)():[String](../string "String - The basic String class.")`
Returns the buffer converted to a String.
<https://api.haxe.org/haxe/Utf8.html>

ValueException
==============
package [haxe](index)
extends [Exception](exception "haxe.Exception - Base class for exceptions.")
*Available on all platforms*
An exception containing arbitrary value.
This class is automatically used for throwing values that don't extend `[haxe.Exception](exception#Exception)` or a native exception type. For example:
```
throw "Terrible error";
```
will be compiled to
```
throw new ValueException("Terrible error");
```
Constructor
-----------
### `[new](#new)(value:[Any](../any "Any - Any is a type that is compatible with any other in both ways."), ?previous:[Exception](exception "haxe.Exception - Base class for exceptions."), ?native:[Any](../any "Any - Any is a type that is compatible with any other in both ways."))`
Variables
---------
### `read only[value](#value):[Any](../any "Any - Any is a type that is compatible with any other in both ways.")`
Thrown value.
Methods
-------
### `[unwrap](#unwrap)():[Any](../any "Any - Any is a type that is compatible with any other in both ways.")`
*Available on cs*
Extract an originally thrown value.
This method must return the same value on subsequent calls. Used internally for catching non-native exceptions. Do *not* override unless you know what you are doing.
<https://api.haxe.org/haxe/ValueException.html>

Config
======
package [mbedtls](index)
*Available on macro*
Constructor
-----------
### `[new](#new)()`
Methods
-------
### `[authmode](#authmode)(authmode:[SslAuthmode](sslauthmode "mbedtls.SslAuthmode")):[Void](../void "Void - The standard Void type.")`
### `[ca_chain](#ca_chain)(ca_chain:[X509Crt](x509crt "mbedtls.X509Crt")):[Void](../void "Void - The standard Void type.")`
### `[defaults](#defaults)(endpoint:[SslEndpoint](sslendpoint "mbedtls.SslEndpoint"), transport:[SslTransport](ssltransport "mbedtls.SslTransport"), preset:[SslPreset](sslpreset "mbedtls.SslPreset")):[Int](../int "Int - The standard Int type.")`
### `[rng](#rng)<T>(p_rng:T):[Void](../void "Void - The standard Void type.")`
<https://api.haxe.org/mbedtls/Config.html>

CtrDrbg
=======
package [mbedtls](index)
*Available on macro*
Constructor
-----------
### `[new](#new)()`
Methods
-------
### `[random](#random)(output:[Bytes](../haxe/io/bytes "haxe.io.Bytes"), output_len:[Int](../int "Int - The standard Int type.")):[Int](../int "Int - The standard Int type.")`
### `[seed](#seed)(entropy:[Entropy](entropy "mbedtls.Entropy"), ?custom:[String](../string "String - The basic String class.")):[Int](../int "Int - The standard Int type.")`
<https://api.haxe.org/mbedtls/CtrDrbg.html>

Entropy
=======
package [mbedtls](index)
*Available on macro*
Constructor
-----------
### `[new](#new)()`
<https://api.haxe.org/mbedtls/Entropy.html>

Error
=====
package [mbedtls](index)
*Available on macro*
Static methods
--------------
### `static[strerror](#strerror)(code:[Int](../int "Int - The standard Int type.")):[String](../string "String - The basic String class.")`
<https://api.haxe.org/mbedtls/Error.html>

PkContext
=========
package [mbedtls](index)
*Available on macro*
Constructor
-----------
### `[new](#new)()`
Methods
-------
### `[parse_key](#parse_key)(key:[Bytes](../haxe/io/bytes "haxe.io.Bytes"), ?pwd:[String](../string "String - The basic String class.")):[Int](../int "Int - The standard Int type.")`
### `[parse_keyfile](#parse_keyfile)(path:[String](../string "String - The basic String class."), ?password:[String](../string "String - The basic String class.")):[Int](../int "Int - The standard Int type.")`
### `[parse_public_key](#parse_public_key)(key:[Bytes](../haxe/io/bytes "haxe.io.Bytes")):[Int](../int "Int - The standard Int type.")`
### `[parse_public_keyfile](#parse_public_keyfile)(path:[String](../string "String - The basic String class.")):[Int](../int "Int - The standard Int type.")`
<https://api.haxe.org/mbedtls/PkContext.html>

Ssl
===
package [mbedtls](index)
*Available on macro*
Constructor
-----------
### `[new](#new)()`
Methods
-------
### `[get_peer_cert](#get_peer_cert)():[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[X509Crt](x509crt "mbedtls.X509Crt")>`
### `[handshake](#handshake)():[Int](../int "Int - The standard Int type.")`
### `[read](#read)(buf:[Bytes](../haxe/io/bytes "haxe.io.Bytes"), pos:[Int](../int "Int - The standard Int type."), len:[Int](../int "Int - The standard Int type.")):[Int](../int "Int - The standard Int type.")`
### `[set_hostname](#set_hostname)(hostname:[String](../string "String - The basic String class.")):[Int](../int "Int - The standard Int type.")`
### `[setup](#setup)(conf:[Config](config "mbedtls.Config")):[Int](../int "Int - The standard Int type.")`
### `[write](#write)(buf:[Bytes](../haxe/io/bytes "haxe.io.Bytes"), pos:[Int](../int "Int - The standard Int type."), len:[Int](../int "Int - The standard Int type.")):[Int](../int "Int - The standard Int type.")`
<https://api.haxe.org/mbedtls/Ssl.html>

SslAuthmode([Int](../int "Int - The standard Int type."))
==========================================================
package [mbedtls](index)
*Available on macro*
Variables
---------
### `read only[SSL_VERIFY_NONE](#SSL_VERIFY_NONE):[SslAuthmode](sslauthmode "mbedtls.SslAuthmode")`
### `read only[SSL_VERIFY_OPTIONAL](#SSL_VERIFY_OPTIONAL):[SslAuthmode](sslauthmode "mbedtls.SslAuthmode")`
### `read only[SSL_VERIFY_REQUIRED](#SSL_VERIFY_REQUIRED):[SslAuthmode](sslauthmode "mbedtls.SslAuthmode")`
<https://api.haxe.org/mbedtls/SslAuthmode.html>

FlatEnum([Dynamic](../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."))
================================================================================================================
package [haxe](index)
import [haxe.Constraints](constraints)
*Available on all platforms*
This type unifies with an enum instance if all constructors of the enum require no arguments.
It is intended to be used as a type parameter constraint. If used as a real type, the underlying type will be `[Dynamic](../dynamic)`.
<https://api.haxe.org/haxe/FlatEnum.html>

V8CallSite
==========
package [haxe](index)
import [haxe.NativeStackTrace](nativestacktrace)
*Available on js*
Fields
------
### `[getLineNumber](#getLineNumber)():[Int](../int "Int - The standard Int type.")`
### `[getFunctionName](#getFunctionName)():[String](../string "String - The basic String class.")`
### `[getFileName](#getFileName)():[String](../string "String - The basic String class.")`
### `[getColumnNumber](#getColumnNumber)():[Int](../int "Int - The standard Int type.")`
<https://api.haxe.org/haxe/V8CallSite.html>

SslEndpoint([Int](../int "Int - The standard Int type."))
==========================================================
package [mbedtls](index)
*Available on macro*
Variables
---------
### `read only[SSL_IS_CLIENT](#SSL_IS_CLIENT):[SslEndpoint](sslendpoint "mbedtls.SslEndpoint")`
### `read only[SSL_IS_SERVER](#SSL_IS_SERVER):[SslEndpoint](sslendpoint "mbedtls.SslEndpoint")`
<https://api.haxe.org/mbedtls/SslEndpoint.html>

SslPreset([Int](../int "Int - The standard Int type."))
========================================================
package [mbedtls](index)
*Available on macro*
Variables
---------
### `read only[SSL_PRESET_DEFAULT](#SSL_PRESET_DEFAULT):[SslPreset](sslpreset "mbedtls.SslPreset")`
### `read only[SSL_PRESET_SUITEB](#SSL_PRESET_SUITEB):[SslPreset](sslpreset "mbedtls.SslPreset")`
<https://api.haxe.org/mbedtls/SslPreset.html>

SslTransport([Int](../int "Int - The standard Int type."))
===========================================================
package [mbedtls](index)
*Available on macro*
Variables
---------
### `read only[SSL_TRANSPORT_DATAGRAM](#SSL_TRANSPORT_DATAGRAM):[SslTransport](ssltransport "mbedtls.SslTransport")`
### `read only[SSL_TRANSPORT_STREAM](#SSL_TRANSPORT_STREAM):[SslTransport](ssltransport "mbedtls.SslTransport")`
<https://api.haxe.org/mbedtls/SslTransport.html>

X509Crt
=======
package [mbedtls](index)
*Available on macro*
Constructor
-----------
### `[new](#new)()`
Methods
-------
### `[next](#next)():[Null](../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[X509Crt](x509crt "mbedtls.X509Crt")>`
### `[parse](#parse)(buf:[Bytes](../haxe/io/bytes "haxe.io.Bytes")):[Int](../int "Int - The standard Int type.")`
### `[parse_file](#parse_file)(path:[String](../string "String - The basic String class.")):[Int](../int "Int - The standard Int type.")`
### `[parse_path](#parse_path)(path:[String](../string "String - The basic String class.")):[Int](../int "Int - The standard Int type.")`
<https://api.haxe.org/mbedtls/X509Crt.html>

ArrayIterator<T>
================
package [haxe.iterators](index)
extended by [ArrayDynIterator](https://api.haxe.org/hl/types/ArrayDynIterator.html "hl.types.ArrayDynIterator"), [ArrayObjIterator](https://api.haxe.org/hl/types/ArrayObjIterator.html "hl.types.ArrayObjIterator"), [BytesIterator](https://api.haxe.org/hl/types/BytesIterator.html "hl.types.BytesIterator")
*Available on all platforms*
This iterator is used only when `[Array](../../array)<T>` is passed to `[Iterable](../../iterable)<T>`.
Constructor
-----------
### `[new](#new)(array:[Array](../../array "Array")<T>)`
Create a new `[ArrayIterator](arrayiterator#ArrayIterator)`.
Methods
-------
### `[hasNext](#hasNext)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Iterator.hasNext](../../iterator#hasNext)`.
### `[next](#next)():T`
See `[Iterator.next](../../iterator#next)`.
<https://api.haxe.org/haxe/iterators/ArrayIterator.html>

ArrayKeyValueIterator<T>
=========================
package [haxe.iterators](index)
extended by [ArrayDynKeyValueIterator](https://api.haxe.org/hl/types/ArrayDynKeyValueIterator.html "hl.types.ArrayDynKeyValueIterator"), [ArrayObjKeyValueIterator](https://api.haxe.org/hl/types/ArrayObjKeyValueIterator.html "hl.types.ArrayObjKeyValueIterator"), [BytesKeyValueIterator](https://api.haxe.org/hl/types/BytesKeyValueIterator.html "hl.types.BytesKeyValueIterator")
*Available on all platforms*
Constructor
-----------
### `[new](#new)(array:[Array](../../array "Array")<T>)`
Methods
-------
### `[hasNext](#hasNext)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[next](#next)():{value:T, key:[Int](../../int "Int - The standard Int type.")}`
<https://api.haxe.org/haxe/iterators/ArrayKeyValueIterator.html>

ArraySort
=========
package [haxe.ds](index)
*Available on all platforms*
ArraySort provides a stable implementation of merge sort through its `sort` method. It should be used instead of `[Array.sort](../../array#sort)` in cases where the order of equal elements has to be retained on all targets.
Static methods
--------------
### `static[sort](#sort)<T>(a:[Array](../../array "Array")<T>, cmp:(T, T) ‑> [Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Sorts Array `a` according to the comparison function `cmp`, where `cmp(x,y)` returns 0 if `x == y`, a positive Int if `x > y` and a negative Int if `x < y`.
This operation modifies Array `a` in place.
This operation is stable: The order of equal elements is preserved.
If `a` or `cmp` are null, the result is unspecified.
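A small sketch of the stability guarantee (the element shape is illustrative):

```
var entries = [{k: "b", prio: 1}, {k: "a", prio: 1}, {k: "c", prio: 0}];
// cmp returns a negative/zero/positive Int, so Int subtraction works here.
haxe.ds.ArraySort.sort(entries, function(x, y) return x.prio - y.prio);
// Stable: "c" comes first (prio 0), and "b" stays before "a"
// because both have prio 1 and that was their original order.
trace([for (e in entries) e.k].join(",")); // c,b,a
```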
<https://api.haxe.org/haxe/ds/ArraySort.html>

List<T>
=======
package [haxe.ds](index)
*Available on all platforms*
A linked-list of elements. The list is composed of element container objects that are chained together. It is optimized so that adding or removing an element does not imply copying the whole list content every time.
See also:
* <https://haxe.org/manual/std-List.html>

Constructor
-----------
### `[new](#new)()`
Creates a new empty list.
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
The length of `this` List.
Methods
-------
### `[add](#add)(item:T):[Void](../../void "Void - The standard Void type.")`
Adds element `item` at the end of `this` List.
`this.[length](#length)` increases by 1.
### `[clear](#clear)():[Void](../../void "Void - The standard Void type.")`
Empties `this` List.
This function does not traverse the elements, but simply sets the internal references to null and `this.[length](#length)` to 0.
### `[filter](#filter)(f:T ‑> [Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")):[List](list "haxe.ds.List - A linked-list of elements.")<T>`
Returns a list filtered with `f`. The returned list will contain all elements for which `f(x) == [true](../../bool)`.
### `[first](#first)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
Returns the first element of `this` List, or null if no elements exist.
This function does not modify `this` List.
### `[isEmpty](#isEmpty)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `this` List is empty.
### `inline[iterator](#iterator)():ListIterator<T>`
Returns an iterator on the elements of the list.
### `[join](#join)(sep:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Returns a string representation of `this` List, with `sep` separating each element.
### `inline[keyValueIterator](#keyValueIterator)():ListKeyValueIterator<T>`
Returns an iterator of the List indices and values.
### `[last](#last)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
Returns the last element of `this` List, or null if no elements exist.
This function does not modify `this` List.
### `[map](#map)<X>(f:T ‑> X):[List](list "haxe.ds.List - A linked-list of elements.")<X>`
Returns a new list where all elements have been converted by the function `f`.
### `[pop](#pop)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
Returns the first element of `this` List, or null if no elements exist.
The element is removed from `this` List.
### `[push](#push)(item:T):[Void](../../void "Void - The standard Void type.")`
Adds element `item` at the beginning of `this` List.
`this.[length](#length)` increases by 1.
### `[remove](#remove)(v:T):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Removes the first occurrence of `v` in `this` List.
If `v` is found by checking standard equality, it is removed from `this` List and the function returns true.
Otherwise, false is returned.
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
Returns a string representation of `this` List.
The result is enclosed in { } with the individual elements being separated by a comma.
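A brief sketch of the methods above working together:

```
var l = new haxe.ds.List<Int>();
l.add(1);  // append:  contents are now 1
l.add(2);  // append:  1, 2
l.push(0); // prepend: 0, 1, 2
trace(l.first()); // 0
trace(l.last());  // 2
l.remove(1);      // removes the first occurrence of 1
trace(l.length);  // 2
trace(l.join("-")); // 0-2
```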
<https://api.haxe.org/haxe/ds/List.html>

Map<K, V>([IMap](../imap "haxe.IMap")<K, V>)
=============================================
package [haxe.ds](index)
*Available on all platforms*
Map allows key to value mapping for arbitrary value types, and many key types.
This is a multi-type abstract; it is instantiated as one of its specialization types depending on its type parameters.
A Map can be instantiated without explicit type parameters. Type inference will then determine the type parameters from the usage.
Maps can also be created with `[key1 => value1, key2 => value2]` syntax.
Map is an abstract type; it is not available at runtime.
See also:
* <https://haxe.org/manual/std-Map.html>

Methods
-------
### `inline[clear](#clear)():[Void](../../void "Void - The standard Void type.")`
Removes all keys from `this` Map.
### `inline[copy](#copy)():[Map](map "haxe.ds.Map - Map allows key to value mapping for arbitrary value types, and many key types.")<K, V>`
Returns a shallow copy of `this` map.
The order of values is undefined.
### `inline[exists](#exists)(key:K):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns true if `key` has a mapping, false otherwise.
If `key` is `null`, the result is unspecified.
### `inline[get](#get)(key:K):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<V>`
Returns the current mapping of `key`.
If no such mapping exists, `null` is returned.
Note that a check like `map.get(key) == null` can hold for two reasons:
1. the map has no mapping for `key`
2. the map has a mapping with a value of `null`
If it is important to distinguish these cases, `exists()` should be used.
If `key` is `null`, the result is unspecified.
### `inline[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<V>`
Returns an Iterator over the values of `this` Map.
The order of values is undefined.
### `inline[keyValueIterator](#keyValueIterator)():[KeyValueIterator](../../keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<K, V>`
Returns an Iterator over the keys and values of `this` Map.
The order of values is undefined.
### `inline[keys](#keys)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<K>`
Returns an Iterator over the keys of `this` Map.
The order of keys is undefined.
### `inline[remove](#remove)(key:K):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Removes the mapping of `key` and returns true if such a mapping existed, false otherwise.
If `key` is `null`, the result is unspecified.
### `inline[set](#set)(key:K, value:V):[Void](../../void "Void - The standard Void type.")`
Maps `key` to `value`.
If `key` already has a mapping, the previous value disappears.
If `key` is `null`, the result is unspecified.
### `inline[toString](#toString)():[String](../../string "String - The basic String class.")`
Returns a String representation of `this` Map.
The exact representation depends on the platform and key-type.
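A minimal sketch of the `get()`/`exists()` distinction described above (the `Main` wrapper and keys are illustrative):
```haxe
class Main {
static function main() {
var m = ["a" => 1, "b" => 2]; // map literal syntax
trace(m.get("a")); // 1
trace(m.get("c")); // null -- no mapping for "c"
// get() == null is ambiguous when values themselves can be null:
var n:Map<String, Null<Int>> = ["x" => null];
trace(n.get("x")); // null -- but a mapping exists
trace(n.exists("x")); // true -- exists() disambiguates
trace(n.exists("y")); // false
n.remove("x");
trace(n.exists("x")); // false
}
}
```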
<https://api.haxe.org/haxe/ds/Map.html>
Bytes
======
package [haxe.io](index)
*Available on all platforms*
Static methods
--------------
### `static[alloc](#alloc)(length:[Int](../../int "Int - The standard Int type.")):[Bytes](bytes "haxe.io.Bytes")`
Returns a new `[Bytes](bytes#Bytes)` instance with the given `length`. The values of the bytes are not initialized and may not be zero.
### `static[fastGet](#fastGet)(b:[BytesData](bytesdata "haxe.io.BytesData"), pos:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
Reads the `pos`-th byte of the given `b` bytes, in the most efficient way possible. Behavior when reading outside of the available data is unspecified.
### `static[ofData](#ofData)(b:[BytesData](bytesdata "haxe.io.BytesData")):[Bytes](bytes "haxe.io.Bytes")`
Returns the `[Bytes](bytes#Bytes)` representation of the given `[BytesData](bytesdata#BytesData)`.
### `static[ofHex](#ofHex)(s:[String](../../string "String - The basic String class.")):[Bytes](bytes "haxe.io.Bytes")`
Converts the given hexadecimal `[String](../../string)` to `[Bytes](bytes#Bytes)`. `s` must be a string of even length consisting only of hexadecimal digits. For example: `"0FDA14058916052309"`.
### `static[ofString](#ofString)(s:[String](../../string "String - The basic String class."), ?encoding:[Encoding](encoding "haxe.io.Encoding")):[Bytes](bytes "haxe.io.Bytes")`
Returns the `[Bytes](bytes#Bytes)` representation of the given `[String](../../string)`, using the specified encoding (UTF-8 by default).
Constructor
-----------
### `[new](#new)(length:[Int](../../int "Int - The standard Int type."), b:[BytesData](bytesdata "haxe.io.BytesData"))`
*Available on macro*
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
Methods
-------
### `[blit](#blit)(pos:[Int](../../int "Int - The standard Int type."), src:[Bytes](bytes "haxe.io.Bytes"), srcpos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Copies `len` bytes from `src` into this instance.
Parameters:
| | |
| --- | --- |
| `pos` | Zero-based location in `this` instance at which to start writing bytes. |
| `src` | Source `[Bytes](bytes#Bytes)` instance from which to copy bytes. |
| `srcpos` | Zero-based location at `src` from which bytes will be copied. |
| `len` | Number of bytes to be copied. |
### `[compare](#compare)(other:[Bytes](bytes "haxe.io.Bytes")):[Int](../../int "Int - The standard Int type.")`
Returns `0` if the bytes of `this` instance and the bytes of `other` are identical.
Returns a negative value if the `length` of `this` instance is less than the `length` of `other`, or a positive value if the `length` of `this` instance is greater than the `length` of `other`.
In case of equal `length`s, returns a negative value if the first different value in `other` is greater than the corresponding value in `this` instance; otherwise returns a positive value.
### `[fill](#fill)(pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type."), value:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Sets `len` consecutive bytes starting from index `pos` of `this` instance to `value`.
### `[get](#get)(pos:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
Returns the byte at index `pos`.
### `[getData](#getData)():[BytesData](bytesdata "haxe.io.BytesData")`
Returns the bytes of `this` instance as `[BytesData](bytesdata#BytesData)`.
### `[getDouble](#getDouble)(pos:[Int](../../int "Int - The standard Int type.")):[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the IEEE double-precision value at the given position `pos` (in little-endian encoding). Result is unspecified if `pos` is outside the bounds.
### `[getFloat](#getFloat)(pos:[Int](../../int "Int - The standard Int type.")):[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Returns the IEEE single-precision value at the given position `pos` (in little-endian encoding). Result is unspecified if `pos` is outside the bounds.
### `[getInt32](#getInt32)(pos:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
Returns the 32-bit integer at the given position `pos` (in little-endian encoding).
### `[getInt64](#getInt64)(pos:[Int](../../int "Int - The standard Int type.")):[Int64](../int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns the 64-bit integer at the given position `pos` (in little-endian encoding).
### `[getString](#getString)(pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type."), ?encoding:[Encoding](encoding "haxe.io.Encoding")):[String](../../string "String - The basic String class.")`
Returns the `len`-bytes long string stored at the given position `pos`, interpreted with the given `encoding` (UTF-8 by default).
### `[getUInt16](#getUInt16)(pos:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
Returns the 16-bit unsigned integer at the given position `pos` (in little-endian encoding).
### `[set](#set)(pos:[Int](../../int "Int - The standard Int type."), v:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Stores the given byte `v` at the given position `pos`.
### `[setDouble](#setDouble)(pos:[Int](../../int "Int - The standard Int type."), v:[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Void](../../void "Void - The standard Void type.")`
Stores the given IEEE double-precision value `v` at the given position `pos` in little-endian encoding. Result is unspecified if writing outside of bounds.
### `[setFloat](#setFloat)(pos:[Int](../../int "Int - The standard Int type."), v:[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Void](../../void "Void - The standard Void type.")`
Stores the given IEEE single-precision value `v` at the given position `pos` in little-endian encoding. Result is unspecified if writing outside of bounds.
### `[setInt32](#setInt32)(pos:[Int](../../int "Int - The standard Int type."), v:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Stores the given 32-bit integer `v` at the given position `pos` (in little-endian encoding).
### `[setInt64](#setInt64)(pos:[Int](../../int "Int - The standard Int type."), v:[Int64](../int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Void](../../void "Void - The standard Void type.")`
Stores the given 64-bit integer `v` at the given position `pos` (in little-endian encoding).
### `[setUInt16](#setUInt16)(pos:[Int](../../int "Int - The standard Int type."), v:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Stores the given 16-bit unsigned integer `v` at the given position `pos` (in little-endian encoding).
### `[sub](#sub)(pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Bytes](bytes "haxe.io.Bytes")`
Returns a new `[Bytes](bytes#Bytes)` instance that contains a copy of `len` bytes of `this` instance, starting at index `pos`.
### `[toHex](#toHex)():[String](../../string "String - The basic String class.")`
Returns a hexadecimal `[String](../../string)` representation of the bytes of `this` instance.
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
Returns a `[String](../../string)` representation of the bytes interpreted as UTF-8.
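A short sketch combining several of the accessors above (values and the `Main` wrapper are illustrative; note that `alloc` does not zero its contents):
```haxe
import haxe.io.Bytes;
class Main {
static function main() {
var b = Bytes.alloc(4); // contents are uninitialized
b.fill(0, 4, 0); // so zero them explicitly
b.set(0, 0xFF);
b.setUInt16(2, 0x1234); // little-endian: byte 2 = 0x34, byte 3 = 0x12
trace(b.toHex()); // ff003412
trace(b.get(0)); // 255
var tail = b.sub(2, 2); // copy of bytes 2..3
trace(tail.toHex()); // 3412
var s = Bytes.ofString("héllo"); // UTF-8 by default
trace(s.length); // 6 -- "é" takes two bytes
}
}
```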
<https://api.haxe.org/haxe/io/Bytes.html>
Encoding
=========
package [haxe.io](index)
*Available on all platforms*
String binary encoding supported by Haxe I/O
Values
------
### `UTF8`
### `RawNative`
Output the string the way the platform represents it in memory. This is the most efficient encoding, but it is platform-specific.
<https://api.haxe.org/haxe/io/Encoding.html>
StringIteratorUnicode
======================
package [haxe.iterators](index)
*Available on all platforms*
This iterator can be used to iterate across strings in a cross-platform way. It handles surrogate pairs on platforms that require it. On each iteration, it returns the next character code.
Note that this has different semantics than a standard for-loop over the String's length due to the fact that it deals with surrogate pairs.
Static methods
--------------
### `staticinline[unicodeIterator](#unicodeIterator)(s:[String](../../string "String - The basic String class.")):[StringIteratorUnicode](stringiteratorunicode "haxe.iterators.StringIteratorUnicode - This iterator can be used to iterate across strings in a cross-platform way.")`
Convenience function which can be used as a static extension.
Constructor
-----------
### `inline[new](#new)(s:[String](../../string "String - The basic String class."))`
Create a new `[StringIteratorUnicode](stringiteratorunicode#StringIteratorUnicode)` over String `s`.
Methods
-------
### `inline[hasNext](#hasNext)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### See `[Iterator.hasNext](../../iterator#hasNext)`
### `inline[next](#next)():[Int](../../int "Int - The standard Int type.")`
### See `[Iterator.next](../../iterator#next)`
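The surrogate-pair handling can be seen with a string containing a character outside the Basic Multilingual Plane (a sketch; the `Main` wrapper is illustrative):
```haxe
using haxe.iterators.StringIteratorUnicode;
class Main {
static function main() {
// unicodeIterator is usable as a static extension via `using`
for (code in "a𝔸".unicodeIterator()) {
// yields 97, then 120120 (U+1D538) -- the full code point,
// not two surrogate halves on UTF-16 platforms
trace(code);
}
}
}
```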
<https://api.haxe.org/haxe/iterators/StringIteratorUnicode.html>
StringKeyValueIteratorUnicode
==============================
package [haxe.iterators](index)
*Available on all platforms*
This iterator can be used to iterate across strings in a cross-platform way. It handles surrogate pairs on platforms that require it. On each iteration, it returns the next character offset as key and the next character code as value.
Note that in the general case, because of surrogate pairs, the key values should not be used as offsets for various String API operations. For the same reason, the last key value returned might be less than `s.length - 1`.
Static methods
--------------
### `staticinline[unicodeKeyValueIterator](#unicodeKeyValueIterator)(s:[String](../../string "String - The basic String class.")):[StringKeyValueIteratorUnicode](stringkeyvalueiteratorunicode "haxe.iterators.StringKeyValueIteratorUnicode - This iterator can be used to iterate across strings in a cross-platform way.")`
Convenience function which can be used as a static extension.
Constructor
-----------
### `inline[new](#new)(s:[String](../../string "String - The basic String class."))`
Create a new `[StringKeyValueIteratorUnicode](stringkeyvalueiteratorunicode#StringKeyValueIteratorUnicode)` over String `s`.
Methods
-------
### `inline[hasNext](#hasNext)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### See `[Iterator.hasNext](../../iterator#hasNext)`
### `inline[next](#next)():{value:[Int](../../int "Int - The standard Int type."), key:[Int](../../int "Int - The standard Int type.")}`
### See `[Iterator.next](../../iterator#next)`
<https://api.haxe.org/haxe/iterators/StringKeyValueIteratorUnicode.html>
Adler32
========
package [haxe.crypto](index)
*Available on all platforms*
Calculates the Adler32 of the given Bytes.
Static methods
--------------
### `static[make](#make)(b:[Bytes](../io/bytes "haxe.io.Bytes")):[Int](../../int "Int - The standard Int type.")`
### `static[read](#read)(i:[Input](../io/input "haxe.io.Input - An Input is an abstract reader.")):[Adler32](adler32 "haxe.crypto.Adler32 - Calculates the Adler32 of the given Bytes.")`
Constructor
-----------
### `[new](#new)()`
Methods
-------
### `[equals](#equals)(a:[Adler32](adler32 "haxe.crypto.Adler32 - Calculates the Adler32 of the given Bytes.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[get](#get)():[Int](../../int "Int - The standard Int type.")`
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
### `[update](#update)(b:[Bytes](../io/bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
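Both the one-shot and incremental forms can be sketched as follows (the input string and `Main` wrapper are illustrative; 0x11E60398 is the well-known Adler-32 of "Wikipedia"):
```haxe
import haxe.crypto.Adler32;
import haxe.io.Bytes;
class Main {
static function main() {
var data = Bytes.ofString("Wikipedia");
// one-shot
trace(StringTools.hex(Adler32.make(data))); // 11E60398
// incremental, over two chunks
var a = new Adler32();
a.update(data, 0, 4);
a.update(data, 4, data.length - 4);
trace(a.get() == Adler32.make(data)); // true
}
}
```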
<https://api.haxe.org/haxe/crypto/Adler32.html>
Base64
=======
package [haxe.crypto](index)
*Available on all platforms*
Allows one to encode/decode String and bytes using Base64 encoding.
Static variables
----------------
### `staticread only[BYTES](#BYTES):[Bytes](../io/bytes "haxe.io.Bytes") = haxe.io.Bytes.ofString(CHARS)`
### `staticread only[CHARS](#CHARS):[String](../../string "String - The basic String class.") = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"`
### `staticread only[URL_BYTES](#URL_BYTES):[Bytes](../io/bytes "haxe.io.Bytes") = haxe.io.Bytes.ofString(URL_CHARS)`
### `staticread only[URL_CHARS](#URL_CHARS):[String](../../string "String - The basic String class.") = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"`
Static methods
--------------
### `static[decode](#decode)(str:[String](../../string "String - The basic String class."), complement:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true):[Bytes](../io/bytes "haxe.io.Bytes")`
### `static[encode](#encode)(bytes:[Bytes](../io/bytes "haxe.io.Bytes"), complement:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true):[String](../../string "String - The basic String class.")`
### `static[urlDecode](#urlDecode)(str:[String](../../string "String - The basic String class."), complement:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = false):[Bytes](../io/bytes "haxe.io.Bytes")`
### `static[urlEncode](#urlEncode)(bytes:[Bytes](../io/bytes "haxe.io.Bytes"), complement:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = false):[String](../../string "String - The basic String class.")`
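A minimal usage sketch (the input value and `Main` wrapper are illustrative):
```haxe
import haxe.crypto.Base64;
import haxe.io.Bytes;
class Main {
static function main() {
var b = Bytes.ofString("Haxe");
trace(Base64.encode(b)); // SGF4ZQ==
trace(Base64.decode("SGF4ZQ==").toString()); // Haxe
// complement=false omits the trailing '=' padding
trace(Base64.encode(b, false)); // SGF4ZQ
// the URL-safe variant uses '-' and '_' instead of '+' and '/'
trace(Base64.urlEncode(b)); // SGF4ZQ
}
}
```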
<https://api.haxe.org/haxe/crypto/Base64.html>
BaseCode
=========
package [haxe.crypto](index)
*Available on all platforms*
Allows one to encode/decode String and bytes using a power of two base dictionary.
Static methods
--------------
### `static[decode](#decode)(s:[String](../../string "String - The basic String class."), base:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
### `static[encode](#encode)(s:[String](../../string "String - The basic String class."), base:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Constructor
-----------
### `[new](#new)(base:[Bytes](../io/bytes "haxe.io.Bytes"))`
Methods
-------
### `[decodeBytes](#decodeBytes)(b:[Bytes](../io/bytes "haxe.io.Bytes")):[Bytes](../io/bytes "haxe.io.Bytes")`
### `[decodeString](#decodeString)(s:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
### `[encodeBytes](#encodeBytes)(b:[Bytes](../io/bytes "haxe.io.Bytes")):[Bytes](../io/bytes "haxe.io.Bytes")`
### `[encodeString](#encodeString)(s:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
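As an illustration, a 16-character dictionary yields hexadecimal (Base16) coding; the dictionary and `Main` wrapper are illustrative, and the dictionary length must be a power of two:
```haxe
import haxe.crypto.BaseCode;
import haxe.io.Bytes;
class Main {
static function main() {
// 16 characters = 4 bits per symbol, i.e. hex encoding
var base16 = new BaseCode(Bytes.ofString("0123456789abcdef"));
var encoded = base16.encodeString("Haxe");
trace(encoded); // 48617865
trace(base16.decodeString(encoded)); // Haxe
}
}
```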
<https://api.haxe.org/haxe/crypto/BaseCode.html>
Crc32
======
package [haxe.crypto](index)
*Available on all platforms*
Calculates the Crc32 of the given Bytes.
Static methods
--------------
### `static[make](#make)(data:[Bytes](../io/bytes "haxe.io.Bytes")):[Int](../../int "Int - The standard Int type.")`
Calculates the CRC32 of the given data bytes.
Constructor
-----------
### `inline[new](#new)()`
Methods
-------
### `inline[byte](#byte)(b:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
### `inline[get](#get)():[Int](../../int "Int - The standard Int type.")`
### `inline[update](#update)(b:[Bytes](../io/bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
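Both forms can be sketched as follows (input and `Main` wrapper are illustrative; 0xCBF43926 is the standard CRC-32 check value for "123456789"):
```haxe
import haxe.crypto.Crc32;
import haxe.io.Bytes;
class Main {
static function main() {
var data = Bytes.ofString("123456789");
// one-shot
trace(StringTools.hex(Crc32.make(data))); // CBF43926
// incremental, byte by byte
var c = new Crc32();
for (i in 0...data.length) c.byte(data.get(i));
trace(c.get() == Crc32.make(data)); // true
}
}
```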
<https://api.haxe.org/haxe/crypto/Crc32.html>
HashMethod
===========
package [haxe.crypto](index)
import [haxe.crypto.Hmac](hmac)
*Available on all platforms*
Hash methods for Hmac calculation.
Values
------
### `MD5`
### `SHA1`
### `SHA256`
<https://api.haxe.org/haxe/crypto/HashMethod.html>
Hmac
=====
package [haxe.crypto](index)
*Available on all platforms*
Calculates a Hmac of the given Bytes using a HashMethod.
Constructor
-----------
### `[new](#new)(hashMethod:[HashMethod](hashmethod "haxe.crypto.HashMethod - Hash methods for Hmac calculation."))`
Methods
-------
### `[make](#make)(key:[Bytes](../io/bytes "haxe.io.Bytes"), msg:[Bytes](../io/bytes "haxe.io.Bytes")):[Bytes](../io/bytes "haxe.io.Bytes")`
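A minimal sketch tying `Hmac` to the `HashMethod` values above (key and message are illustrative):
```haxe
import haxe.crypto.Hmac;
import haxe.io.Bytes;
class Main {
static function main() {
// choose the hash with a HashMethod value: MD5, SHA1 or SHA256
var hmac = new Hmac(SHA256);
var mac = hmac.make(Bytes.ofString("secret-key"), Bytes.ofString("message"));
trace(mac.toHex()); // 32-byte MAC, hex-encoded
}
}
```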
<https://api.haxe.org/haxe/crypto/Hmac.html>
Md5
====
package [haxe.crypto](index)
*Available on all platforms*
Creates a MD5 of a String.
Static methods
--------------
### `static[encode](#encode)(s:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
### `static[make](#make)(b:[Bytes](../io/bytes "haxe.io.Bytes")):[Bytes](../io/bytes "haxe.io.Bytes")`
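A usage sketch (the `Main` wrapper is illustrative; the first value is the well-known MD5 of the empty string):
```haxe
import haxe.crypto.Md5;
import haxe.io.Bytes;
class Main {
static function main() {
trace(Md5.encode("")); // d41d8cd98f00b204e9800998ecf8427e
trace(Md5.make(Bytes.ofString("Haxe")).toHex());
}
}
```
`Sha1`, `Sha224` and `Sha256` below expose the same `encode`/`make` pair and are used identically.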
<https://api.haxe.org/haxe/crypto/Md5.html>
Sha1
=====
package [haxe.crypto](index)
*Available on all platforms*
Creates a Sha1 of a String.
Static methods
--------------
### `static[encode](#encode)(s:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
### `static[make](#make)(b:[Bytes](../io/bytes "haxe.io.Bytes")):[Bytes](../io/bytes "haxe.io.Bytes")`
<https://api.haxe.org/haxe/crypto/Sha1.html>
Sha224
=======
package [haxe.crypto](index)
*Available on all platforms*
Creates a Sha224 of a String.
Static methods
--------------
### `static[encode](#encode)(s:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
### `static[make](#make)(b:[Bytes](../io/bytes "haxe.io.Bytes")):[Bytes](../io/bytes "haxe.io.Bytes")`
Constructor
-----------
### `[new](#new)()`
*Available on cs, js, neko, cpp, macro, java, lua, python, hl, flash*
<https://api.haxe.org/haxe/crypto/Sha224.html>
Sha256
=======
package [haxe.crypto](index)
*Available on all platforms*
Creates a Sha256 of a String.
Static methods
--------------
### `static[encode](#encode)(s:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
### `static[make](#make)(b:[Bytes](../io/bytes "haxe.io.Bytes")):[Bytes](../io/bytes "haxe.io.Bytes")`
<https://api.haxe.org/haxe/crypto/Sha256.html>
StringIterator
===============
package [haxe.iterators](index)
*Available on all platforms*
This iterator can be used to iterate over char codes in a string.
Note that char codes may differ across platforms because of the different internal encoding of strings in different runtimes.
Constructor
-----------
### `inline[new](#new)(s:[String](../../string "String - The basic String class."))`
Create a new `[StringIterator](stringiterator#StringIterator)` over String `s`.
Methods
-------
### `inline[hasNext](#hasNext)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### See `[Iterator.hasNext](../../iterator#hasNext)`
### `inline[next](#next)():[Int](../../int "Int - The standard Int type.")`
### See `[Iterator.next](../../iterator#next)`
<https://api.haxe.org/haxe/iterators/StringIterator.html>
StringKeyValueIterator
=======================
package [haxe.iterators](index)
*Available on all platforms*
This iterator can be used to iterate over char indexes and char codes in a string.
Note that char codes may differ across platforms because of different internal encoding of strings in different runtimes.
Constructor
-----------
### `inline[new](#new)(s:[String](../../string "String - The basic String class."))`
Create a new `[StringKeyValueIterator](stringkeyvalueiterator#StringKeyValueIterator)` over String `s`.
Methods
-------
### `inline[hasNext](#hasNext)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### See `[KeyValueIterator.hasNext](../../keyvalueiterator#hasNext)`
### `inline[next](#next)():{value:[Int](../../int "Int - The standard Int type."), key:[Int](../../int "Int - The standard Int type.")}`
### See `[KeyValueIterator.next](../../keyvalueiterator#next)`
<https://api.haxe.org/haxe/iterators/StringKeyValueIterator.html>
ClassFieldOccurrence<T>
========================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[resolution](#resolution):[FieldResolution](fieldresolution "haxe.display.FieldResolution")`
### `optional[origin](#origin):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ClassFieldOrigin](classfieldorigin "haxe.display.ClassFieldOrigin")<T>>`
### `[field](#field):[JsonClassField](jsonclassfield "haxe.display.JsonClassField")`
<https://api.haxe.org/haxe/display/ClassFieldOccurrence.html>
ClassFieldOrigin<T>
====================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[kind](#kind):[ClassFieldOriginKind](classfieldoriginkind "haxe.display.ClassFieldOriginKind")<T>`
### `optional[args](#args):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
<https://api.haxe.org/haxe/display/ClassFieldOrigin.html>
ClassFieldOriginKind<T>([Int](../../int "Int - The standard Int type."))
=========================================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inlineread only[AnonymousStructure](#AnonymousStructure):[ClassFieldOriginKind](classfieldoriginkind "haxe.display.ClassFieldOriginKind")<[JsonAnon](jsonanon "haxe.display.JsonAnon")> = 4`
This field doesn't belong to any named type, just an anonymous structure.
### `inlineread only[BuiltIn](#BuiltIn):[ClassFieldOriginKind](classfieldoriginkind "haxe.display.ClassFieldOriginKind")<[NoData](nodata "haxe.display.NoData")> = 5`
Special fields built into the compiler, such as:
* `code` on single-character Strings
* `bind()` on functions
### `inlineread only[Parent](#Parent):[ClassFieldOriginKind](classfieldoriginkind "haxe.display.ClassFieldOriginKind")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<T>> = 2`
The field is declared on a parent type, such as:
* a super class field that is not overridden
* a forwarded abstract field
### `inlineread only[Self](#Self):[ClassFieldOriginKind](classfieldoriginkind "haxe.display.ClassFieldOriginKind")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<T>> = 0`
The field is declared on the current type itself.
### `inlineread only[StaticExtension](#StaticExtension):[ClassFieldOriginKind](classfieldoriginkind "haxe.display.ClassFieldOriginKind")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<T>> = 3`
The field is a static extension method brought into context with the `using` keyword.
### `inlineread only[StaticImport](#StaticImport):[ClassFieldOriginKind](classfieldoriginkind "haxe.display.ClassFieldOriginKind")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<T>> = 1`
The field is a static field brought into context via a static import (`import [pack.Module.Type.field](../../type#field)`).
### `inlineread only[Unknown](#Unknown):[ClassFieldOriginKind](classfieldoriginkind "haxe.display.ClassFieldOriginKind")<[NoData](nodata "haxe.display.NoData")> = 6`
The origin of this class field is unknown.
<https://api.haxe.org/haxe/display/ClassFieldOriginKind.html>
CompletionItemResolveParams
============================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
CompletionItem Resolve
Fields
------
### `[index](#index):[Int](../../int "Int - The standard Int type.")`
<https://api.haxe.org/haxe/display/CompletionItemResolveParams.html>
CompletionItemResolveResult
============================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Alias
-----
*alias for* `[haxe.display.Response](response "haxe.display.Response")<{item:[haxe.display.DisplayItem](displayitem "haxe.display.DisplayItem")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>}>`
<https://api.haxe.org/haxe/display/CompletionItemResolveResult.html>
CompletionResult
=================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Alias
-----
*alias for* `[haxe.display.Response](response "haxe.display.Response")<[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[haxe.display.CompletionResponse](completionresponse "haxe.display.CompletionResponse")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), [Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>>`
<https://api.haxe.org/haxe/display/CompletionResult.html>
CompletionMode<T>
==================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[kind](#kind):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T>`
### `optional[args](#args):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
<https://api.haxe.org/haxe/display/CompletionMode.html>
CompletionModeKind<T>([Int](../../int "Int - The standard Int type."))
=======================================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [Extends](#Extends):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 5`
### `inline read only [Field](#Field):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<[FieldCompletionSubject](fieldcompletionsubject "haxe.display.FieldCompletionSubject")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>> = 0`
### `inline read only [Implements](#Implements):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 6`
### `inline read only [Import](#Import):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 8`
### `inline read only [Metadata](#Metadata):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 3`
### `inline read only [New](#New):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 10`
### `inline read only [Override](#Override):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 12`
### `inline read only [Pattern](#Pattern):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<[PatternCompletion](patterncompletion "haxe.display.PatternCompletion")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>> = 11`
### `inline read only [StructExtension](#StructExtension):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<[StructExtensionCompletion](structextensioncompletion "haxe.display.StructExtensionCompletion")> = 7`
### `inline read only [StructureField](#StructureField):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 1`
### `inline read only [Toplevel](#Toplevel):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<[ToplevelCompletion](toplevelcompletion "haxe.display.ToplevelCompletion")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>> = 2`
### `inline read only [TypeDeclaration](#TypeDeclaration):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 14`
### `inline read only [TypeHint](#TypeHint):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 4`
### `inline read only [TypeRelation](#TypeRelation):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 13`
### `inline read only [Using](#Using):[CompletionModeKind](completionmodekind "haxe.display.CompletionModeKind")<T> = 9`
© 2005–2020 Haxe Foundation
Licensed under an MIT license.

<https://api.haxe.org/haxe/display/CompletionModeKind.html>

CompletionParams
=================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Completion
Fields
------
### `[wasAutoTriggered](#wasAutoTriggered):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[offset](#offset):[Int](../../int "Int - The standard Int type.")`
Unicode character offset in the file.
### `optional [meta](#meta):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[String](../../string "String - The basic String class.")>>`
List of metas to include in responses.
### `[file](#file):[FsPath](fspath "haxe.display.FsPath")`
### `optional [contents](#contents):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
<https://api.haxe.org/haxe/display/CompletionParams.html>

ConfigureParams
================
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `optional final read only [print](#print):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ConfigurePrintParams](configureprintparams "haxe.display.ConfigurePrintParams")>`
### `optional final read only [noModuleChecks](#noModuleChecks):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional final read only [legacyCompletion](#legacyCompletion):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
<https://api.haxe.org/haxe/display/ConfigureParams.html>

ContextParams
==============
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `final read only [signature](#signature):[String](../../string "String - The basic String class.")`
<https://api.haxe.org/haxe/display/ContextParams.html>

CompletionResponse<T1, T2>
===========================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `optional [replaceRange](#replaceRange):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Range](range "haxe.display.Range - A range in a text document expressed as (1-based) start and end positions.")>`
### `[mode](#mode):[CompletionMode](completionmode "haxe.display.CompletionMode")<T2>`
### `[items](#items):[Array](../../array "Array")<[DisplayItem](displayitem "haxe.display.DisplayItem")<T1>>`
### `optional [isIncomplete](#isIncomplete):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [filterString](#filterString):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
<https://api.haxe.org/haxe/display/CompletionResponse.html>

ConfigurePrintParams
=====================
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `optional [unchangedContent](#unchangedContent):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [uncaughtError](#uncaughtError):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [stats](#stats):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [socketMessage](#socketMessage):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [skippingDep](#skippingDep):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [signature](#signature):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [reusing](#reusing):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [removedDirectory](#removedDirectory):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [parsed](#parsed):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [notCached](#notCached):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [newContext](#newContext):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [modulePathChanged](#modulePathChanged):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [message](#message):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [foundDirectories](#foundDirectories):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [displayPosition](#displayPosition):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [defines](#defines):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [completion](#completion):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [changedDirectories](#changedDirectories):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [cachedModules](#cachedModules):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [arguments](#arguments):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional [addedDirectory](#addedDirectory):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
<https://api.haxe.org/haxe/display/ConfigurePrintParams.html>

Define
=======
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[value](#value):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
### `[platforms](#platforms):[Array](../../array "Array")<[Platform](platform "haxe.display.Platform")>`
### `[parameters](#parameters):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
### `[name](#name):[String](../../string "String - The basic String class.")`
### `[links](#links):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
### `[doc](#doc):[JsonDoc](jsondoc "haxe.display.JsonDoc")`
<https://api.haxe.org/haxe/display/Define.html>

DeterminePackageResult
=======================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
DeterminePackage
Alias
-----
*alias for* `[haxe.display.Response](response "haxe.display.Response")<[Array](../../array "Array")<[String](../../string "String - The basic String class.")>>`
<https://api.haxe.org/haxe/display/DeterminePackageResult.html>

DisplayItem<T>
===============
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `optional [type](#type):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
### `[kind](#kind):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<T>`
### `optional [index](#index):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Int](../../int "Int - The standard Int type.")>`
### `[args](#args):T`
<https://api.haxe.org/haxe/display/DisplayItem.html>

DisplayItemKind<T>([String](../../string "String - The basic String class."))
==============================================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [AnonymousStructure](#AnonymousStructure):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[JsonAnon](jsonanon "haxe.display.JsonAnon")> = "AnonymousStructure"`
### `inline read only [ClassField](#ClassField):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[ClassFieldOccurrence](classfieldoccurrence "haxe.display.ClassFieldOccurrence")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>> = "ClassField"`
### `inline read only [Define](#Define):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[Define](define "haxe.display.Define")> = "Define"`
### `inline read only [EnumAbstractField](#EnumAbstractField):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[ClassFieldOccurrence](classfieldoccurrence "haxe.display.ClassFieldOccurrence")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>> = "EnumAbstractField"`
Only for the enum values in enum abstracts, other fields use `[ClassField](../macro/classfield#ClassField)`.
### `inline read only [EnumField](#EnumField):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[EnumFieldOccurrence](enumfieldoccurrence "haxe.display.EnumFieldOccurrence")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>> = "EnumField"`
### `inline read only [Expression](#Expression):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[JsonTExpr](jsontexpr "haxe.display.JsonTExpr")> = "Expression"`
### `inline read only [Keyword](#Keyword):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[Keyword](keyword "haxe.display.Keyword")> = "Keyword"`
### `inline read only [Literal](#Literal):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[DisplayLiteral](displayliteral "haxe.display.DisplayLiteral")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>> = "Literal"`
### `inline read only [Local](#Local):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[DisplayLocal](displaylocal "haxe.display.DisplayLocal")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>> = "Local"`
### `inline read only [Metadata](#Metadata):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[Metadata](metadata "haxe.display.Metadata")> = "Metadata"`
### `inline read only [Module](#Module):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[Module](module "haxe.display.Module")> = "Module"`
### `inline read only [Package](#Package):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[Package](package "haxe.display.Package")> = "Package"`
### `inline read only [Type](#Type):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[DisplayModuleType](displaymoduletype "haxe.display.DisplayModuleType")> = "Type"`
### `inline read only [TypeParameter](#TypeParameter):[DisplayItemKind](displayitemkind "haxe.display.DisplayItemKind")<[DisplayModuleTypeParameter](displaymoduletypeparameter "haxe.display.DisplayModuleTypeParameter")> = "TypeParameter"`
<https://api.haxe.org/haxe/display/DisplayItemKind.html>

DisplayItemOccurrence<T>
=========================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[range](#range):[Range](range "haxe.display.Range - A range in a text document expressed as (1-based) start and end positions.")`
### `optional [moduleTypeFollowed](#moduleTypeFollowed):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
### `optional [moduleType](#moduleType):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
### `[item](#item):[DisplayItem](displayitem "haxe.display.DisplayItem")<T>`
<https://api.haxe.org/haxe/display/DisplayItemOccurrence.html>

DisplayLiteral<T>
==================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[name](#name):[String](../../string "String - The basic String class.")`
<https://api.haxe.org/haxe/display/DisplayLiteral.html>

DisplayLocal<T>
================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[type](#type):[JsonType](jsontype "haxe.display.JsonType")<T>`
### `[pos](#pos):[JsonPos](jsonpos "haxe.display.JsonPos")`
### `[origin](#origin):[LocalOrigin](localorigin "haxe.display.LocalOrigin")`
### `[name](#name):[String](../../string "String - The basic String class.")`
### `[meta](#meta):[JsonMetadata](jsonmetadata "haxe.display.JsonMetadata")`
### `[isInline](#isInline):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[isFinal](#isFinal):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[id](#id):[Int](../../int "Int - The standard Int type.")`
### `optional [extra](#extra):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{params:[Array](../../array "Array")<[JsonTypeParameter](jsontypeparameter "haxe.display.JsonTypeParameter")>, expr:[JsonExpr](jsonexpr "haxe.display.JsonExpr")}>`
### `[capture](#capture):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
<https://api.haxe.org/haxe/display/DisplayLocal.html>

DisplayMethods
===============
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Methods of the JSON-RPC-based `--display` protocol in Haxe 4. A lot of the methods are *inspired* by the Language Server Protocol, but there is **no** intention to be directly compatible with it.
Static variables
----------------
### `static inline read only [Completion](#Completion):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[CompletionParams](completionparams "haxe.display.CompletionParams"), [CompletionResult](completionresult "haxe.display.CompletionResult")> = new HaxeRequestMethod<CompletionParams,CompletionResult>("display/completion")`
The completion request is sent from the client to Haxe to request code completion. Haxe automatically determines the type of completion to use based on the passed position, see `CompletionResultKind`.
### `static inline read only [CompletionItemResolve](#CompletionItemResolve):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[CompletionItemResolveParams](completionitemresolveparams "haxe.display.CompletionItemResolveParams"), [CompletionItemResolveResult](completionitemresolveresult "haxe.display.CompletionItemResolveResult")> = new HaxeRequestMethod<CompletionItemResolveParams,CompletionItemResolveResult>("display/completionItem/resolve")`
The request is sent from the client to Haxe to resolve additional information for a given completion item.
### `static inline read only [DeterminePackage](#DeterminePackage):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[FileParams](fileparams "haxe.display.FileParams"), [DeterminePackageResult](determinepackageresult "haxe.display.DeterminePackageResult")> = new HaxeRequestMethod<FileParams,DeterminePackageResult>("display/package")`
This request is sent from the client to Haxe to determine the package for a given file, based on class paths configuration.
### `static inline read only [FindReferences](#FindReferences):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[FindReferencesParams](findreferencesparams "haxe.display.FindReferencesParams"), [GotoDefinitionResult](gotodefinitionresult "haxe.display.GotoDefinitionResult")> = new HaxeRequestMethod<FindReferencesParams,GotoDefinitionResult>("display/references")`
The find references request is sent from the client to Haxe to find locations that reference the symbol at a given text document position.
### `static inline read only [GotoDefinition](#GotoDefinition):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[PositionParams](positionparams "haxe.display.PositionParams"), [GotoDefinitionResult](gotodefinitionresult "haxe.display.GotoDefinitionResult")> = new HaxeRequestMethod<PositionParams,GotoDefinitionResult>("display/definition")`
The goto definition request is sent from the client to Haxe to resolve the definition location(s) of a symbol at a given text document position.
### `static inline read only [GotoImplementation](#GotoImplementation):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[PositionParams](positionparams "haxe.display.PositionParams"), [GotoDefinitionResult](gotodefinitionresult "haxe.display.GotoDefinitionResult")> = new HaxeRequestMethod<PositionParams,GotoDefinitionResult>("display/implementation")`
The goto implementation request is sent from the client to Haxe to resolve the implementation location(s) of a symbol at a given text document position.
### `static inline read only [GotoTypeDefinition](#GotoTypeDefinition):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[PositionParams](positionparams "haxe.display.PositionParams"), [GotoTypeDefinitionResult](gototypedefinitionresult "haxe.display.GotoTypeDefinitionResult")> = new HaxeRequestMethod<PositionParams,GotoTypeDefinitionResult>("display/typeDefinition")`
The goto type definition request is sent from the client to Haxe to resolve the type definition location(s) of a symbol at a given text document position.
### `static inline read only [Hover](#Hover):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[PositionParams](positionparams "haxe.display.PositionParams"), [HoverResult](hoverresult "haxe.display.HoverResult")> = new HaxeRequestMethod<PositionParams,HoverResult>("display/hover")`
The hover request is sent from the client to Haxe to request hover information at a given text document position.
### `static inline read only [SignatureHelp](#SignatureHelp):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[SignatureHelpParams](signaturehelpparams "haxe.display.SignatureHelpParams"), [SignatureHelpResult](signaturehelpresult "haxe.display.SignatureHelpResult")> = new HaxeRequestMethod<SignatureHelpParams,SignatureHelpResult>("display/signatureHelp")`
The signature help request is sent from the client to Haxe to request signature information at a given cursor position.
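As an illustration of how these methods are used: each request is a JSON-RPC message whose `method` is the string shown above and whose `params` match the corresponding params type. The payload below is a sketch only; the file path and offset are made-up values, with the fields taken from `CompletionParams`:

```json
{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "display/completion",
    "params": {
        "file": "src/Main.hx",
        "offset": 120,
        "wasAutoTriggered": true
    }
}
```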
<https://api.haxe.org/haxe/display/DisplayMethods.html>

DisplayModuleType
==================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[pos](#pos):[JsonPos](jsonpos "haxe.display.JsonPos")`
### `[path](#path):[JsonTypePath](jsontypepath "haxe.display.JsonTypePath")`
### `[params](#params):[Array](../../array "Array")<[DisplayModuleTypeParameter](displaymoduletypeparameter "haxe.display.DisplayModuleTypeParameter")>`
### `[meta](#meta):[JsonMetadata](jsonmetadata "haxe.display.JsonMetadata")`
### `[kind](#kind):[DisplayModuleTypeKind](displaymoduletypekind "haxe.display.DisplayModuleTypeKind")`
### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[isFinal](#isFinal):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[doc](#doc):[JsonDoc](jsondoc "haxe.display.JsonDoc")`
<https://api.haxe.org/haxe/display/DisplayModuleType.html>

DisplayModuleTypeParameter
===========================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[name](#name):[String](../../string "String - The basic String class.")`
### `[meta](#meta):[JsonMetadata](jsonmetadata "haxe.display.JsonMetadata")`
### `[constraints](#constraints):[Array](../../array "Array")<[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
<https://api.haxe.org/haxe/display/DisplayModuleTypeParameter.html>

EnumFieldOccurrence<T>
=======================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[resolution](#resolution):[FieldResolution](fieldresolution "haxe.display.FieldResolution")`
### `optional [origin](#origin):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[EnumFieldOrigin](enumfieldorigin "haxe.display.EnumFieldOrigin")<T>>`
### `[field](#field):[JsonEnumField](jsonenumfield "haxe.display.JsonEnumField")`
<https://api.haxe.org/haxe/display/EnumFieldOccurrence.html>

DisplayModuleTypeKind([Int](../../int "Int - The standard Int type."))
=======================================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [Abstract](#Abstract):[DisplayModuleTypeKind](displaymoduletypekind "haxe.display.DisplayModuleTypeKind") = 3`
### `inline read only [Class](#Class):[DisplayModuleTypeKind](displaymoduletypekind "haxe.display.DisplayModuleTypeKind") = 0`
### `inline read only [Enum](#Enum):[DisplayModuleTypeKind](displaymoduletypekind "haxe.display.DisplayModuleTypeKind") = 2`
### `inline read only [EnumAbstract](#EnumAbstract):[DisplayModuleTypeKind](displaymoduletypekind "haxe.display.DisplayModuleTypeKind") = 4`
### `inline read only [Interface](#Interface):[DisplayModuleTypeKind](displaymoduletypekind "haxe.display.DisplayModuleTypeKind") = 1`
### `inline read only [Struct](#Struct):[DisplayModuleTypeKind](displaymoduletypekind "haxe.display.DisplayModuleTypeKind") = 6`
A `typedef` that is an alias for an anonymous structure.
### `inline read only [TypeAlias](#TypeAlias):[DisplayModuleTypeKind](displaymoduletypekind "haxe.display.DisplayModuleTypeKind") = 5`
A `typedef` that is just an alias for another type.
<https://api.haxe.org/haxe/display/DisplayModuleTypeKind.html>

EnumFieldOrigin<T>
===================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[kind](#kind):[EnumFieldOriginKind](enumfieldoriginkind "haxe.display.EnumFieldOriginKind")<T>`
### `optional [args](#args):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
<https://api.haxe.org/haxe/display/EnumFieldOrigin.html>

EnumFieldOriginKind<T>([Int](../../int "Int - The standard Int type."))
========================================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [Self](#Self):[EnumFieldOriginKind](enumfieldoriginkind "haxe.display.EnumFieldOriginKind")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<T>> = 0`
The enum value is declared on the current type itself.
### `inline read only [StaticImport](#StaticImport):[EnumFieldOriginKind](enumfieldoriginkind "haxe.display.EnumFieldOriginKind")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<T>> = 1`
The enum value is brought into context via a static import (`import [pack.Module.Enum.Value](../../enum#Value)`).
<https://api.haxe.org/haxe/display/EnumFieldOriginKind.html>

FieldCompletionSubject<T>
==========================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[range](#range):[Range](range "haxe.display.Range - A range in a text document expressed as (1-based) start and end positions.")`
### `optional [moduleTypeFollowed](#moduleTypeFollowed):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
### `optional [moduleType](#moduleType):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
### `optional [keyValueIterator](#keyValueIterator):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{value:[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>, key:[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>}>`
### `optional [iterator](#iterator):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{type:[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>}>`
### `[item](#item):[DisplayItem](displayitem "haxe.display.DisplayItem")<T>`
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
FieldResolution
================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[qualifier](#qualifier):[String](../../string "String - The basic String class.")`
The qualifier that has to be inserted to use the field if `!isQualified`. Can either be `this` or `super` for instance fields, or the type name for `static` fields.

### `[isQualified](#isQualified):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether it's valid to use the unqualified name of the field or not. This is `false` if the identifier is shadowed.
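As an illustrative sketch (not part of the Haxe API), a completion client might consume these two fields like this; the `insertText` helper is an assumption for demonstration:

```typescript
// Illustrative sketch only: a client-side helper that decides how to
// insert a field completion based on a FieldResolution payload.
interface FieldResolution {
  isQualified: boolean; // false if the unqualified identifier is shadowed
  qualifier: string;    // "this", "super", or a type name
}

// If the unqualified name is shadowed, prefix it with the qualifier.
function insertText(name: string, res: FieldResolution): string {
  return res.isQualified ? name : `${res.qualifier}.${name}`;
}

console.log(insertText("count", { isQualified: false, qualifier: "this" }));
```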
FileParams
===========
package [haxe.display](index)
import [haxe.display.Protocol](protocol)
*Available on all platforms*
Fields
------
### `[file](#file):[FsPath](fspath "haxe.display.FsPath")`
FindReferencesKind([String](../../string "String - The basic String class."))
==============================================================================
package [haxe.display](index)
to [String](../../string "String - The basic String class.")
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [Direct](#Direct):[FindReferencesKind](findreferenceskind "haxe.display.FindReferencesKind") = "direct"`
Find only direct references to the requested symbol. Does not look for references to parent or overriding methods.
### `inline read only [WithBaseAndDescendants](#WithBaseAndDescendants):[FindReferencesKind](findreferenceskind "haxe.display.FindReferencesKind") = "withBaseAndDescendants"`
Find references to the base field and all the overriding fields in the inheritance chain.
### `inline read only [WithDescendants](#WithDescendants):[FindReferencesKind](findreferenceskind "haxe.display.FindReferencesKind") = "withDescendants"`
Find references to the requested field and references to all descendants of the requested field.
FsPath([String](../../string "String - The basic String class."))
==================================================================
package [haxe.display](index)
*Available on all platforms*
Methods
-------
### `inline [toString](#toString)():[String](../../string "String - The basic String class.")`
FindReferencesParams
=====================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
FindReferences
Fields
------
### `[offset](#offset):[Int](../../int "Int - The standard Int type.")`
Unicode character offset in the file.

### `optional [kind](#kind):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[FindReferencesKind](findreferenceskind "haxe.display.FindReferencesKind")>`
### `[file](#file):[FsPath](fspath "haxe.display.FsPath")`
### `optional [contents](#contents):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
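As a sketch, a find-references request payload can be built as plain JSON from these fields together with the `FindReferencesKind` values documented above; the file path and offset below are made-up example values:

```typescript
// Illustrative sketch: building a FindReferencesParams payload. The
// field names and kind strings mirror the documented structures.
type FindReferencesKind = "direct" | "withBaseAndDescendants" | "withDescendants";

interface FindReferencesParams {
  file: string;              // FsPath
  offset: number;            // Unicode character offset in the file
  kind?: FindReferencesKind; // optional; server behavior decides the default
  contents?: string;         // optional unsaved buffer contents
}

const params: FindReferencesParams = {
  file: "src/Main.hx",
  offset: 120,
  kind: "withBaseAndDescendants",
};

console.log(JSON.stringify(params));
```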
GotoDefinitionResult
=====================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
GotoDefinition
Alias
-----
*alias for* `[haxe.display.Response](response "haxe.display.Response")<[Array](../../array "Array")<[haxe.display.Location](location "haxe.display.Location - Represents a location inside a resource, such as a line inside a text file.")>>`
GotoTypeDefinitionResult
=========================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
GotoTypeDefinition
Alias
-----
*alias for* `[haxe.display.Response](response "haxe.display.Response")<[Array](../../array "Array")<[haxe.display.Location](location "haxe.display.Location - Represents a location inside a resource, such as a line inside a text file.")>>`
HaxeContextMemoryResult
========================
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `final read only [syntaxCache](#syntaxCache):{size:[Int](../../int "Int - The standard Int type.")}`
### `final read only [moduleCache](#moduleCache):{size:[Int](../../int "Int - The standard Int type."), list:[Array](../../array "Array")<{size:[Int](../../int "Int - The standard Int type."), path:[String](../../string "String - The basic String class."), hasTypes:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")}>}`
### `optional final read only [leaks](#leaks):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<{path:[String](../../string "String - The basic String class."), leaks:[Array](../../array "Array")<{path:[String](../../string "String - The basic String class.")}>}>>`
HaxeMemoryResult
=================
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `final read only [memory](#memory):{totalCache:[Int](../../int "Int - The standard Int type."), nativeLibCache:[Int](../../int "Int - The standard Int type."), haxelibCache:[Int](../../int "Int - The standard Int type."), directoryCache:[Int](../../int "Int - The standard Int type."), contextCache:[Int](../../int "Int - The standard Int type."), additionalSizes:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<{size:[Int](../../int "Int - The standard Int type."), name:[String](../../string "String - The basic String class.")}>>}`
### `final read only [contexts](#contexts):[Array](../../array "Array")<{size:[Int](../../int "Int - The standard Int type."), context:[HaxeServerContext](haxeservercontext "haxe.display.HaxeServerContext")}>`
HaxeModuleMemoryResult
=======================
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `final read only [types](#types):[Array](../../array "Array")<{size:[Int](../../int "Int - The standard Int type."), pos:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Location](location "haxe.display.Location - Represents a location inside a resource, such as a line inside a text file.")>, name:[String](../../string "String - The basic String class."), fields:[Array](../../array "Array")<{size:[Int](../../int "Int - The standard Int type."), pos:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Location](location "haxe.display.Location - Represents a location inside a resource, such as a line inside a text file.")>, name:[String](../../string "String - The basic String class.")}>}>`
### `final read only [moduleExtra](#moduleExtra):[Int](../../int "Int - The standard Int type.")`
HaxeNotificationMethod<TParams>([String](../../string "String - The basic String class."))
===========================================================================================
package [haxe.display](index)
to [String](../../string "String - The basic String class.")
import [haxe.display.Protocol](protocol)
*Available on all platforms*
HaxeRequestMethod<TParams, TResponse>([String](../../string "String - The basic String class."))
=================================================================================================
package [haxe.display](index)
to [String](../../string "String - The basic String class.")
import [haxe.display.Protocol](protocol)
*Available on all platforms*
HaxeResponseErrorData
======================
package [haxe.display](index)
import [haxe.display.Protocol](protocol)
*Available on all platforms*
Alias
-----
*alias for* `[Array](../../array "Array")<{severity:[haxe.display.HaxeResponseErrorSeverity](haxeresponseerrorseverity "haxe.display.HaxeResponseErrorSeverity"), message:[String](../../string "String - The basic String class."), location:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[haxe.display.Location](location "haxe.display.Location - Represents a location inside a resource, such as a line inside a text file.")>}>`
HaxeResponseErrorSeverity([Int](../../int "Int - The standard Int type."))
===========================================================================
package [haxe.display](index)
import [haxe.display.Protocol](protocol)
*Available on all platforms*
Variables
---------
### `inline read only [Error](#Error):[HaxeResponseErrorSeverity](haxeresponseerrorseverity "haxe.display.HaxeResponseErrorSeverity") = 1`
### `inline read only [Hint](#Hint):[HaxeResponseErrorSeverity](haxeresponseerrorseverity "haxe.display.HaxeResponseErrorSeverity") = 3`
### `inline read only [Warning](#Warning):[HaxeResponseErrorSeverity](haxeresponseerrorseverity "haxe.display.HaxeResponseErrorSeverity") = 2`
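A client consuming error responses might map these numeric severities to display labels; this is an illustrative sketch, not part of the protocol:

```typescript
// Illustrative sketch: mapping the documented numeric severities
// (Error = 1, Warning = 2, Hint = 3) to labels a client might display.
function severityLabel(severity: number): string {
  switch (severity) {
    case 1: return "error";
    case 2: return "warning";
    case 3: return "hint";
    default: return "unknown";
  }
}

console.log(severityLabel(1));
```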
HaxeServerContext
==================
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `final read only [signature](#signature):[String](../../string "String - The basic String class.")`
### `final read only [platform](#platform):[String](../../string "String - The basic String class.")`
### `final read only [index](#index):[Int](../../int "Int - The standard Int type.")`
### `final read only [desc](#desc):[String](../../string "String - The basic String class.")`
### `final read only [defines](#defines):[Array](../../array "Array")<{value:[String](../../string "String - The basic String class."), key:[String](../../string "String - The basic String class.")}>`
### `final read only [classPaths](#classPaths):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
HoverDisplayItemOccurence<T>
=============================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[range](#range):[Range](range "haxe.display.Range - A range in a text document expressed as (1-based) start and end positions.")`
### `optional [moduleTypeFollowed](#moduleTypeFollowed):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
### `optional [moduleType](#moduleType):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
### `[item](#item):[DisplayItem](displayitem "haxe.display.DisplayItem")<T>`
### `optional [expected](#expected):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{type:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>, name:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{name:[String](../../string "String - The basic String class."), kind:[HoverExpectedNameKind](hoverexpectednamekind "haxe.display.HoverExpectedNameKind")}>}>`
HoverExpectedNameKind([Int](../../int "Int - The standard Int type."))
=======================================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [FunctionArgument](#FunctionArgument):[HoverExpectedNameKind](hoverexpectednamekind "haxe.display.HoverExpectedNameKind") = 0`
### `inline read only [StructureField](#StructureField):[HoverExpectedNameKind](hoverexpectednamekind "haxe.display.HoverExpectedNameKind") = 1`
HoverResult
============
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Hover
Alias
-----
*alias for* `[haxe.display.Response](response "haxe.display.Response")<[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[haxe.display.HoverDisplayItemOccurence](hoverdisplayitemoccurence "haxe.display.HoverDisplayItemOccurence")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>>`
ImportStatus([Int](../../int "Int - The standard Int type."))
==============================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [Imported](#Imported):[ImportStatus](importstatus "haxe.display.ImportStatus") = 0`
This type is already available with its unqualified name, for one of these reasons:

- it's a toplevel type
- it's imported with an `import` in the current module
- it's imported in an `import.hx` file

### `inline read only [Shadowed](#Shadowed):[ImportStatus](importstatus "haxe.display.ImportStatus") = 2`
A type with the same name is already imported in the module. The fully qualified name has to be used to access it.
### `inline read only [Unimported](#Unimported):[ImportStatus](importstatus "haxe.display.ImportStatus") = 1`
The type is currently not imported. It can be accessed either with its fully qualified name or by inserting an import.
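A completion client might use these values to decide how a type has to be referenced; the qualification policy below is an illustrative assumption, not prescribed by the protocol:

```typescript
// Illustrative sketch: choosing how to reference a type based on the
// documented ImportStatus values (Imported = 0, Unimported = 1,
// Shadowed = 2).
function referenceName(name: string, fullPath: string, status: number): string {
  switch (status) {
    case 0: return name;     // Imported: unqualified name is available
    case 1: return name;     // Unimported: usable after inserting an import
    case 2: return fullPath; // Shadowed: the fully qualified name is required
    default: return fullPath;
  }
}

console.log(referenceName("StringMap", "haxe.ds.StringMap", 2));
```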
InitializeParams
=================
package [haxe.display](index)
import [haxe.display.Protocol](protocol)
*Available on all platforms*
Fields
------
### `optional final read only [supportsResolve](#supportsResolve):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
### `optional final read only [maxCompletionItems](#maxCompletionItems):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Int](../../int "Int - The standard Int type.")>`
The maximum number of completion items to return.

### `optional final read only [exclude](#exclude):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[String](../../string "String - The basic String class.")>>`
Dot paths to exclude from readClassPaths / toplevel completion.
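As a sketch, an initialize payload using these fields could look like the following; all values (including the excluded dot path) are made-up examples:

```typescript
// Illustrative sketch of an InitializeParams payload; the field names
// mirror the documented structure, the values are examples only.
const initializeParams = {
  supportsResolve: true,
  maxCompletionItems: 1000, // cap on returned completion items
  exclude: ["sys.db"],      // dot paths excluded from toplevel completion
};

console.log(JSON.stringify(initializeParams));
```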
InitializeResult
=================
package [haxe.display](index)
import [haxe.display.Protocol](protocol)
*Available on all platforms*
Alias
-----
*alias for* `[haxe.display.Response](response "haxe.display.Response")<{protocolVersion:[haxe.display.Version](version "haxe.display.Version - Represents a semantic version, see https://semver."), methods:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>, haxeVersion:[haxe.display.Version](version "haxe.display.Version - Represents a semantic version, see https://semver.")}>`
JsonAbstract
=============
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[unops](#unops):[Array](../../array "Array")<[JsonAbstractUnop](jsonabstractunop "haxe.display.JsonAbstractUnop")>`
### `[type](#type):[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `[to](#to):[Array](../../array "Array")<[JsonAbstractCast](jsonabstractcast "haxe.display.JsonAbstractCast")>`
### `[resolve](#resolve):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonClassFieldReference](jsonclassfieldreference "haxe.display.JsonClassFieldReference")>`
### `[impl](#impl):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonClass](jsonclass "haxe.display.JsonClass")>`
### `[from](#from):[Array](../../array "Array")<[JsonAbstractCast](jsonabstractcast "haxe.display.JsonAbstractCast")>`
### `[binops](#binops):[Array](../../array "Array")<[JsonAbstractBinop](jsonabstractbinop "haxe.display.JsonAbstractBinop")>`
### `[array](#array):[JsonClassFields](jsonclassfields "haxe.display.JsonClassFields")`
JsonAbstractBinop
==================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[op](#op):[JsonBinop](jsonbinop "haxe.display.JsonBinop")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `[field](#field):[JsonClassFieldReference](jsonclassfieldreference "haxe.display.JsonClassFieldReference")`
JsonAbstractCast
=================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[t](#t):[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `[field](#field):[JsonClassFieldReference](jsonclassfieldreference "haxe.display.JsonClassFieldReference")`
JsonClassField
===============
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[type](#type):[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `[scope](#scope):[JsonClassFieldScope](jsonclassfieldscope "haxe.display.JsonClassFieldScope")`
### `[pos](#pos):[JsonPos](jsonpos "haxe.display.JsonPos")`
### `[params](#params):[JsonTypeParameters](jsontypeparameters "haxe.display.JsonTypeParameters")`
### `[overloads](#overloads):[JsonClassFields](jsonclassfields "haxe.display.JsonClassFields")`
### `[name](#name):[String](../../string "String - The basic String class.")`
### `[meta](#meta):[JsonMetadata](jsonmetadata "haxe.display.JsonMetadata")`
### `[kind](#kind):[JsonFieldKind](jsonfieldkind "haxe.display.JsonFieldKind")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `[isPublic](#isPublic):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[isFinal](#isFinal):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `optional [expr](#expr):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{string:[String](../../string "String - The basic String class.")}>`
### `[doc](#doc):[JsonDoc](jsondoc "haxe.display.JsonDoc")`
JsonAbstractUnop
=================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[postFix](#postFix):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[op](#op):[JsonUnop](jsonunop "haxe.display.JsonUnop")`
### `[field](#field):[JsonClassFieldReference](jsonclassfieldreference "haxe.display.JsonClassFieldReference")`
JsonAnon
=========
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[status](#status):[JsonAnonStatus](jsonanonstatus "haxe.display.JsonAnonStatus")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `[fields](#fields):[JsonClassFields](jsonclassfields "haxe.display.JsonClassFields")`
JsonAnonStatus<T>
==================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[kind](#kind):[JsonAnonStatusKind](jsonanonstatuskind "haxe.display.JsonAnonStatusKind")<T>`
### `[args](#args):T`
JsonAnonStatusKind<T>([String](../../string "String - The basic String class."))
=================================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [AAbstractStatics](#AAbstractStatics):[JsonAnonStatusKind](jsonanonstatuskind "haxe.display.JsonAnonStatusKind")<[JsonTypePath](jsontypepath "haxe.display.JsonTypePath")> = "AAbstractStatics"`
### `inline read only [AClassStatics](#AClassStatics):[JsonAnonStatusKind](jsonanonstatuskind "haxe.display.JsonAnonStatusKind")<[JsonTypePath](jsontypepath "haxe.display.JsonTypePath")> = "AClassStatics"`
### `inline read only [AClosed](#AClosed):[JsonAnonStatusKind](jsonanonstatuskind "haxe.display.JsonAnonStatusKind")<T> = "AClosed"`
### `inline read only [AConst](#AConst):[JsonAnonStatusKind](jsonanonstatuskind "haxe.display.JsonAnonStatusKind")<T> = "AConst"`
### `inline read only [AEnumStatics](#AEnumStatics):[JsonAnonStatusKind](jsonanonstatuskind "haxe.display.JsonAnonStatusKind")<[JsonTypePath](jsontypepath "haxe.display.JsonTypePath")> = "AEnumStatics"`
### `inline read only [AExtend](#AExtend):[JsonAnonStatusKind](jsonanonstatuskind "haxe.display.JsonAnonStatusKind")<[JsonTypes](jsontypes "haxe.display.JsonTypes")> = "AExtend"`
### `inline read only [AOpened](#AOpened):[JsonAnonStatusKind](jsonanonstatuskind "haxe.display.JsonAnonStatusKind")<T> = "AOpened"`
JsonBinop<T>
=============
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[kind](#kind):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T>`
### `[args](#args):T`
JsonBinopKind<T>([String](../../string "String - The basic String class."))
============================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [OpAdd](#OpAdd):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpAdd"`
### `inline read only [OpAnd](#OpAnd):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpAnd"`
### `inline read only [OpArrow](#OpArrow):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpArrow"`
### `inline read only [OpAssign](#OpAssign):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpAssign"`
### `inline read only [OpAssignOp](#OpAssignOp):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<[JsonBinop](jsonbinop "haxe.display.JsonBinop")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>> = "OpAssignOp"`
### `inline read only [OpBoolAnd](#OpBoolAnd):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpBoolAnd"`
### `inline read only [OpBoolOr](#OpBoolOr):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpBoolOr"`
### `inline read only [OpDiv](#OpDiv):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpDiv"`
### `inline read only [OpEq](#OpEq):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpEq"`
### `inline read only [OpGt](#OpGt):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpGt"`
### `inline read only [OpGte](#OpGte):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpGte"`
### `inline read only [OpIn](#OpIn):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpIn"`
### `inline read only [OpInterval](#OpInterval):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpInterval"`
### `inline read only [OpLt](#OpLt):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpLt"`
### `inline read only [OpLte](#OpLte):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpLte"`
### `inline read only [OpMod](#OpMod):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpMod"`
### `inline read only [OpMult](#OpMult):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpMult"`
### `inline read only [OpNotEq](#OpNotEq):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpNotEq"`
### `inline read only [OpOr](#OpOr):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpOr"`
### `inline read only [OpShl](#OpShl):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpShl"`
### `inline read only [OpShr](#OpShr):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpShr"`
### `inline read only [OpSub](#OpSub):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpSub"`
### `inline read only [OpUShr](#OpUShr):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpUShr"`
### `inline read only [OpXor](#OpXor):[JsonBinopKind](jsonbinopkind "haxe.display.JsonBinopKind")<T> = "OpXor"`
JsonClass
==========
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[superClass](#superClass):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonTypePathWithParams](jsontypepathwithparams "haxe.display.JsonTypePathWithParams")>`
### `[statics](#statics):[JsonClassFields](jsonclassfields "haxe.display.JsonClassFields")`
### `[overrides](#overrides):[Array](../../array "Array")<[JsonClassFieldReference](jsonclassfieldreference "haxe.display.JsonClassFieldReference")>`
### `[kind](#kind):[JsonClassKind](jsonclasskind "haxe.display.JsonClassKind")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `[isInterface](#isInterface):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[isFinal](#isFinal):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[interfaces](#interfaces):[Array](../../array "Array")<[JsonTypePathWithParams](jsontypepathwithparams "haxe.display.JsonTypePathWithParams")>`
### `[init](#init):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonTExpr](jsontexpr "haxe.display.JsonTExpr")>`
### `[fields](#fields):[JsonClassFields](jsonclassfields "haxe.display.JsonClassFields")`
### `[constructor](#constructor):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonClassField](jsonclassfield "haxe.display.JsonClassField")>`
JsonClassFieldReference
========================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Alias
-----
*alias for* `[String](../../string "String - The basic String class.")`
JsonClassFieldScope([Int](../../int "Int - The standard Int type."))
=====================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [Constructor](#Constructor):[JsonClassFieldScope](jsonclassfieldscope "haxe.display.JsonClassFieldScope") = 2`
### `inline read only [Member](#Member):[JsonClassFieldScope](jsonclassfieldscope "haxe.display.JsonClassFieldScope") = 1`
### `inline read only [Static](#Static):[JsonClassFieldScope](jsonclassfieldscope "haxe.display.JsonClassFieldScope") = 0`
JsonClassFields
================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Alias
-----
*alias for* `[Array](../../array "Array")<[haxe.display.JsonClassField](jsonclassfield "haxe.display.JsonClassField")>`
JsonClassKind<T>
=================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[kind](#kind):[JsonClassKindKind](jsonclasskindkind "haxe.display.JsonClassKindKind")<T>`
### `[args](#args):T`
JsonClassKindKind<T>([String](../../string "String - The basic String class."))
================================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [KAbstractImpl](#KAbstractImpl):[JsonClassKindKind](jsonclasskindkind "haxe.display.JsonClassKindKind")<[JsonTypePath](jsontypepath "haxe.display.JsonTypePath")> = "KAbstractImpl"`
### `inline read only [KExpr](#KExpr):[JsonClassKindKind](jsonclasskindkind "haxe.display.JsonClassKindKind")<[JsonExpr](jsonexpr "haxe.display.JsonExpr")> = "KExpr"`
### `inline read only [KExtension](#KExtension):[JsonClassKindKind](jsonclasskindkind "haxe.display.JsonClassKindKind")<[JsonTypePathWithParams](jsontypepathwithparams "haxe.display.JsonTypePathWithParams")> = "KExtension"`
### `inline read only [KGeneric](#KGeneric):[JsonClassKindKind](jsonclasskindkind "haxe.display.JsonClassKindKind")<T> = "KGeneric"`
### `inline read only [KGenericBuild](#KGenericBuild):[JsonClassKindKind](jsonclasskindkind "haxe.display.JsonClassKindKind")<T> = "KGenericBuild"`
### `inline read only [KGenericInstance](#KGenericInstance):[JsonClassKindKind](jsonclasskindkind "haxe.display.JsonClassKindKind")<[JsonTypePathWithParams](jsontypepathwithparams "haxe.display.JsonTypePathWithParams")> = "KGenericInstance"`
### `inline read only [KMacroType](#KMacroType):[JsonClassKindKind](jsonclasskindkind "haxe.display.JsonClassKindKind")<T> = "KMacroType"`
### `inline read only [KNormal](#KNormal):[JsonClassKindKind](jsonclasskindkind "haxe.display.JsonClassKindKind")<T> = "KNormal"`
### `inline read only [KTypeParameter](#KTypeParameter):[JsonClassKindKind](jsonclasskindkind "haxe.display.JsonClassKindKind")<[JsonTypes](jsontypes "haxe.display.JsonTypes")> = "KTypeParameter"`
JsonDoc
========
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Alias
-----
*alias for* `[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
JsonEnum
=========
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[constructors](#constructors):[JsonEnumFields](jsonenumfields "haxe.display.JsonEnumFields")`
JsonEnumFields
===============
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Alias
-----
*alias for* `[Array](../../array "Array")<[haxe.display.JsonEnumField](jsonenumfield "haxe.display.JsonEnumField")>`
JsonExpr
=========
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Alias
-----
*alias for* `[haxe.display.JsonTodo](jsontodo "haxe.display.JsonTodo")`
JsonFieldKind<T>
=================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[kind](#kind):[JsonFieldKindKind](jsonfieldkindkind "haxe.display.JsonFieldKindKind")<T>`
### `[args](#args):T`
JsonFieldKindKind<T>([String](../../string "String - The basic String class."))
================================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [FMethod](#FMethod):[JsonFieldKindKind](jsonfieldkindkind "haxe.display.JsonFieldKindKind")<[JsonMethodKind](jsonmethodkind "haxe.display.JsonMethodKind")> = "FMethod"`
### `inline read only [FVar](#FVar):[JsonFieldKindKind](jsonfieldkindkind "haxe.display.JsonFieldKindKind")<{write:[JsonVarAccess](jsonvaraccess "haxe.display.JsonVarAccess")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>, read:[JsonVarAccess](jsonvaraccess "haxe.display.JsonVarAccess")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>}> = "FVar"`
JsonFunctionArgument
=====================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `optional [value](#value):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{string:[String](../../string "String - The basic String class.")}>`
### `[t](#t):[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `[opt](#opt):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[name](#name):[String](../../string "String - The basic String class.")`
JsonMethodKind([String](../../string "String - The basic String class."))
==========================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [MethDynamic](#MethDynamic):[JsonMethodKind](jsonmethodkind "haxe.display.JsonMethodKind") = "MethDynamic"`
### `inline read only [MethInline](#MethInline):[JsonMethodKind](jsonmethodkind "haxe.display.JsonMethodKind") = "MethInline"`
### `inline read only [MethMacro](#MethMacro):[JsonMethodKind](jsonmethodkind "haxe.display.JsonMethodKind") = "MethMacro"`
### `inline read only [MethNormal](#MethNormal):[JsonMethodKind](jsonmethodkind "haxe.display.JsonMethodKind") = "MethNormal"`
JsonFunctionSignature
======================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[ret](#ret):[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `[args](#args):[Array](../../array "Array")<[JsonFunctionArgument](jsonfunctionargument "haxe.display.JsonFunctionArgument")>`
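A function signature pairs its argument list with a return type. A hedged sketch of `JsonFunctionSignature` and `JsonFunctionArgument` as TypeScript interfaces — field names are taken from these pages, but the `JsonType` payloads are left as `unknown` and the sample values are invented for illustration:

```typescript
// Sketch of the documented field layout; not an official type definition.
interface JsonFunctionArgument {
  name: string;
  opt: boolean;
  t: unknown; // JsonType<Dynamic>
  value?: { string: string } | null; // optional default-value literal
}

interface JsonFunctionSignature {
  args: JsonFunctionArgument[];
  ret: unknown; // JsonType<Dynamic>
}

const sig: JsonFunctionSignature = {
  args: [{ name: "x", opt: false, t: {} }],
  ret: {},
};
console.log(sig.args.length); // 1
```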
JsonMetadata
=============
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Alias
-----
*alias for* `[Array](../../array "Array")<[haxe.display.JsonMetadataEntry](jsonmetadataentry "haxe.display.JsonMetadataEntry")>`
JsonMetadataEntry
==================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[pos](#pos):[JsonPos](jsonpos "haxe.display.JsonPos")`
### `[name](#name):[String](../../string "String - The basic String class.")`
### `[args](#args):[Array](../../array "Array")<[JsonExpr](jsonexpr "haxe.display.JsonExpr")>`
JsonModule
===========
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `final read only [types](#types):[Array](../../array "Array")<[JsonTypePath](jsontypepath "haxe.display.JsonTypePath")>`
### `final read only [sign](#sign):[String](../../string "String - The basic String class.")`
### `final read only [path](#path):[JsonModulePath](jsonmodulepath "haxe.display.JsonModulePath")`
### `final read only [id](#id):[Int](../../int "Int - The standard Int type.")`
### `final read only [file](#file):[String](../../string "String - The basic String class.")`
### `final read only [dependencies](#dependencies):[Array](../../array "Array")<[ModuleId](moduleid "haxe.display.ModuleId")>`
JsonModulePath
===============
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
### `[moduleName](#moduleName):[String](../../string "String - The basic String class.")`
### `optional [importStatus](#importStatus):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ImportStatus](importstatus "haxe.display.ImportStatus")>`
JsonModuleTypeKind<T>([String](../../string "String - The basic String class."))
=================================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [Abstract](#Abstract):[JsonModuleTypeKind](jsonmoduletypekind "haxe.display.JsonModuleTypeKind")<[JsonAbstract](jsonabstract "haxe.display.JsonAbstract")> = "abstract"`
### `inline read only [Class](#Class):[JsonModuleTypeKind](jsonmoduletypekind "haxe.display.JsonModuleTypeKind")<[JsonClass](jsonclass "haxe.display.JsonClass")> = "class"`
### `inline read only [Enum](#Enum):[JsonModuleTypeKind](jsonmoduletypekind "haxe.display.JsonModuleTypeKind")<[JsonEnum](jsonenum "haxe.display.JsonEnum")> = "enum"`
### `inline read only [Typedef](#Typedef):[JsonModuleTypeKind](jsonmoduletypekind "haxe.display.JsonModuleTypeKind")<[JsonTypedef](jsontypedef "haxe.display.JsonTypedef")> = "typedef"`
JsonModuleTypes
================
package [haxe.display](index)
*Available on all platforms*
Alias
-----
*alias for* `[Array](../../array "Array")<[haxe.display.JsonModuleType](jsonmoduletype "haxe.display.JsonModuleType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
JsonPackagePath
================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
JsonEnumField
==============
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[type](#type):[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `[pos](#pos):[JsonPos](jsonpos "haxe.display.JsonPos")`
### `[params](#params):[JsonTypeParameters](jsontypeparameters "haxe.display.JsonTypeParameters")`
### `[name](#name):[String](../../string "String - The basic String class.")`
### `[meta](#meta):[JsonMetadata](jsonmetadata "haxe.display.JsonMetadata")`
### `[index](#index):[Int](../../int "Int - The standard Int type.")`
### `[doc](#doc):[JsonDoc](jsondoc "haxe.display.JsonDoc")`
JsonModuleType<T>
==================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[pos](#pos):[JsonPos](jsonpos "haxe.display.JsonPos")`
### `[params](#params):[JsonTypeParameters](jsontypeparameters "haxe.display.JsonTypeParameters")`
### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
### `[name](#name):[String](../../string "String - The basic String class.")`
### `[moduleName](#moduleName):[String](../../string "String - The basic String class.")`
### `[meta](#meta):[JsonMetadata](jsonmetadata "haxe.display.JsonMetadata")`
### `[kind](#kind):[JsonModuleTypeKind](jsonmoduletypekind "haxe.display.JsonModuleTypeKind")<T>`
### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[doc](#doc):[JsonDoc](jsondoc "haxe.display.JsonDoc")`
### `[args](#args):T`
JsonPos
========
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[min](#min):[Int](../../int "Int - The standard Int type.")`
### `[max](#max):[Int](../../int "Int - The standard Int type.")`
### `[file](#file):[String](../../string "String - The basic String class.")`
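`min` and `max` are character offsets into `file`, so the length of the span they describe is their difference. A minimal sketch, assuming that interpretation — the `spanLength` helper is an illustration, not part of the Haxe API:

```typescript
// Sketch: JsonPos as documented above (file path plus min/max offsets).
interface JsonPos {
  file: string;
  min: number;
  max: number;
}

// Hypothetical helper: the number of characters the position covers.
function spanLength(p: JsonPos): number {
  return p.max - p.min;
}

console.log(spanLength({ file: "Main.hx", min: 10, max: 25 })); // 15
```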
JsonServerFile
===============
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `final read only [time](#time):[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
### `final read only [pack](#pack):[String](../../string "String - The basic String class.")`
### `final read only [moduleName](#moduleName):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
### `final read only [file](#file):[String](../../string "String - The basic String class.")`
JsonStaticFieldPath
====================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[typeName](#typeName):[String](../../string "String - The basic String class.")`
### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
### `[moduleName](#moduleName):[String](../../string "String - The basic String class.")`
### `optional [importStatus](#importStatus):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ImportStatus](importstatus "haxe.display.ImportStatus")>`
### `[fieldName](#fieldName):[String](../../string "String - The basic String class.")`
JsonTConstant<T>
=================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[kind](#kind):[JsonTConstantKind](jsontconstantkind "haxe.display.JsonTConstantKind")<T>`
### `[args](#args):T`
JsonTConstantKind<T>([String](../../string "String - The basic String class."))
================================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [TBool](#TBool):[JsonTConstantKind](jsontconstantkind "haxe.display.JsonTConstantKind")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")> = "TBool"`
### `inline read only [TFloat](#TFloat):[JsonTConstantKind](jsontconstantkind "haxe.display.JsonTConstantKind")<[String](../../string "String - The basic String class.")> = "TFloat"`
### `inline read only [TInt](#TInt):[JsonTConstantKind](jsontconstantkind "haxe.display.JsonTConstantKind")<[String](../../string "String - The basic String class.")> = "TInt"`
### `inline read only [TNull](#TNull):[JsonTConstantKind](jsontconstantkind "haxe.display.JsonTConstantKind")<T> = "TNull"`
### `inline read only [TString](#TString):[JsonTConstantKind](jsontconstantkind "haxe.display.JsonTConstantKind")<[String](../../string "String - The basic String class.")> = "TString"`
### `inline read only [TSuper](#TSuper):[JsonTConstantKind](jsontconstantkind "haxe.display.JsonTConstantKind")<T> = "TSuper"`
### `inline read only [TThis](#TThis):[JsonTConstantKind](jsontconstantkind "haxe.display.JsonTConstantKind")<T> = "TThis"`
JsonTExpr
==========
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Alias
-----
*alias for* `[haxe.display.JsonTodo](jsontodo "haxe.display.JsonTodo")`
JsonTodo
=========
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Alias
-----
*alias for* `[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
JsonType<T>
============
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[kind](#kind):[JsonTypeKind](jsontypekind "haxe.display.JsonTypeKind")<T>`
### `[args](#args):T`
JsonTypeKind<T>([String](../../string "String - The basic String class."))
===========================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [TAbstract](#TAbstract):[JsonTypeKind](jsontypekind "haxe.display.JsonTypeKind")<[JsonTypePathWithParams](jsontypepathwithparams "haxe.display.JsonTypePathWithParams")> = "TAbstract"`
### `inline read only [TAnonymous](#TAnonymous):[JsonTypeKind](jsontypekind "haxe.display.JsonTypeKind")<[JsonAnon](jsonanon "haxe.display.JsonAnon")> = "TAnonymous"`
### `inline read only [TDynamic](#TDynamic):[JsonTypeKind](jsontypekind "haxe.display.JsonTypeKind")<[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>> = "TDynamic"`
### `inline read only [TEnum](#TEnum):[JsonTypeKind](jsontypekind "haxe.display.JsonTypeKind")<[JsonTypePathWithParams](jsontypepathwithparams "haxe.display.JsonTypePathWithParams")> = "TEnum"`
### `inline read only [TFun](#TFun):[JsonTypeKind](jsontypekind "haxe.display.JsonTypeKind")<[JsonFunctionSignature](jsonfunctionsignature "haxe.display.JsonFunctionSignature")> = "TFun"`
### `inline read only [TInst](#TInst):[JsonTypeKind](jsontypekind "haxe.display.JsonTypeKind")<[JsonTypePathWithParams](jsontypepathwithparams "haxe.display.JsonTypePathWithParams")> = "TInst"`
### `inline read only [TMono](#TMono):[JsonTypeKind](jsontypekind "haxe.display.JsonTypeKind")<T> = "TMono"`
### `inline read only [TType](#TType):[JsonTypeKind](jsontypekind "haxe.display.JsonTypeKind")<[JsonTypePathWithParams](jsontypepathwithparams "haxe.display.JsonTypePathWithParams")> = "TType"`
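Like several types on these pages, `JsonType<T>` pairs a string `kind` tag with an `args` payload whose shape depends on the tag. That pattern maps naturally onto a discriminated union; a hedged sketch in TypeScript, with the payload shapes deliberately simplified to `unknown`:

```typescript
// Sketch of the kind/args tagged-union pattern; tag strings come from the
// JsonTypeKind constants above, payload shapes are placeholder assumptions.
type JsonType =
  | { kind: "TMono"; args: null }
  | { kind: "TInst" | "TEnum" | "TAbstract" | "TType"; args: unknown }
  | { kind: "TFun"; args: unknown }
  | { kind: "TDynamic"; args: unknown }
  | { kind: "TAnonymous"; args: unknown };

const t: JsonType = { kind: "TMono", args: null };
console.log(t.kind); // "TMono"
```

Narrowing on `kind` then tells a consumer which payload to expect in `args`.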
JsonTypeParameter
==================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[name](#name):[String](../../string "String - The basic String class.")`
### `[constraints](#constraints):[JsonTypes](jsontypes "haxe.display.JsonTypes")`
JsonTypedef
============
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[type](#type):[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
JsonTypeParameters
===================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Alias
-----
*alias for* `[Array](../../array "Array")<[haxe.display.JsonTypeParameter](jsontypeparameter "haxe.display.JsonTypeParameter")>`
JsonTypePathWithParams
=======================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[path](#path):[JsonTypePath](jsontypepath "haxe.display.JsonTypePath")`
### `[params](#params):[JsonTypes](jsontypes "haxe.display.JsonTypes")`
JsonTypes
==========
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Alias
-----
*alias for* `[Array](../../array "Array")<[haxe.display.JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
JsonUnop([String](../../string "String - The basic String class."))
====================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [OpDecrement](#OpDecrement):[JsonUnop](jsonunop "haxe.display.JsonUnop") = "OpDecrement"`
### `inline read only [OpIncrement](#OpIncrement):[JsonUnop](jsonunop "haxe.display.JsonUnop") = "OpIncrement"`
### `inline read only [OpNeg](#OpNeg):[JsonUnop](jsonunop "haxe.display.JsonUnop") = "OpNeg"`
### `inline read only [OpNegBits](#OpNegBits):[JsonUnop](jsonunop "haxe.display.JsonUnop") = "OpNegBits"`
### `inline read only [OpNot](#OpNot):[JsonUnop](jsonunop "haxe.display.JsonUnop") = "OpNot"`
JsonVarAccess<T>
=================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[kind](#kind):[JsonVarAccessKind](jsonvaraccesskind "haxe.display.JsonVarAccessKind")<T>`
### `[args](#args):T`
JsonVarAccessKind<T>([String](../../string "String - The basic String class."))
================================================================================
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Variables
---------
### `inline read only [AccCall](#AccCall):[JsonVarAccessKind](jsonvaraccesskind "haxe.display.JsonVarAccessKind")<T> = "AccCall"`
### `inline read only [AccCtor](#AccCtor):[JsonVarAccessKind](jsonvaraccesskind "haxe.display.JsonVarAccessKind")<T> = "AccCtor"`
### `inline read only [AccInline](#AccInline):[JsonVarAccessKind](jsonvaraccesskind "haxe.display.JsonVarAccessKind")<T> = "AccInline"`
### `inline read only [AccNever](#AccNever):[JsonVarAccessKind](jsonvaraccesskind "haxe.display.JsonVarAccessKind")<T> = "AccNever"`
### `inline read only [AccNo](#AccNo):[JsonVarAccessKind](jsonvaraccesskind "haxe.display.JsonVarAccessKind")<T> = "AccNo"`
### `inline read only [AccNormal](#AccNormal):[JsonVarAccessKind](jsonvaraccesskind "haxe.display.JsonVarAccessKind")<T> = "AccNormal"`
### `inline read only [AccRequire](#AccRequire):[JsonVarAccessKind](jsonvaraccesskind "haxe.display.JsonVarAccessKind")<{require:[String](../../string "String - The basic String class."), message:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>}> = "AccRequire"`
### `inline read only [AccResolve](#AccResolve):[JsonVarAccessKind](jsonvaraccesskind "haxe.display.JsonVarAccessKind")<T> = "AccResolve"`
LocalOrigin([Int](../../int "Int - The standard Int type."))
=============================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [Argument](#Argument):[LocalOrigin](localorigin "haxe.display.LocalOrigin") = 1`
### `inline read only [CatchVariable](#CatchVariable):[LocalOrigin](localorigin "haxe.display.LocalOrigin") = 4`
### `inline read only [ForVariable](#ForVariable):[LocalOrigin](localorigin "haxe.display.LocalOrigin") = 2`
### `inline read only [LocalFunction](#LocalFunction):[LocalOrigin](localorigin "haxe.display.LocalOrigin") = 5`
### `inline read only [LocalVariable](#LocalVariable):[LocalOrigin](localorigin "haxe.display.LocalOrigin") = 0`
### `inline read only [PatternVariable](#PatternVariable):[LocalOrigin](localorigin "haxe.display.LocalOrigin") = 3`
Keyword
========
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[name](#name):[KeywordKind](keywordkind "haxe.display.KeywordKind")`
KeywordKind([String](../../string "String - The basic String class."))
=======================================================================
package [haxe.display](index)
to [String](../../string "String - The basic String class.")
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [Abstract](#Abstract):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "abstract"`
### `inline read only [Break](#Break):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "break"`
### `inline read only [Case](#Case):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "case"`
### `inline read only [Cast](#Cast):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "cast"`
### `inline read only [Catch](#Catch):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "catch"`
### `inline read only [Class](#Class):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "class"`
### `inline read only [Continue](#Continue):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "continue"`
### `inline read only [Default](#Default):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "default"`
### `inline read only [Do](#Do):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "do"`
### `inline read only [Dynamic](#Dynamic):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "dynamic"`
### `inline read only [Else](#Else):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "else"`
### `inline read only [Enum](#Enum):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "enum"`
### `inline read only [Extends](#Extends):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "extends"`
### `inline read only [Extern](#Extern):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "extern"`
### `inline read only [Final](#Final):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "final"`
### `inline read only [For](#For):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "for"`
### `inline read only [Function](#Function):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "function"`
### `inline read only [If](#If):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "if"`
### `inline read only [Implements](#Implements):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "implements"`
### `inline read only [Import](#Import):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "import"`
### `inline read only [Inline](#Inline):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "inline"`
### `inline read only [Interface](#Interface):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "interface"`
### `inline read only [Macro](#Macro):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "macro"`
### `inline read only [New](#New):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "new"`
### `inline read only [Override](#Override):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "override"`
### `inline read only [Package](#Package):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "package"`
### `inline read only [Private](#Private):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "private"`
### `inline read only [Public](#Public):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "public"`
### `inline read only [Return](#Return):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "return"`
### `inline read only [Static](#Static):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "static"`
### `inline read only [Switch](#Switch):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "switch"`
### `inline read only [Throw](#Throw):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "throw"`
### `inline read only [Try](#Try):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "try"`
### `inline read only [Typedef](#Typedef):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "typedef"`
### `inline read only [Untyped](#Untyped):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "untyped"`
### `inline read only [Using](#Using):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "using"`
### `inline read only [Var](#Var):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "var"`
### `inline read only [While](#While):[KeywordKind](keywordkind "haxe.display.KeywordKind") = "while"`
© 2005–2020 Haxe Foundation
Licensed under an MIT license.
<https://api.haxe.org/haxe/display/KeywordKind.html>

Literal([String](../../string "String - The basic String class."))
===================================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [False](#False):[Literal](literal "haxe.display.Literal") = "false"`
### `inline read only [Null](#Null):[Literal](literal "haxe.display.Literal") = "null"`
### `inline read only [This](#This):[Literal](literal "haxe.display.Literal") = "this"`
### `inline read only [Trace](#Trace):[Literal](literal "haxe.display.Literal") = "trace"`
### `inline read only [True](#True):[Literal](literal "haxe.display.Literal") = "true"`
<https://api.haxe.org/haxe/display/Literal.html>

Location
=========
package [haxe.display](index)
import [haxe.display.Position](position)
*Available on all platforms*
Represents a location inside a resource, such as a line inside a text file.
Fields
------
### `[range](#range):[Range](range "haxe.display.Range - A range in a text document expressed as (1-based) start and end positions.")`
### `[file](#file):[FsPath](fspath "haxe.display.FsPath")`
<https://api.haxe.org/haxe/display/Location.html>

Metadata
=========
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[targets](#targets):[Array](../../array "Array")<[MetadataTarget](metadatatarget "haxe.display.MetadataTarget")>`
### `[platforms](#platforms):[Array](../../array "Array")<[Platform](platform "haxe.display.Platform")>`
### `[parameters](#parameters):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
### `[name](#name):[String](../../string "String - The basic String class.")`
### `optional [links](#links):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[String](../../string "String - The basic String class.")>>`
### `[internal](#internal):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[doc](#doc):[JsonDoc](jsondoc "haxe.display.JsonDoc")`
<https://api.haxe.org/haxe/display/Metadata.html>

MetadataTarget([String](../../string "String - The basic String class."))
==========================================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [Abstract](#Abstract):[MetadataTarget](metadatatarget "haxe.display.MetadataTarget") = "TAbstract"`
### `inline read only [AbstractField](#AbstractField):[MetadataTarget](metadatatarget "haxe.display.MetadataTarget") = "TAbstractField"`
### `inline read only [AnyField](#AnyField):[MetadataTarget](metadatatarget "haxe.display.MetadataTarget") = "TAnyField"`
### `inline read only [Class](#Class):[MetadataTarget](metadatatarget "haxe.display.MetadataTarget") = "TClass"`
### `inline read only [ClassField](#ClassField):[MetadataTarget](metadatatarget "haxe.display.MetadataTarget") = "TClassField"`
### `inline read only [Enum](#Enum):[MetadataTarget](metadatatarget "haxe.display.MetadataTarget") = "TEnum"`
### `inline read only [Expr](#Expr):[MetadataTarget](metadatatarget "haxe.display.MetadataTarget") = "TExpr"`
### `inline read only [TypeParameter](#TypeParameter):[MetadataTarget](metadatatarget "haxe.display.MetadataTarget") = "TTypeParameter"`
### `inline read only [Typedef](#Typedef):[MetadataTarget](metadatatarget "haxe.display.MetadataTarget") = "TTypedef"`
<https://api.haxe.org/haxe/display/MetadataTarget.html>

Methods
========
package [haxe.display](index)
import [haxe.display.Protocol](protocol)
*Available on all platforms*
Static variables
----------------
### `static inline read only [Initialize](#Initialize):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[InitializeParams](initializeparams "haxe.display.InitializeParams"), [InitializeResult](initializeresult "haxe.display.InitializeResult")> = new HaxeRequestMethod<InitializeParams,InitializeResult>("initialize")`
The initialize request is sent from the client to Haxe to determine the capabilities.
<https://api.haxe.org/haxe/display/Methods.html>

Module
=======
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[path](#path):[JsonModulePath](jsonmodulepath "haxe.display.JsonModulePath")`
<https://api.haxe.org/haxe/display/Module.html>

ModuleParams
=============
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `final read only [signature](#signature):[String](../../string "String - The basic String class.")`
### `final read only [path](#path):[String](../../string "String - The basic String class.")`
<https://api.haxe.org/haxe/display/ModuleParams.html>

NoData
=======
package [haxe.display](index)
import [haxe.display.Protocol](protocol)
*Available on all platforms*
<https://api.haxe.org/haxe/display/NoData.html>

ModuleId
=========
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Fields
------
### `final read only [sign](#sign):[String](../../string "String - The basic String class.")`
### `final read only [path](#path):[String](../../string "String - The basic String class.")`
<https://api.haxe.org/haxe/display/ModuleId.html>

Package
========
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[path](#path):[JsonPackagePath](jsonpackagepath "haxe.display.JsonPackagePath")`
<https://api.haxe.org/haxe/display/Package.html>

PatternCompletion<T>
=====================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[isOutermostPattern](#isOutermostPattern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `optional [expectedTypeFollowed](#expectedTypeFollowed):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonType](jsontype "haxe.display.JsonType")<T>>`
### `optional [expectedType](#expectedType):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonType](jsontype "haxe.display.JsonType")<T>>`
<https://api.haxe.org/haxe/display/PatternCompletion.html>

Platform([String](../../string "String - The basic String class."))
====================================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [Cpp](#Cpp):[Platform](platform "haxe.display.Platform") = "cpp"`
### `inline read only [Cross](#Cross):[Platform](platform "haxe.display.Platform") = "cross"`
### `inline read only [Cs](#Cs):[Platform](platform "haxe.display.Platform") = "cs"`
### `inline read only [Eval](#Eval):[Platform](platform "haxe.display.Platform") = "eval"`
### `inline read only [Flash](#Flash):[Platform](platform "haxe.display.Platform") = "flash"`
### `inline read only [Hl](#Hl):[Platform](platform "haxe.display.Platform") = "hl"`
### `inline read only [Java](#Java):[Platform](platform "haxe.display.Platform") = "java"`
### `inline read only [Js](#Js):[Platform](platform "haxe.display.Platform") = "js"`
### `inline read only [Lua](#Lua):[Platform](platform "haxe.display.Platform") = "lua"`
### `inline read only [Neko](#Neko):[Platform](platform "haxe.display.Platform") = "neko"`
### `inline read only [Php](#Php):[Platform](platform "haxe.display.Platform") = "php"`
### `inline read only [Python](#Python):[Platform](platform "haxe.display.Platform") = "python"`
<https://api.haxe.org/haxe/display/Platform.html>

Position
=========
package [haxe.display](index)
*Available on all platforms*
Position in a text document expressed as 1-based line and character offset.
Fields
------
### `[line](#line):[Int](../../int "Int - The standard Int type.")`
Line position in a document (1-based).

### `[character](#character):[Int](../../int "Int - The standard Int type.")`
Character offset on a line in a document (1-based).
<https://api.haxe.org/haxe/display/Position.html>

PositionParams
===============
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
General types
Fields
------
### `[offset](#offset):[Int](../../int "Int - The standard Int type.")`
Unicode character offset in the file.

### `[file](#file):[FsPath](fspath "haxe.display.FsPath")`

### `optional [contents](#contents):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
<https://api.haxe.org/haxe/display/PositionParams.html>

Range
======
package [haxe.display](index)
import [haxe.display.Position](position)
*Available on all platforms*
A range in a text document expressed as (1-based) start and end positions.
Fields
------
### `[start](#start):[Position](position "haxe.display.Position - Position in a text document expressed as 1-based line and character offset.")`
The range's start position.

### `[end](#end):[Position](position "haxe.display.Position - Position in a text document expressed as 1-based line and character offset.")`

The range's end position.
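Since `Position` and `Range` are plain structures, a range can be written as an anonymous object literal. A minimal sketch (the `Main` wrapper and the concrete numbers are illustrative):

```haxe
import haxe.display.Position; // module declaring Position, Range and Location

class Main {
	static function main() {
		// Covers characters 1–10 on the first line;
		// both line and character are 1-based.
		var r:Range = {
			start: {line: 1, character: 1},
			end: {line: 1, character: 10}
		};
		trace(r.start.line);
	}
}
```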
<https://api.haxe.org/haxe/display/Range.html>

Response<T>
============
package [haxe.display](index)
import [haxe.display.Protocol](protocol)
*Available on all platforms*
Fields
------
### `optional final read only [timestamp](#timestamp):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")>`

UNIX timestamp at the moment the data was sent.

### `optional final read only [timers](#timers):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Timer](timer "haxe.display.Timer")>`

Only sent if `--times` is enabled.

### `optional final read only [result](#result):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
<https://api.haxe.org/haxe/display/Response.html>

ServerMethods
==============
package [haxe.display](index)
import [haxe.display.Server](server)
*Available on all platforms*
Static variables
----------------
### `static inline read only [Configure](#Configure):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[ConfigureParams](configureparams "haxe.display.ConfigureParams"), [Response](response "haxe.display.Response")<[NoData](nodata "haxe.display.NoData")>> = new HaxeRequestMethod<ConfigureParams,Response<NoData>>("server/configure")`
### `static inline read only [ContextMemory](#ContextMemory):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[ContextParams](contextparams "haxe.display.ContextParams"), [Response](response "haxe.display.Response")<[HaxeContextMemoryResult](haxecontextmemoryresult "haxe.display.HaxeContextMemoryResult")>> = new HaxeRequestMethod<ContextParams,Response<HaxeContextMemoryResult>>("server/memory/context")`
### `static inline read only [Contexts](#Contexts):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[NoData](nodata "haxe.display.NoData"), [Response](response "haxe.display.Response")<[Array](../../array "Array")<[HaxeServerContext](haxeservercontext "haxe.display.HaxeServerContext")>>> = new HaxeRequestMethod<NoData,Response<Array<HaxeServerContext>>>("server/contexts")`
### `static inline read only [Files](#Files):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[ContextParams](contextparams "haxe.display.ContextParams"), [Response](response "haxe.display.Response")<[Array](../../array "Array")<[JsonServerFile](jsonserverfile "haxe.display.JsonServerFile")>>> = new HaxeRequestMethod<ContextParams,Response<Array<JsonServerFile>>>("server/files")`
### `static inline read only [Invalidate](#Invalidate):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[FileParams](fileparams "haxe.display.FileParams"), [Response](response "haxe.display.Response")<[NoData](nodata "haxe.display.NoData")>> = new HaxeRequestMethod<FileParams,Response<NoData>>("server/invalidate")`
### `static inline read only [Memory](#Memory):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[NoData](nodata "haxe.display.NoData"), [Response](response "haxe.display.Response")<[HaxeMemoryResult](haxememoryresult "haxe.display.HaxeMemoryResult")>> = new HaxeRequestMethod<NoData,Response<HaxeMemoryResult>>("server/memory")`
### `static inline read only [Module](#Module):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[ModuleParams](moduleparams "haxe.display.ModuleParams"), [Response](response "haxe.display.Response")<[JsonModule](jsonmodule "haxe.display.JsonModule")>> = new HaxeRequestMethod<ModuleParams,Response<JsonModule>>("server/module")`
### `static inline read only [ModuleCreated](#ModuleCreated):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[FileParams](fileparams "haxe.display.FileParams"), [Response](response "haxe.display.Response")<[NoData](nodata "haxe.display.NoData")>> = new HaxeRequestMethod<FileParams,Response<NoData>>("server/moduleCreated")`
### `static inline read only [ModuleMemory](#ModuleMemory):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[ModuleParams](moduleparams "haxe.display.ModuleParams"), [Response](response "haxe.display.Response")<[HaxeModuleMemoryResult](haxemodulememoryresult "haxe.display.HaxeModuleMemoryResult")>> = new HaxeRequestMethod<ModuleParams,Response<HaxeModuleMemoryResult>>("server/memory/module")`
### `static inline read only [Modules](#Modules):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[ContextParams](contextparams "haxe.display.ContextParams"), [Response](response "haxe.display.Response")<[Array](../../array "Array")<[String](../../string "String - The basic String class.")>>> = new HaxeRequestMethod<ContextParams,Response<Array<String>>>("server/modules")`
### `static inline read only [ReadClassPaths](#ReadClassPaths):[HaxeRequestMethod](haxerequestmethod "haxe.display.HaxeRequestMethod")<[NoData](nodata "haxe.display.NoData"), [Response](response "haxe.display.Response")<{files:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Int](../../int "Int - The standard Int type.")>}>> = new HaxeRequestMethod<NoData,Response<{ var ?files : Int}>>("server/readClassPaths")`
This request is sent from the client to Haxe to explore the class paths. This effectively creates a cache for toplevel completion.
<https://api.haxe.org/haxe/display/ServerMethods.html>

JsonTypePath
=============
package [haxe.display](index)
import [haxe.display.JsonModuleTypes](jsonmoduletypes)
*Available on all platforms*
Fields
------
### `[typeName](#typeName):[String](../../string "String - The basic String class.")`
### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
### `[moduleName](#moduleName):[String](../../string "String - The basic String class.")`
### `optional [importStatus](#importStatus):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ImportStatus](importstatus "haxe.display.ImportStatus")>`
<https://api.haxe.org/haxe/display/JsonTypePath.html>

SignatureItem
==============
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[signatures](#signatures):[Array](../../array "Array")<[SignatureInformation](signatureinformation "haxe.display.SignatureInformation")>`
### `[kind](#kind):[SignatureItemKind](signatureitemkind "haxe.display.SignatureItemKind")`
### `[activeSignature](#activeSignature):[Int](../../int "Int - The standard Int type.")`
### `[activeParameter](#activeParameter):[Int](../../int "Int - The standard Int type.")`
<https://api.haxe.org/haxe/display/SignatureItem.html>

SignatureHelpParams
====================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
SignatureHelp
Fields
------
### `[wasAutoTriggered](#wasAutoTriggered):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[offset](#offset):[Int](../../int "Int - The standard Int type.")`
Unicode character offset in the file.

### `[file](#file):[FsPath](fspath "haxe.display.FsPath")`

### `optional [contents](#contents):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
<https://api.haxe.org/haxe/display/SignatureHelpParams.html>

SignatureHelpResult
====================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Alias
-----
*alias for* `[haxe.display.Response](response "haxe.display.Response")<[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[haxe.display.SignatureItem](signatureitem "haxe.display.SignatureItem")>>`
<https://api.haxe.org/haxe/display/SignatureHelpResult.html>

SignatureInformation
=====================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[ret](#ret):[JsonType](jsontype "haxe.display.JsonType")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>`
### `optional [documentation](#documentation):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
### `[args](#args):[Array](../../array "Array")<[JsonFunctionArgument](jsonfunctionargument "haxe.display.JsonFunctionArgument")>`
<https://api.haxe.org/haxe/display/SignatureInformation.html>

SignatureItemKind([Int](../../int "Int - The standard Int type."))
===================================================================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Variables
---------
### `inline read only [ArrayAccess](#ArrayAccess):[SignatureItemKind](signatureitemkind "haxe.display.SignatureItemKind") = 1`
### `inline read only [Call](#Call):[SignatureItemKind](signatureitemkind "haxe.display.SignatureItemKind") = 0`
<https://api.haxe.org/haxe/display/SignatureItemKind.html>

Timer
======
package [haxe.display](index)
import [haxe.display.Protocol](protocol)
*Available on all platforms*
Fields
------
### `final read only [time](#time):[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
### `optional final read only [percentTotal](#percentTotal):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")>`
### `optional final read only [percentParent](#percentParent):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")>`
### `optional final read only [path](#path):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
### `final read only [name](#name):[String](../../string "String - The basic String class.")`
### `optional final read only [info](#info):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
### `optional final read only [children](#children):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[Timer](timer "haxe.display.Timer")>>`
### `optional final read only [calls](#calls):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Int](../../int "Int - The standard Int type.")>`
<https://api.haxe.org/haxe/display/Timer.html>

ToplevelCompletion<T>
======================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `optional [expectedTypeFollowed](#expectedTypeFollowed):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonType](jsontype "haxe.display.JsonType")<T>>`
### `optional [expectedType](#expectedType):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[JsonType](jsontype "haxe.display.JsonType")<T>>`
<https://api.haxe.org/haxe/display/ToplevelCompletion.html>

Version
========
package [haxe.display](index)
import [haxe.display.Protocol](protocol)
*Available on all platforms*
Represents a semantic version, see <https://semver.org/>.
Fields
------
### `optional final read only [pre](#pre):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
### `final read only [patch](#patch):[Int](../../int "Int - The standard Int type.")`
### `final read only [minor](#minor):[Int](../../int "Int - The standard Int type.")`
### `final read only [major](#major):[Int](../../int "Int - The standard Int type.")`
### `optional final read only [build](#build):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
<https://api.haxe.org/haxe/display/Version.html>

BalancedTree<K, V>
===================
package [haxe.ds](index)
implements [IMap](../imap "haxe.IMap")<K, V>

extended by [EnumValueMap](enumvaluemap "haxe.ds.EnumValueMap - EnumValueMap allows mapping of enum value keys to arbitrary values.")
*Available on all platforms*
BalancedTree allows key-value mapping with arbitrary keys, as long as they can be ordered. By default, `[Reflect.compare](../../reflect#compare)` is used in the `compare` method, which can be overridden in subclasses.
Operations have a logarithmic average and worst-case cost.
Iteration over keys and values, using `keys` and `iterator` respectively, is in-order.
Constructor
-----------
### `[new](#new)()`
Creates a new BalancedTree, which is initially empty.
Methods
-------
### `[clear](#clear)():[Void](../../void "Void - The standard Void type.")`
Removes all keys from `this` BalancedTree.
### `[copy](#copy)():[BalancedTree](balancedtree "haxe.ds.BalancedTree - BalancedTree allows key-value mapping with arbitrary keys, as long as they can be ordered.")<K, V>`
### `[exists](#exists)(key:K):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `key` is bound to a value.
This method returns true even if `key` is bound to null.
If `key` is null, the result is unspecified.
### `[get](#get)(key:K):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<V>`
Returns the value `key` is bound to.
If `key` is not bound to any value, `null` is returned.
If `key` is null, the result is unspecified.
### `[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<V>`
Iterates over the bound values of `this` BalancedTree.
This operation is performed in-order.
### `inline [keyValueIterator](#keyValueIterator)():[KeyValueIterator](../../keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<K, V>`

See `[Map.keyValueIterator](../../map#keyValueIterator)`.
### `[keys](#keys)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<K>`
Iterates over the keys of `this` BalancedTree.
This operation is performed in-order.
### `[remove](#remove)(key:K):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Removes the current binding of `key`.
If `key` has no binding, `this` BalancedTree is unchanged and false is returned.
Otherwise the binding of `key` is removed and true is returned.
If `key` is null, the result is unspecified.
### `[set](#set)(key:K, value:V):[Void](../../void "Void - The standard Void type.")`
Binds `key` to `value`.
If `key` is already bound to a value, that binding disappears.
If `key` is null, the result is unspecified.
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
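The operations above can be sketched as a minimal example (the key/value choices and traced values are illustrative):

```haxe
import haxe.ds.BalancedTree;

class Main {
	static function main() {
		var tree = new BalancedTree<String, Int>();
		tree.set("one", 1);
		tree.set("two", 2);
		tree.set("three", 3);
		trace(tree.get("two")); // 2
		// Key iteration is in-order; String keys are compared
		// via Reflect.compare, i.e. alphabetically here.
		for (k in tree.keys()) trace(k); // one, three, two
		tree.remove("one");
		trace(tree.exists("one")); // false
	}
}
```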
<https://api.haxe.org/haxe/ds/BalancedTree.html>

Either<L, R>
=============
package [haxe.ds](index)
*Available on all platforms*
Either represents values which are either of type `L` (Left) or type `R` (Right).
Values
------
### `Left(v:L)`
### `Right(v:R)`
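`Either` values are typically consumed with a `switch`; a minimal sketch (the `parseIntSafe` helper is made up for illustration):

```haxe
import haxe.ds.Either;

class Main {
	// Left carries the error description, Right the parsed value.
	static function parseIntSafe(s:String):Either<String, Int> {
		var n = Std.parseInt(s);
		return n == null ? Left("not a number: " + s) : Right(n);
	}

	static function main() {
		switch (parseIntSafe("42")) {
			case Left(err): trace(err);
			case Right(value): trace(value); // 42
		}
	}
}
```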
<https://api.haxe.org/haxe/ds/Either.html>

EnumValueMap<K, V>
===================
package [haxe.ds](index)
extends [BalancedTree](balancedtree "haxe.ds.BalancedTree - BalancedTree allows key-value mapping with arbitrary keys, as long as they can be ordered.")
implements [IMap](../imap "haxe.IMap")<K, V>

*Available on all platforms*
EnumValueMap allows mapping of enum value keys to arbitrary values.
Keys are compared by value and recursively over their parameters. If any parameter is not an enum value, `[Reflect.compare](../../reflect#compare)` is used to compare them.
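A minimal usage sketch (the `Shape` enum is made up for illustration; the map API is inherited from `BalancedTree`):

```haxe
import haxe.ds.EnumValueMap;

enum Shape {
	Circle(r:Float);
	Rect(w:Float, h:Float);
}

class Main {
	static function main() {
		var names = new EnumValueMap<Shape, String>();
		names.set(Circle(1.0), "unit circle");
		// Keys are compared by value (recursively over parameters),
		// so a structurally equal enum value finds the entry.
		trace(names.get(Circle(1.0))); // unit circle
		trace(names.exists(Rect(2, 3))); // false
	}
}
```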
Constructor
-----------
### `[new](#new)()`
*Available on php, js, neko, cpp, macro, lua, python, hl, flash*
Methods
-------
<https://api.haxe.org/haxe/ds/EnumValueMap.html>

GenericCell<T>
===============
package [haxe.ds](index)
import [haxe.ds.GenericStack](genericstack)
*Available on all platforms*
A cell of `[haxe.ds.GenericStack](genericstack#GenericStack)`.
See also:
* <https://haxe.org/manual/std-GenericStack.html>

Constructor
-----------
### `[new](#new)(elt:T, next:[GenericCell](genericcell "haxe.ds.GenericCell - A cell of haxe.")<T>)`
Variables
---------
### `[elt](#elt):T`
### `[next](#next):[GenericCell](genericcell "haxe.ds.GenericCell - A cell of haxe.")<T>`
<https://api.haxe.org/haxe/ds/GenericCell.html>

GenericStack<T>
================
package [haxe.ds](index)
*Available on all platforms*
A stack of elements.
This class is generic, which means one type is generated for each type parameter T on static targets. For example:
* `new [GenericStack](genericstack#GenericStack)<[Int](../../int)>()` generates `GenericStack_Int`
* `new [GenericStack](genericstack#GenericStack)<[String](../../string)>()` generates `GenericStack_String`
The generated name is an implementation detail and should not be relied upon.
See also:
* <https://haxe.org/manual/std-GenericStack.html>

Constructor
-----------
### `[new](#new)()`
Creates a new empty GenericStack.
Variables
---------
### `[head](#head):[GenericCell](genericcell "haxe.ds.GenericCell - A cell of haxe.")<T>`
Methods
-------
### `inline [add](#add)(item:T):[Void](../../void "Void - The standard Void type.")`
Pushes element `item` onto the stack.
### `inline [first](#first)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
Returns the topmost stack element without removing it.
If the stack is empty, null is returned.
### `inline [isEmpty](#isEmpty)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if the stack is empty.
### `[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<T>`
Returns an iterator over the elements of `this` GenericStack.
### `inline [pop](#pop)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
Returns the topmost stack element and removes it.
If the stack is empty, null is returned.
### `[remove](#remove)(v:T):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Removes the first element which is equal to `v` according to the `==` operator.
This method traverses the stack until it finds a matching element and unlinks it, returning true.
If no matching element is found, false is returned.
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
Returns a String representation of `this` GenericStack.
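The methods above can be sketched as a minimal example (the pushed values are illustrative):

```haxe
import haxe.ds.GenericStack;

class Main {
	static function main() {
		var stack = new GenericStack<Int>();
		stack.add(1);
		stack.add(2);
		stack.add(3);
		trace(stack.first());   // 3 (topmost element, not removed)
		trace(stack.pop());     // 3 (topmost element, removed)
		trace(stack.pop());     // 2
		trace(stack.isEmpty()); // false, 1 is still on the stack
	}
}
```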
<https://api.haxe.org/haxe/ds/GenericStack.html>

HashMap<K, V>(HashMapData<K, V>)
=================================
package [haxe.ds](index)
*Available on all platforms*
HashMap allows mapping of hashable objects to arbitrary values.
See `[Map](../../map)` for documentation details.
See also:

* <https://haxe.org/manual/std-Map.html>

Methods
-------
### `inline[clear](#clear)():[Void](../../void "Void - The standard Void type.")`
See `[Map.clear](../../map#clear)`
### `[copy](#copy)():[HashMap](hashmap "haxe.ds.HashMap - HashMap allows mapping of hashable objects to arbitrary values.")<K, V>`
See `[Map.copy](../../map#copy)`
### `inline[exists](#exists)(k:K):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Map.exists](../../map#exists)`
### `inline[get](#get)(k:K):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<V>`
See `[Map.get](../../map#get)`
### `inline[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<V>`
See `[Map.iterator](../../map#iterator)`
### `inline[keyValueIterator](#keyValueIterator)():[HashMapKeyValueIterator](../iterators/hashmapkeyvalueiterator "haxe.iterators.HashMapKeyValueIterator")<K, V>`
See `[Map.keyValueIterator](../../map#keyValueIterator)`
### `inline[keys](#keys)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<K>`
See `[Map.keys](../../map#keys)`
### `inline[remove](#remove)(k:K):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Map.remove](../../map#remove)`
### `inline[set](#set)(k:K, v:V):[Void](../../void "Void - The standard Void type.")`
See `[Map.set](../../map#set)`
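HashMap requires its keys to expose a `hashCode():Int` method; a minimal sketch (the `Point` key type is made up for illustration):

```haxe
import haxe.ds.HashMap;

class Point {
    public var x:Int;
    public var y:Int;
    public function new(x:Int, y:Int) { this.x = x; this.y = y; }
    // HashMap constrains K to types providing hashCode():Int
    public function hashCode():Int return x * 31 + y;
}

class Main {
    static function main() {
        var m = new HashMap<Point, String>();
        m.set(new Point(1, 2), "a");
        // lookup goes through hashCode(), so a distinct but equal-hashing
        // Point addresses the same entry
        trace(m.get(new Point(1, 2))); // "a"
        trace(m.exists(new Point(3, 4))); // false
    }
}
```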
<https://api.haxe.org/haxe/ds/HashMap.html>

IntMap<T>
==========
package [haxe.ds](index)
implements [IMap](../imap "haxe.IMap")<[Int](../../int "Int - The standard Int type."), T>
*Available on all platforms*
IntMap allows mapping of Int keys to arbitrary values.
See `[Map](../../map)` for documentation details.
See also:

* <https://haxe.org/manual/std-Map.html>

Constructor
-----------
### `[new](#new)()`
Creates a new IntMap.
Methods
-------
### `[clear](#clear)():[Void](../../void "Void - The standard Void type.")`
See `[Map.clear](../../map#clear)`
### `[copy](#copy)():[IntMap](intmap "haxe.ds.IntMap - IntMap allows mapping of Int keys to arbitrary values.")<T>`
See `[Map.copy](../../map#copy)`
### `[exists](#exists)(key:[Int](../../int "Int - The standard Int type.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Map.exists](../../map#exists)`
### `[get](#get)(key:[Int](../../int "Int - The standard Int type.")):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
See `[Map.get](../../map#get)`
### `[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<T>`
See `[Map.iterator](../../map#iterator)`
(cs, java) Implementation detail: Do not `set()` any new value while iterating, as it may cause a resize, which will break iteration.
### `inline[keyValueIterator](#keyValueIterator)():[KeyValueIterator](../../keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<[Int](../../int "Int - The standard Int type."), T>`
See `[Map.keyValueIterator](../../map#keyValueIterator)`
### `[keys](#keys)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<[Int](../../int "Int - The standard Int type.")>`
See `[Map.keys](../../map#keys)`
(cs, java) Implementation detail: Do not `set()` any new value while iterating, as it may cause a resize, which will break iteration.
### `[remove](#remove)(key:[Int](../../int "Int - The standard Int type.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Map.remove](../../map#remove)`
### `[set](#set)(key:[Int](../../int "Int - The standard Int type."), value:T):[Void](../../void "Void - The standard Void type.")`
See `[Map.set](../../map#set)`
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
See `[Map.toString](../../map#toString)`
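A short IntMap sketch (the traced values are illustrative):

```haxe
import haxe.ds.IntMap;

class Main {
    static function main() {
        var m = new IntMap<String>();
        m.set(1, "one");
        m.set(2, "two");
        trace(m.get(1));    // "one"
        trace(m.get(3));    // null, key is absent
        trace(m.exists(2)); // true
        for (k => v in m) trace('$k => $v'); // key-value iteration
        m.remove(1);
        trace(m.toString());
    }
}
```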
<https://api.haxe.org/haxe/ds/IntMap.html>

ListSort
=========
package [haxe.ds](index)
*Available on all platforms*
ListSort provides a stable implementation of merge sort through its `sort` method. It has a O(N.log(N)) complexity and does not require additional memory allocation.
Static methods
--------------
### `staticinline[sort](#sort)<T>(list:T, cmp:(T, T) ‑> [Int](../../int "Int - The standard Int type.")):T`
Sorts the list `list` according to the comparison function `cmp`, where `cmp(x,y)` returns 0 if `x == y`, a positive Int if `x > y` and a negative Int if `x < y`.
This operation modifies `list` in place and returns its head once modified. The `prev` of the head is set to the tail of the sorted list.
If `list` or `cmp` are null, the result is unspecified.
### `staticinline[sortSingleLinked](#sortSingleLinked)<T>(list:T, cmp:(T, T) ‑> [Int](../../int "Int - The standard Int type.")):T`
Same as `sort` but on single linked list.
<https://api.haxe.org/haxe/ds/ListSort.html>

ObjectMap<K, V>
================
package [haxe.ds](index)
implements [IMap](../imap "haxe.IMap")<K, V>
*Available on all platforms*
ObjectMap allows mapping of object keys to arbitrary values.
On static targets, the keys are considered to be strong references. Refer to `[haxe.ds.WeakMap](weakmap#WeakMap)` for a weak reference version.
See `[Map](../../map)` for documentation details.
See also:

* <https://haxe.org/manual/std-Map.html>

Constructor
-----------
### `[new](#new)()`
Creates a new ObjectMap.
Methods
-------
### `[clear](#clear)():[Void](../../void "Void - The standard Void type.")`
See `[Map.clear](../../map#clear)`
### `[copy](#copy)():[ObjectMap](objectmap "haxe.ds.ObjectMap - ObjectMap allows mapping of object keys to arbitrary values.")<K, V>`
*Available on cs, php, js, neko, cpp, macro, java, python, flash*
See `[Map.copy](../../map#copy)`
### `[copy](#copy)():[ObjectMap](objectmap "haxe.ds.ObjectMap - ObjectMap allows mapping of object keys to arbitrary values.")<A, B>`
*Available on lua*
See `[Map.copy](../../map#copy)`
### `[copy](#copy)():[ObjectMap](objectmap "haxe.ds.ObjectMap - ObjectMap allows mapping of object keys to arbitrary values.")<K, T>`
*Available on hl*
See `[Map.copy](../../map#copy)`
### `[exists](#exists)(key:K):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
*Available on cs, php, js, neko, cpp, macro, java, python, hl, flash*
See `[Map.exists](../../map#exists)`
### `[exists](#exists)(key:A):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
*Available on lua*
See `[Map.exists](../../map#exists)`
### `[get](#get)(key:K):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<V>`
*Available on cs, php, js, neko, cpp, macro, java, python, flash*
See `[Map.get](../../map#get)`
### `[get](#get)(key:A):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<B>`
*Available on lua*
See `[Map.get](../../map#get)`
### `[get](#get)(key:K):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
*Available on hl*
See `[Map.get](../../map#get)`
### `[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<V>`
*Available on cs, php, js, neko, cpp, macro, java, python, flash*
See `[Map.iterator](../../map#iterator)`
(cs, java) Implementation detail: Do not `set()` any new value while iterating, as it may cause a resize, which will break iteration.
### `[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<B>`
*Available on lua*
See `[Map.iterator](../../map#iterator)`
(cs, java) Implementation detail: Do not `set()` any new value while iterating, as it may cause a resize, which will break iteration.
### `[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<T>`
*Available on hl*
See `[Map.iterator](../../map#iterator)`
(cs, java) Implementation detail: Do not `set()` any new value while iterating, as it may cause a resize, which will break iteration.
### `inline[keyValueIterator](#keyValueIterator)():[KeyValueIterator](../../keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<K, V>`
*Available on cs, php, js, neko, cpp, macro, java, python, flash*
See `[Map.keyValueIterator](../../map#keyValueIterator)`
### `inline[keyValueIterator](#keyValueIterator)():[KeyValueIterator](../../keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<A, B>`
*Available on lua*
See `[Map.keyValueIterator](../../map#keyValueIterator)`
### `inline[keyValueIterator](#keyValueIterator)():[KeyValueIterator](../../keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<K, T>`
*Available on hl*
See `[Map.keyValueIterator](../../map#keyValueIterator)`
### `[keys](#keys)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<K>`
*Available on cs, php, js, neko, cpp, macro, java, python, hl, flash*
See `[Map.keys](../../map#keys)`
(cs, java) Implementation detail: Do not `set()` any new value while iterating, as it may cause a resize, which will break iteration.
### `[keys](#keys)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<A>`
*Available on lua*
See `[Map.keys](../../map#keys)`
(cs, java) Implementation detail: Do not `set()` any new value while iterating, as it may cause a resize, which will break iteration.
### `[remove](#remove)(key:K):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
*Available on cs, php, js, neko, cpp, macro, java, python, hl, flash*
See `[Map.remove](../../map#remove)`
### `[remove](#remove)(key:A):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
*Available on lua*
See `[Map.remove](../../map#remove)`
### `[set](#set)(key:K, value:V):[Void](../../void "Void - The standard Void type.")`
*Available on cs, php, js, neko, cpp, macro, java, python, flash*
See `[Map.set](../../map#set)`
### `[set](#set)(key:A, value:B):[Void](../../void "Void - The standard Void type.")`
*Available on lua*
See `[Map.set](../../map#set)`
### `[set](#set)(key:K, value:T):[Void](../../void "Void - The standard Void type.")`
*Available on hl*
See `[Map.set](../../map#set)`
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
See `[Map.toString](../../map#toString)`
<https://api.haxe.org/haxe/ds/ObjectMap.html>

StructExtensionCompletion
==========================
package [haxe.display](index)
import [haxe.display.Display](display)
*Available on all platforms*
Fields
------
### `[isIntersectionType](#isIntersectionType):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
<https://api.haxe.org/haxe/display/StructExtensionCompletion.html>

Option<T>
==========
package [haxe.ds](index)
*Available on all platforms*
An Option is a wrapper type which can either have a value (Some) or not a value (None).
See also:

* <https://haxe.org/manual/std-Option.html>

Values
------
### `Some(v:T)`
### `None`
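A typical use is pattern matching on the wrapper instead of checking for null; a sketch (the `find` helper is hypothetical):

```haxe
import haxe.ds.Option;

class Main {
    // hypothetical helper: wraps an index lookup in an Option
    static function find(a:Array<Int>, v:Int):Option<Int> {
        var i = a.indexOf(v);
        return i >= 0 ? Some(i) : None;
    }

    static function main() {
        switch (find([10, 20, 30], 20)) {
            case Some(i): trace('found at index $i'); // found at index 1
            case None: trace("not found");
        }
    }
}
```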
<https://api.haxe.org/haxe/ds/Option.html>

ReadOnlyArray<T>([Array](../../array "Array")<T>)
==================================================
package [haxe.ds](index)
from [Array](../../array "Array")<T> to [Iterable](../../iterable "Iterable - An Iterable is a data structure which has an iterator() method.")<T>
*Available on all platforms*
`[ReadOnlyArray](readonlyarray#ReadOnlyArray)` is an abstract over an ordinary `[Array](../../array)` which only exposes APIs that don't modify the instance, hence "read-only".
Note that this doesn't necessarily mean that the instance is *immutable*. Other code holding a reference to the underlying `[Array](../../array)` can still modify it, and the reference can be obtained with a `cast`.
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
The length of `this` Array.
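The read-only/immutable distinction above can be sketched as follows (traced values are illustrative):

```haxe
import haxe.ds.ReadOnlyArray;

class Main {
    static function main() {
        var a:ReadOnlyArray<Int> = [1, 2, 3]; // implicit cast from Array<Int>
        trace(a.length); // 3
        trace(a[0]);     // reading is exposed
        // a.push(4);    // would be a compile-time error: no mutating API
        var underlying:Array<Int> = cast a; // the reference can still be obtained
        underlying.push(4);                 // ...and the instance mutated through it
        trace(a.length); // 4, the abstract is read-only, not immutable
    }
}
```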
<https://api.haxe.org/haxe/ds/ReadOnlyArray.html>

StringMap<T>
=============
package [haxe.ds](index)
implements [IMap](../imap "haxe.IMap")<[String](../../string "String - The basic String class."), T>
*Available on all platforms*
StringMap allows mapping of String keys to arbitrary values.
See `[Map](../../map)` for documentation details.
See also:

* <https://haxe.org/manual/std-Map.html>

Constructor
-----------
### `[new](#new)()`
Creates a new StringMap.
Methods
-------
### `[clear](#clear)():[Void](../../void "Void - The standard Void type.")`
See `[Map.clear](../../map#clear)`
### `[copy](#copy)():[StringMap](stringmap "haxe.ds.StringMap - StringMap allows mapping of String keys to arbitrary values.")<T>`
See `[Map.copy](../../map#copy)`
### `[exists](#exists)(key:[String](../../string "String - The basic String class.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Map.exists](../../map#exists)`
### `[get](#get)(key:[String](../../string "String - The basic String class.")):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
See `[Map.get](../../map#get)`
### `[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<T>`
See `[Map.iterator](../../map#iterator)`
(cs, java) Implementation detail: Do not `set()` any new value while iterating, as it may cause a resize, which will break iteration.
### `inline[keyValueIterator](#keyValueIterator)():[KeyValueIterator](../../keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<[String](../../string "String - The basic String class."), T>`
See `[Map.keyValueIterator](../../map#keyValueIterator)`
### `[keys](#keys)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<[String](../../string "String - The basic String class.")>`
See `[Map.keys](../../map#keys)`
(cs, java) Implementation detail: Do not `set()` any new value while iterating, as it may cause a resize, which will break iteration.
### `[remove](#remove)(key:[String](../../string "String - The basic String class.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Map.remove](../../map#remove)`
### `[set](#set)(key:[String](../../string "String - The basic String class."), value:T):[Void](../../void "Void - The standard Void type.")`
See `[Map.set](../../map#set)`
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
See `[Map.toString](../../map#toString)`
<https://api.haxe.org/haxe/ds/StringMap.html>

TreeNode<K, V>
===============
package [haxe.ds](index)
import [haxe.ds.BalancedTree](balancedtree)
*Available on all platforms*
A tree node of `[haxe.ds.BalancedTree](balancedtree#BalancedTree)`.
Constructor
-----------
### `[new](#new)(l:[TreeNode](treenode "haxe.ds.TreeNode - A tree node of haxe.")<K, V>, k:K, v:V, r:[TreeNode](treenode "haxe.ds.TreeNode - A tree node of haxe.")<K, V>, h:[Int](../../int "Int - The standard Int type.") = -1)`
Variables
---------
### `[key](#key):K`
### `[left](#left):[TreeNode](treenode "haxe.ds.TreeNode - A tree node of haxe.")<K, V>`
### `[right](#right):[TreeNode](treenode "haxe.ds.TreeNode - A tree node of haxe.")<K, V>`
### `[value](#value):V`
Methods
-------
### `inline[get_height](#get_height)():[Int](../../int "Int - The standard Int type.")`
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
<https://api.haxe.org/haxe/ds/TreeNode.html>

UnsafeStringMap<T>
===================
package [haxe.ds](index)
implements [IMap](../imap "haxe.IMap")<[String](../../string "String - The basic String class."), T>
*Available on flash*
This is similar to `[StringMap](stringmap#StringMap)` except that it does not sanitize the keys. As a result, reading from the map will be faster, but it might fail with some reserved keys such as `constructor` or `prototype`.
Constructor
-----------
### `[new](#new)()`
Methods
-------
### `inline[clear](#clear)():[Void](../../void "Void - The standard Void type.")`
### `[copy](#copy)():[UnsafeStringMap](unsafestringmap "haxe.ds.UnsafeStringMap - This is similar to StringMap excepts that it does not sanitize the keys.")<T>`
### `inline[exists](#exists)(key:[String](../../string "String - The basic String class.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `inline[get](#get)(key:[String](../../string "String - The basic String class.")):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<T>`
### `inline[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<T>`
### `inline[keyValueIterator](#keyValueIterator)():[KeyValueIterator](../../keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<[String](../../string "String - The basic String class."), T>`
### `inline[keys](#keys)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<[String](../../string "String - The basic String class.")>`
### `[remove](#remove)(key:[String](../../string "String - The basic String class.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `inline[set](#set)(key:[String](../../string "String - The basic String class."), value:T):[Void](../../void "Void - The standard Void type.")`
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
<https://api.haxe.org/haxe/ds/UnsafeStringMap.html>

Vector<T>(VectorData<T>)
=========================
package [haxe.ds](index)
*Available on all platforms*
A Vector is a storage of fixed size. It can be faster than Array on some targets, and is never slower.
See also:

* <https://haxe.org/manual/std-vector.html>

Static methods
--------------
### `static[blit](#blit)<T>(src:[Vector](vector "haxe.ds.Vector - A Vector is a storage of fixed size.")<T>, srcPos:[Int](../../int "Int - The standard Int type."), dest:[Vector](vector "haxe.ds.Vector - A Vector is a storage of fixed size.")<T>, destPos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Copies `len` elements from `src` Vector, beginning at `srcPos`, to `dest` Vector, beginning at `destPos`.
The results are unspecified if `len` results in out-of-bounds access, or if `src` or `dest` are null.
### `staticinline[fromArrayCopy](#fromArrayCopy)<T>(array:[Array](../../array "Array")<T>):[Vector](vector "haxe.ds.Vector - A Vector is a storage of fixed size.")<T>`
Creates a new Vector by copying the elements of `array`.
This always creates a copy, even on platforms where the internal representation is Array.
The elements are not copied and retain their identity, so `a[i] == [Vector.fromArrayCopy](vector#fromArrayCopy)(a).get(i)` is true for any valid i.
If `array` is null, the result is unspecified.
### `staticinline[fromData](#fromData)<T>(data:VectorData<T>):[Vector](vector "haxe.ds.Vector - A Vector is a storage of fixed size.")<T>`
Initializes a new Vector from `data`.
Since `data` is the internal representation of Vector, this is a no-op.
If `data` is null, the corresponding Vector is also `null`.
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
Returns the length of `this` Vector.
Methods
-------
### `inline[copy](#copy)<T>():[Vector](vector "haxe.ds.Vector - A Vector is a storage of fixed size.")<T>`
Returns a shallow copy of `this` Vector.
The elements are not copied and retain their identity, so `a[i] == a.copy()[i]` is true for any valid `i`. However, `a == a.copy()` is always false.
### `inline[get](#get)(index:[Int](../../int "Int - The standard Int type.")):T`
Returns the value at index `index`.
If `index` is negative or exceeds `this.[length](#length)`, the result is unspecified.
### `[join](#join)<T>(sep:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Returns a string representation of `this` Vector, with `sep` separating each element.
The result of this operation is equal to `[Std.string](../../std#string)(this[0]) + sep + [Std.string](../../std#string)(this[1]) + sep + ... + sep + [Std.string](../../std#string)(this[this.[length](#length)-1])`.
If `this` Vector has length 0, the result is the empty String `""`. If `this` has exactly one element, the result is equal to a call to `[Std.string](../../std#string)(this[0])`.
If `sep` is null, the result is unspecified.
### `inline[map](#map)<S>(f:T ‑> S):[Vector](vector "haxe.ds.Vector - A Vector is a storage of fixed size.")<S>`
Creates a new Vector by applying function `f` to all elements of `this`.
The order of elements is preserved.
If `f` is null, the result is unspecified.
### `inline[set](#set)(index:[Int](../../int "Int - The standard Int type."), val:T):T`
Sets the value at index `index` to `val`.
If `index` is negative or exceeds `this.[length](#length)`, the result is unspecified.
### `inline[sort](#sort)<T>(f:(T, T) ‑> [Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Sorts `this` Vector according to the comparison function `f`, where `f(x,y)` returns 0 if x == y, a positive Int if x > y and a negative Int if x < y.
This operation modifies `this` Vector in place.
The sort operation is not guaranteed to be stable, which means that the order of equal elements may not be retained.
If `f` is null, the result is unspecified.
### `[toArray](#toArray)():[Array](../../array "Array")<T>`
Creates a new Array, copies the content of `this` Vector into it, and returns it.
### `inline[toData](#toData)():VectorData<T>`
Extracts the data of `this` Vector.
This returns the internal representation type.
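A sketch pulling the pieces above together (the Vector constructor takes a fixed length; traced values are illustrative):

```haxe
import haxe.ds.Vector;

class Main {
    static function main() {
        var v = new Vector<Int>(3);      // fixed size of 3
        for (i in 0...v.length) v.set(i, i * 10);
        trace(v.get(1));                 // 10
        var doubled = v.map(x -> x * 2);
        trace(doubled.join(","));        // 0,20,40
        var w = new Vector<Int>(2);
        Vector.blit(v, 1, w, 0, 2);      // copy v[1..2] into w[0..1]
        trace(w.toArray());              // [10,20]
    }
}
```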
<https://api.haxe.org/haxe/ds/Vector.html>

WeakMap<K, V>
==============
package [haxe.ds](index)
implements [IMap](../imap "haxe.IMap")<K, V>
*Available on all platforms*
WeakMap allows mapping of object keys to arbitrary values.
The keys are considered to be weak references on static targets.
See `[Map](../../map)` for documentation details.
See also:

* <https://haxe.org/manual/std-Map.html>

Constructor
-----------
### `[new](#new)()`
Creates a new WeakMap.
Methods
-------
### `[clear](#clear)():[Void](../../void "Void - The standard Void type.")`
See `[Map.clear](../../map#clear)`
### `[copy](#copy)():[WeakMap](weakmap "haxe.ds.WeakMap - WeakMap allows mapping of object keys to arbitrary values.")<K, V>`
See `[Map.copy](../../map#copy)`
### `[exists](#exists)(key:K):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Map.exists](../../map#exists)`
### `[get](#get)(key:K):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<V>`
See `[Map.get](../../map#get)`
### `[iterator](#iterator)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<V>`
See `[Map.iterator](../../map#iterator)`
### `inline[keyValueIterator](#keyValueIterator)():[KeyValueIterator](../../keyvalueiterator "KeyValueIterator - A KeyValueIterator is an Iterator that has a key and a value.")<K, V>`
See `[Map.keyValueIterator](../../map#keyValueIterator)`
### `[keys](#keys)():[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<K>`
See `[Map.keys](../../map#keys)`
### `[remove](#remove)(key:K):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Map.remove](../../map#remove)`
### `[set](#set)(key:K, value:V):[Void](../../void "Void - The standard Void type.")`
See `[Map.set](../../map#set)`
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
See `[Map.toString](../../map#toString)`
<https://api.haxe.org/haxe/ds/WeakMap.html>

AsVar<T>(T)
============
package [haxe.extern](index)
from T to T
*Available on all platforms*
If this type is used as an argument type, the compiler ensures that argument expressions are bound to a local variable.
<https://api.haxe.org/haxe/extern/AsVar.html>

EitherType<T1, T2>([Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."))
=============================================================================================================================
package [haxe.extern](index)
from T2, T1 to T2, T1
*Available on all platforms*
An abstract type allowing values to be either of `T1` or `T2` type. Supports implicit casts from/to either types.
It is useful for interfacing with external code on dynamic platforms such as JavaScript or Python.
Otherwise, use of this type is discouraged.
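A sketch of the implicit casts (assumes Haxe 4.1+ for `Std.isOfType`; `describe` is a made-up function):

```haxe
import haxe.extern.EitherType;

class Main {
    // made-up function accepting either a String or an Int,
    // as an extern-facing API might
    static function describe(v:EitherType<String, Int>):String {
        return Std.isOfType(v, String) ? "a string" : "an int";
    }

    static function main() {
        trace(describe("hi")); // implicit cast from String
        trace(describe(42));   // implicit cast from Int
    }
}
```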
See also:

* <https://haxe.org/manual/lf-externs.html>
<https://api.haxe.org/haxe/extern/EitherType.html>

Rest<T>([Array](../../array "Array")<T>)
=========================================
package [haxe.extern](index)
*Available on all platforms*
A special abstract type that represents a "rest" function argument.
Should be used as the type of the last argument of an extern method, indicating that an arbitrary number of arguments of the given type can be passed to that method.
See also:

* <https://haxe.org/manual/lf-externs.html>
<https://api.haxe.org/haxe/extern/Rest.html>

JsonParser
===========
package [haxe.format](index)
*Available on all platforms*
An implementation of a JSON parser in Haxe.
This class is used by `[haxe.Json](../json#Json)` when a native JSON implementation is not available.
See also:

* <https://haxe.org/manual/std-Json-parsing.html>

Static methods
--------------
### `staticinline[parse](#parse)(str:[String](../../string "String - The basic String class.")):[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
Parses given JSON-encoded `str` and returns the resulting object.
JSON objects are parsed into anonymous structures and JSON arrays are parsed into `[Array](../../array)<[Dynamic](../../dynamic)>`.
If given `str` is not valid JSON, an exception will be thrown.
If `str` is null, the result is unspecified.
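A minimal parsing sketch; normally one calls `haxe.Json.parse`, which delegates here when no native implementation exists:

```haxe
class Main {
    static function main() {
        var o = haxe.format.JsonParser.parse('{"name": "haxe", "tags": ["lang", "cross"]}');
        trace(o.name);    // "haxe", objects become anonymous structures
        trace(o.tags[1]); // "cross", arrays become Array<Dynamic>
    }
}
```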
<https://api.haxe.org/haxe/format/JsonParser.html>

ArrayBufferViewImpl
====================
package [haxe.io](index)
import [haxe.io.ArrayBufferView](arraybufferview)
*Available on cs, php, neko, cpp, macro, java, lua, python, hl, flash*
Constructor
-----------
### `[new](#new)(bytes:[Bytes](bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type."), length:[Int](../../int "Int - The standard Int type."))`
Variables
---------
### `[byteLength](#byteLength):[Int](../../int "Int - The standard Int type.")`
### `[byteOffset](#byteOffset):[Int](../../int "Int - The standard Int type.")`
### `[bytes](#bytes):[Bytes](bytes "haxe.io.Bytes")`
Methods
-------
### `[sub](#sub)(begin:[Int](../../int "Int - The standard Int type."), ?length:[Int](../../int "Int - The standard Int type.")):[ArrayBufferViewImpl](arraybufferviewimpl "haxe.io.ArrayBufferViewImpl")`
### `[subarray](#subarray)(?begin:[Int](../../int "Int - The standard Int type."), ?end:[Int](../../int "Int - The standard Int type.")):[ArrayBufferViewImpl](arraybufferviewimpl "haxe.io.ArrayBufferViewImpl")`
<https://api.haxe.org/haxe/io/ArrayBufferViewImpl.html>

JsonPrinter
============
package [haxe.format](index)
*Available on all platforms*
An implementation of a JSON printer in Haxe.
This class is used by `[haxe.Json](../json#Json)` when a native JSON implementation is not available.
See also:

* <https://haxe.org/manual/std-Json-encoding.html>

Static methods
--------------
### `static[print](#print)(o:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), ?replacer:(key:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), value:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")) ‑> [Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), ?space:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Encodes `o`'s value and returns the resulting JSON string.
If `replacer` is given and is not null, it is used to retrieve the actual object to be encoded. The `replacer` function takes two parameters, the key and the value being encoded. The initial key value is an empty string.
If `space` is given and is not null, the result will be pretty-printed. Successive levels will be indented by this string.
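A short printing sketch; as with parsing, `haxe.Json.stringify` is the usual entry point:

```haxe
class Main {
    static function main() {
        var data = {name: "haxe", released: 2005};
        trace(haxe.format.JsonPrinter.print(data));
        // third argument: indentation string for pretty-printing
        trace(haxe.format.JsonPrinter.print(data, null, "  "));
    }
}
```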
<https://api.haxe.org/haxe/format/JsonPrinter.html>

HttpBase
=========
package [haxe.http](index)
extended by [HttpJs](httpjs "haxe.http.HttpJs"), [HttpFlash](../httpflash "haxe.HttpFlash"), [Http](https://api.haxe.org/sys/Http.html "sys.Http")
*Available on all platforms*
This class can be used to handle Http requests consistently across platforms. There are two intended usages:
* call `[haxe.Http.requestUrl](../http#requestUrl)(url)` and receive the result as a `[String](../../string)` (not available on flash)
* create a `new [haxe.Http](../http#Http)(url)`, register your callbacks for `onData`, `onError` and `onStatus`, then call `request()`.
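The second usage can be sketched as follows (the URL is a placeholder; callbacks fire when the request completes):

```haxe
class Main {
    static function main() {
        var http = new haxe.Http("https://example.com/data.json"); // placeholder URL
        http.onData = function(data) trace("received " + data.length + " chars");
        http.onError = function(msg) trace("request failed: " + msg);
        http.onStatus = function(status) trace("HTTP status: " + status);
        http.request(); // GET by default; request(true) sends a POST
    }
}
```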
Constructor
-----------
### `[new](#new)(url:[String](../../string "String - The basic String class."))`
Creates a new Http instance with `url` as parameter.
This does not do a request until `request()` is called.
If `url` is null, the field url must be set to a value before making the call to `request()`, or the result is unspecified.
(Php) Https (SSL) connections are allowed only if the OpenSSL extension is enabled.
Variables
---------
### `read only[responseBytes](#responseBytes):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bytes](../io/bytes "haxe.io.Bytes")>`
### `read only[responseData](#responseData):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
### `[url](#url):[String](../../string "String - The basic String class.")`
The url of `this` request. It is used only by the `request()` method and can be changed in order to send the same request to different target Urls.
Methods
-------
### `[addHeader](#addHeader)(header:[String](../../string "String - The basic String class."), value:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
### `[addParameter](#addParameter)(name:[String](../../string "String - The basic String class."), value:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
### `dynamic[onBytes](#onBytes)(data:[Bytes](../io/bytes "haxe.io.Bytes")):[Void](../../void "Void - The standard Void type.")`
This method is called upon a successful request, with `data` containing the result Bytes.
The intended usage is to bind it to a custom function: `httpInstance.onBytes = function(data) { /* handle result */ }`
### `dynamic[onData](#onData)(data:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
This method is called upon a successful request, with `data` containing the result String.
The intended usage is to bind it to a custom function: `httpInstance.onData = function(data) { /* handle result */ }`
### `dynamic[onError](#onError)(msg:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
This method is called upon a request error, with `msg` containing the error description.
The intended usage is to bind it to a custom function: `httpInstance.onError = function(msg) { /* handle error */ }`
### `dynamic[onStatus](#onStatus)(status:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
This method is called upon a Http status change, with `status` being the new status.
The intended usage is to bind it to a custom function: `httpInstance.onStatus = function(status) { /* handle status */ }`
### `[request](#request)(?post:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")):[Void](../../void "Void - The standard Void type.")`
Sends `this` Http request to the Url specified by `this.[url](#url)`.
If `post` is true, the request is sent as POST request, otherwise it is sent as GET request.
Depending on the outcome of the request, this method calls the `onStatus()`, `onError()`, `onData()` or `onBytes()` callback functions.
If `this.[url](#url)` is null, the result is unspecified.
If `this.[url](#url)` is an invalid or inaccessible Url, the `onError()` callback function is called.
[js] If `this.[async](#async)` is false, the callback functions are called before this method returns.
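A sketch of a POST request combining parameters and headers (endpoint and names are made up):

```haxe
var http = new haxe.Http("https://example.com/login");
http.addParameter("user", "alice");
http.addHeader("X-Client", "haxe-cli");
http.onData = function(data) { trace(data); };
http.onError = function(msg) { trace("failed: " + msg); };
http.request(true); // true sends a POST; false or omitted sends a GET
```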
### `[setHeader](#setHeader)(name:[String](../../string "String - The basic String class."), value:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
Sets the header identified as `name` to value `value`.
If `name` or `value` are null, the result is unspecified.
This method provides a fluent interface.
### `[setParameter](#setParameter)(name:[String](../../string "String - The basic String class."), value:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
Sets the parameter identified as `name` to value `value`.
If `name` or `value` are null, the result is unspecified.
This method provides a fluent interface.
### `[setPostBytes](#setPostBytes)(data:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bytes](../io/bytes "haxe.io.Bytes")>):[Void](../../void "Void - The standard Void type.")`
Sets the post data of `this` Http request to `data` bytes.
There can only be one post data per request. Subsequent calls to this method or to `setPostData()` overwrite the previously set value.
If `data` is null, the post data is considered to be absent.
This method provides a fluent interface.
### `[setPostData](#setPostData)(data:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>):[Void](../../void "Void - The standard Void type.")`
Sets the post data of `this` Http request to `data` string.
There can only be one post data per request. Subsequent calls to this method or to `setPostBytes()` overwrite the previously set value.
If `data` is null, the post data is considered to be absent.
This method provides a fluent interface.
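For example, sending a JSON body (the endpoint is hypothetical):

```haxe
var http = new haxe.Http("https://example.com/api/items");
http.setHeader("Content-Type", "application/json");
http.setPostData(haxe.Json.stringify({name: "widget", qty: 2}));
http.onData = function(data) { trace("created: " + data); };
http.request(true); // POST; the body set above is sent as-is
```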
<https://api.haxe.org/haxe/http/HttpBase.html>

HttpJs
=======
package [haxe.http](index)
extends [HttpBase](httpbase "haxe.http.HttpBase - This class can be used to handle Http requests consistently across platforms.")
*Available on js*
Static methods
--------------
### `static[requestUrl](#requestUrl)(url:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Makes a synchronous request to `url`.
This creates a new Http instance and makes a GET request by calling its `request([false](../../bool))` method.
If `url` is null, the result is unspecified.
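A minimal sketch (the call is synchronous and blocks until the response arrives, so avoid it on a browser main thread; the URL is a placeholder):

```haxe
var body = haxe.Http.requestUrl("https://example.com/version.txt");
trace(body);
```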
Constructor
-----------
### `[new](#new)(url:[String](../../string "String - The basic String class."))`
Variables
---------
### `[async](#async):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[withCredentials](#withCredentials):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Methods
-------
### `[cancel](#cancel)():[Void](../../void "Void - The standard Void type.")`
Cancels `this` Http request if `request` has been called and a response has not yet been received.
<https://api.haxe.org/haxe/http/HttpJs.html>

HttpMethod([String](../../string "String - The basic String class."))
======================================================================
package [haxe.http](index)
from [String](../../string "String - The basic String class.") to [String](../../string "String - The basic String class.")
*Available on all platforms*
HTTP defines methods (sometimes referred to as *verbs*) to indicate the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server.
Often, the resource corresponds to a file or the output of an executable residing on the server. The HTTP/1.0 specification defined the `GET`, `POST` and `HEAD` methods and the HTTP/1.1 specification added 5 new methods: `OPTIONS`, `PUT`, `DELETE`, `TRACE` and `CONNECT`.
By being specified in these documents their semantics are well known and can be depended upon. Any client can use any method and the server can be configured to support any combination of methods. If a method is unknown to an intermediate it will be treated as an unsafe and non-idempotent method. There is no limit to the number of methods that can be defined and this allows for future methods to be specified without breaking existing infrastructure.
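Because `HttpMethod` casts implicitly from and to `String`, the named constants and plain strings are interchangeable (sketch):

```haxe
import haxe.http.HttpMethod;

class Main {
    static function main() {
        var m:HttpMethod = HttpMethod.Get; // inline constant with value "GET"
        var s:String = m;                  // implicit cast to String
        var p:HttpMethod = "POST";         // implicit cast from String
        trace(s + " / " + p);
    }
}
```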
Variables
---------
### `inline read only[Connect](#Connect):[HttpMethod](httpmethod "haxe.http.HttpMethod - HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource.") = "CONNECT"`
The `CONNECT` method converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.
### `inline read only[Delete](#Delete):[HttpMethod](httpmethod "haxe.http.HttpMethod - HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource.") = "DELETE"`
The `DELETE` method deletes the specified resource.
### `inline read only[Get](#Get):[HttpMethod](httpmethod "haxe.http.HttpMethod - HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource.") = "GET"`
The `GET` method requests a representation of the specified resource.
Requests using `GET` should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.) The W3C has published guidance principles on this distinction, saying, *"Web application design should be informed by the above principles, but also by the relevant limitations."*
See safe methods below.
### `inline read only[Head](#Head):[HttpMethod](httpmethod "haxe.http.HttpMethod - HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource.") = "HEAD"`
The `HEAD` method asks for a response identical to that of a `GET` request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.
### `inline read only[Options](#Options):[HttpMethod](httpmethod "haxe.http.HttpMethod - HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource.") = "OPTIONS"`
The `OPTIONS` method returns the HTTP methods that the server supports for the specified URL. This can be used to check the functionality of a web server by requesting `*` instead of a specific resource.
### `inline read only[Patch](#Patch):[HttpMethod](httpmethod "haxe.http.HttpMethod - HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource.") = "PATCH"`
The `PATCH` method applies partial modifications to a resource.
### `inline read only[Post](#Post):[HttpMethod](httpmethod "haxe.http.HttpMethod - HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource.") = "POST"`
The `POST` method requests that the server accept the entity enclosed in the request as a new subordinate of the web resource identified by the URI.
The data `POST`ed might be, for example, an annotation for existing resources; a message for a bulletin board, newsgroup, mailing list, or comment thread; a block of data that is the result of submitting a web form to a data-handling process; or an item to add to a database.
### `inline read only[Put](#Put):[HttpMethod](httpmethod "haxe.http.HttpMethod - HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource.") = "PUT"`
The `PUT` method requests that the enclosed entity be stored under the supplied URI. If the URI refers to an already existing resource, it is modified; if the URI does not point to an existing resource, then the server can create the resource with that URI.
### `inline read only[Trace](#Trace):[HttpMethod](httpmethod "haxe.http.HttpMethod - HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource.") = "TRACE"`
The `TRACE` method echoes the received request so that a client can see what (if any) changes or additions have been made by intermediate servers.
<https://api.haxe.org/haxe/http/HttpMethod.html>

HttpStatus([Int](../../int "Int - The standard Int type."))
============================================================
package [haxe.http](index)
from [Int](../../int "Int - The standard Int type.") to [Int](../../int "Int - The standard Int type.")
*Available on all platforms*
HTTP Request Status
Variables
---------
### `inline read only[Accepted](#Accepted):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 202`
### `inline read only[AlreadyReported](#AlreadyReported):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 208`
### `inline read only[BadGateway](#BadGateway):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 502`
### `inline read only[BadRequest](#BadRequest):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 400`
### `inline read only[Conflict](#Conflict):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 409`
### `inline read only[Continue](#Continue):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 100`
### `inline read only[Created](#Created):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 201`
### `inline read only[ExpectationFailed](#ExpectationFailed):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 417`
### `inline read only[FailedDependency](#FailedDependency):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 424`
### `inline read only[Forbidden](#Forbidden):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 403`
### `inline read only[Found](#Found):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 302`
### `inline read only[GatewayTimeout](#GatewayTimeout):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 504`
### `inline read only[Gone](#Gone):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 410`
### `inline read only[HTTPVersionNotSupported](#HTTPVersionNotSupported):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 505`
### `inline read only[IMUsed](#IMUsed):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 226`
### `inline read only[ImATeapot](#ImATeapot):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 418`
### `inline read only[InsufficientStorage](#InsufficientStorage):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 507`
### `inline read only[InternalServerError](#InternalServerError):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 500`
### `inline read only[LengthRequired](#LengthRequired):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 411`
### `inline read only[Locked](#Locked):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 423`
### `inline read only[LoopDetected](#LoopDetected):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 508`
### `inline read only[MethodNotAllowed](#MethodNotAllowed):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 405`
### `inline read only[MisdirectedRequest](#MisdirectedRequest):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 421`
### `inline read only[MovedPermanently](#MovedPermanently):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 301`
### `inline read only[MultiStatus](#MultiStatus):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 207`
### `inline read only[MultipleChoices](#MultipleChoices):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 300`
### `inline read only[NetworkAuthenticationRequired](#NetworkAuthenticationRequired):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 511`
### `inline read only[NoContent](#NoContent):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 204`
### `inline read only[NonAuthoritativeInformation](#NonAuthoritativeInformation):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 203`
### `inline read only[NotAcceptable](#NotAcceptable):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 406`
### `inline read only[NotExtended](#NotExtended):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 510`
### `inline read only[NotFound](#NotFound):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 404`
### `inline read only[NotImplemented](#NotImplemented):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 501`
### `inline read only[NotModified](#NotModified):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 304`
### `inline read only[OK](#OK):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 200`
### `inline read only[PartialContent](#PartialContent):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 206`
### `inline read only[PayloadTooLarge](#PayloadTooLarge):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 413`
### `inline read only[PaymentRequired](#PaymentRequired):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 402`
### `inline read only[PermanentRedirect](#PermanentRedirect):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 308`
### `inline read only[PreconditionFailed](#PreconditionFailed):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 412`
### `inline read only[PreconditionRequired](#PreconditionRequired):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 428`
### `inline read only[Processing](#Processing):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 102`
### `inline read only[ProxyAuthenticationRequired](#ProxyAuthenticationRequired):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 407`
### `inline read only[RangeNotSatisfiable](#RangeNotSatisfiable):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 416`
### `inline read only[RequestHeaderFieldsTooLarge](#RequestHeaderFieldsTooLarge):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 431`
### `inline read only[RequestTimeout](#RequestTimeout):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 408`
### `inline read only[ResetContent](#ResetContent):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 205`
### `inline read only[SeeOther](#SeeOther):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 303`
### `inline read only[ServiceUnavailable](#ServiceUnavailable):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 503`
### `inline read only[SwitchProxy](#SwitchProxy):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 306`
### `inline read only[SwitchingProtocols](#SwitchingProtocols):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 101`
### `inline read only[TemporaryRedirect](#TemporaryRedirect):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 307`
### `inline read only[TooManyRequests](#TooManyRequests):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 429`
### `inline read only[URITooLong](#URITooLong):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 414`
### `inline read only[Unauthorized](#Unauthorized):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 401`
### `inline read only[UnavailableForLegalReasons](#UnavailableForLegalReasons):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 451`
### `inline read only[UnprocessableEntity](#UnprocessableEntity):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 422`
### `inline read only[UnsupportedMediaType](#UnsupportedMediaType):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 415`
### `inline read only[UpgradeRequired](#UpgradeRequired):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 426`
### `inline read only[UseProxy](#UseProxy):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 305`
### `inline read only[VariantAlsoNegotiates](#VariantAlsoNegotiates):[HttpStatus](httpstatus "haxe.http.HttpStatus") = 506`
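Since `HttpStatus` casts implicitly from `Int`, an `onStatus` handler can compare a raw status code against the named constants (sketch):

```haxe
import haxe.http.HttpStatus;

function describe(code:Int):String {
    var status:HttpStatus = code; // implicit cast from Int
    if (status == HttpStatus.OK) return "ok";
    if (status == HttpStatus.NotFound) return "not found";
    if (code >= 500) return "server error";
    return "other (" + code + ")";
}
```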
<https://api.haxe.org/haxe/http/HttpStatus.html>

ArrayBufferView([ArrayBufferViewData](arraybufferviewdata "haxe.io.ArrayBufferViewData"))
==========================================================================================
package [haxe.io](index)
*Available on all platforms*
Static methods
--------------
### `static[fromBytes](#fromBytes)(bytes:[Bytes](bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
*Available on cs, php, neko, cpp, macro, java, lua, python, hl, flash*
### `static inline[fromData](#fromData)(a:[ArrayBufferViewData](arraybufferviewdata "haxe.io.ArrayBufferViewData")):[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
Variables
---------
### `read only[buffer](#buffer):[Bytes](bytes "haxe.io.Bytes")`
### `read only[byteLength](#byteLength):[Int](../../int "Int - The standard Int type.")`
### `read only[byteOffset](#byteOffset):[Int](../../int "Int - The standard Int type.")`
Methods
-------
### `inline[getData](#getData)():[ArrayBufferViewData](arraybufferviewdata "haxe.io.ArrayBufferViewData")`
### `inline[sub](#sub)(begin:[Int](../../int "Int - The standard Int type."), ?length:[Int](../../int "Int - The standard Int type.")):[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
### `inline[subarray](#subarray)(?begin:[Int](../../int "Int - The standard Int type."), ?end:[Int](../../int "Int - The standard Int type.")):[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
*Available on cs, php, neko, cpp, macro, java, lua, python, hl, flash*
<https://api.haxe.org/haxe/io/ArrayBufferView.html>

ArrayBufferViewData
====================
package [haxe.io](index)
import [haxe.io.ArrayBufferView](arraybufferview)
*Available on all platforms*
#### flash, lua, php, cs, java, cpp, neko, hl, python, macro
*alias for* `[haxe.io.ArrayBufferViewImpl](arraybufferviewimpl "haxe.io.ArrayBufferViewImpl")`
#### js
*alias for* `[js.lib.ArrayBufferView](https://api.haxe.org/js/lib/ArrayBufferView.html "js.lib.ArrayBufferView - ArrayBufferView is a helper type representing any of the following JavaScript TypedArray types: Documentation ArrayBufferView by Mozilla Contributors, licensed under CC-BY-SA 2.")`
<https://api.haxe.org/haxe/io/ArrayBufferViewData.html>

BufferInput
============
package [haxe.io](index)
extends [Input](input "haxe.io.Input - An Input is an abstract reader.")
*Available on all platforms*
Constructor
-----------
### `[new](#new)(i:[Input](input "haxe.io.Input - An Input is an abstract reader."), buf:[Bytes](bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type.") = 0, available:[Int](../../int "Int - The standard Int type.") = 0)`
Variables
---------
### `[available](#available):[Int](../../int "Int - The standard Int type.")`
### `[buf](#buf):[Bytes](bytes "haxe.io.Bytes")`
### `[i](#i):[Input](input "haxe.io.Input - An Input is an abstract reader.")`
### `[pos](#pos):[Int](../../int "Int - The standard Int type.")`
Methods
-------
### `[refill](#refill)():[Void](../../void "Void - The standard Void type.")`
<https://api.haxe.org/haxe/io/BufferInput.html>

BytesBuffer
============
package [haxe.io](index)
*Available on all platforms*
Constructor
-----------
### `[new](#new)()`
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
The length of the buffer in bytes.
Methods
-------
### `[add](#add)(src:[Bytes](bytes "haxe.io.Bytes")):[Void](../../void "Void - The standard Void type.")`
### `[addByte](#addByte)(byte:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
### `[addBytes](#addBytes)(src:[Bytes](bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
### `[addDouble](#addDouble)(v:[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Void](../../void "Void - The standard Void type.")`
### `[addFloat](#addFloat)(v:[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Void](../../void "Void - The standard Void type.")`
### `[addInt32](#addInt32)(v:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
### `[addInt64](#addInt64)(v:[Int64](../int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")):[Void](../../void "Void - The standard Void type.")`
### `[addString](#addString)(v:[String](../../string "String - The basic String class."), ?encoding:[Encoding](encoding "haxe.io.Encoding")):[Void](../../void "Void - The standard Void type.")`
### `[getBytes](#getBytes)():[Bytes](bytes "haxe.io.Bytes")`
Returns either a copy or a reference of the current bytes. Once called, the buffer should no longer be used.
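A short sketch of the append-then-finalize pattern:

```haxe
var buf = new haxe.io.BytesBuffer();
buf.addByte(0x48);          // 'H'
buf.addString("i!");
var bytes = buf.getBytes(); // finalizes the buffer: do not reuse buf afterwards
trace(bytes.toString());    // "Hi!"
```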
<https://api.haxe.org/haxe/io/BytesBuffer.html>

BytesData
==========
package [haxe.io](index)
*Available on all platforms*
#### cs
*alias for* `[cs.NativeArray](https://api.haxe.org/cs/NativeArray.html "cs.NativeArray")<[cs.UInt8](https://api.haxe.org/cs/UInt8.html "cs.UInt8")>`
#### php
*alias for* `BytesDataAbstract`
#### lua
*alias for* `[Array](../../array "Array")<[Int](../../int "Int - The standard Int type.")>`
#### flash
*alias for* `[flash.utils.ByteArray](https://api.haxe.org/flash/utils/ByteArray.html "flash.utils.ByteArray")`
#### js
*alias for* `[js.lib.ArrayBuffer](https://api.haxe.org/js/lib/ArrayBuffer.html "js.lib.ArrayBuffer")`
#### java
*alias for* `[java.NativeArray](https://api.haxe.org/java/NativeArray.html "java.NativeArray")<[java.Int8](https://api.haxe.org/java/Int8.html "java.Int8")>`
#### neko
*alias for* `[neko.NativeString](https://api.haxe.org/neko/NativeString.html "neko.NativeString")`
#### cpp
*alias for* `[Array](../../array "Array")<[cpp.UInt8](https://api.haxe.org/cpp/UInt8.html "cpp.UInt8")>`
#### macro
*alias for* `NativeBytesDataAbstract`
#### python
*alias for* `[python.Bytearray](https://api.haxe.org/python/Bytearray.html "python.Bytearray")`
#### hl
*alias for* `[haxe.io.BytesDataAbstract](bytesdataabstract "haxe.io.BytesDataAbstract")`
<https://api.haxe.org/haxe/io/BytesData.html>

BytesDataAbstract([BytesDataImpl](bytesdataimpl "haxe.io.BytesDataImpl"))
==========================================================================
package [haxe.io](index)
import [haxe.io.BytesData](bytesdata)
*Available on hl*
<https://api.haxe.org/haxe/io/BytesDataAbstract.html>

BytesDataImpl
==============
package [haxe.io](index)
import [haxe.io.BytesData](bytesdata)
*Available on hl*
Constructor
-----------
### `[new](#new)(b:[Bytes](https://api.haxe.org/hl/Bytes.html "hl.Bytes"), length:[Int](../../int "Int - The standard Int type."))`
Variables
---------
### `[bytes](#bytes):[Bytes](https://api.haxe.org/hl/Bytes.html "hl.Bytes")`
### `[length](#length):[Int](../../int "Int - The standard Int type.")`
<https://api.haxe.org/haxe/io/BytesDataImpl.html>

BytesInput
===========
package [haxe.io](index)
extends [Input](input "haxe.io.Input - An Input is an abstract reader.")
extended by [StringInput](stringinput "haxe.io.StringInput")
*Available on all platforms*
Constructor
-----------
### `[new](#new)(b:[Bytes](bytes "haxe.io.Bytes"), ?pos:[Int](../../int "Int - The standard Int type."), ?len:[Int](../../int "Int - The standard Int type."))`
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
The length of the stream in bytes.
### `[position](#position):[Int](../../int "Int - The standard Int type.")`
The current position in the stream in bytes.
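A sketch showing how reads advance `position`:

```haxe
var data = haxe.io.Bytes.ofString("hello");
var input = new haxe.io.BytesInput(data);
trace(input.readByte());    // 104, the character code of 'h'
trace(input.readString(4)); // "ello"
trace(input.position);      // 5: the whole stream was consumed
```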
Methods
-------
<https://api.haxe.org/haxe/io/BytesInput.html>

BytesOutput
============
package [haxe.io](index)
extends [Output](output "haxe.io.Output - An Output is an abstract write.")
*Available on all platforms*
Constructor
-----------
### `[new](#new)()`
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
The length of the stream in bytes.
Methods
-------
### `[getBytes](#getBytes)():[Bytes](bytes "haxe.io.Bytes")`
Returns the `[Bytes](bytes#Bytes)` of this output.
This function should not be called more than once on a given `[BytesOutput](bytesoutput#BytesOutput)` instance.
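For example, building a byte sequence with the write methods inherited from `haxe.io.Output`:

```haxe
var out = new haxe.io.BytesOutput();
out.writeString("abc");     // inherited from haxe.io.Output
out.writeInt32(42);
var bytes = out.getBytes(); // call at most once per instance
trace(bytes.length);        // 7 (3 string bytes + 4 bytes for the Int32)
```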
<https://api.haxe.org/haxe/io/BytesOutput.html>

Eof
====
package [haxe.io](index)
*Available on all platforms*
This exception is raised when reading from a `[haxe.io.Input](input#Input)` and no more data is available.
Constructor
-----------
### `[new](#new)()`
<https://api.haxe.org/haxe/io/Eof.html>

Input
======
package [haxe.io](index)
extended by [NativeInput](https://api.haxe.org/cs/io/NativeInput.html "cs.io.NativeInput"), [BufferInput](bufferinput "haxe.io.BufferInput"), [BytesInput](bytesinput "haxe.io.BytesInput"), [NativeInput](https://api.haxe.org/java/io/NativeInput.html "java.io.NativeInput"), [NativeInput](https://api.haxe.org/python/io/NativeInput.html "python.io.NativeInput"), [FileInput](https://api.haxe.org/sys/io/FileInput.html "sys.io.FileInput - Use sys.")
*Available on all platforms*
An Input is an abstract reader. See other classes in the `haxe.io` package for several possible implementations.
All functions which read data throw `[Eof](eof#Eof)` when the end of the stream is reached.
Variables
---------
### `[bigEndian](#bigEndian):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Endianness (word byte order) used when reading numbers.
If `[true](../../bool)`, big-endian is used, otherwise little-endian is used.
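For example, the same four bytes decode to different values depending on the flag (sketch; `Bytes.ofHex` requires Haxe 4):

```haxe
function int32Of(hex:String, big:Bool):Int {
    var i = new haxe.io.BytesInput(haxe.io.Bytes.ofHex(hex));
    i.bigEndian = big; // word byte order for the numeric read methods
    return i.readInt32();
}
trace(int32Of("00000001", true));  // 1
trace(int32Of("00000001", false)); // 16777216 (0x01000000)
```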
Methods
-------
### `[close](#close)():[Void](../../void "Void - The standard Void type.")`
Close the input source.
Behaviour while reading after calling this method is unspecified.
### `[read](#read)(nbytes:[Int](../../int "Int - The standard Int type.")):[Bytes](bytes "haxe.io.Bytes")`
Read and return `nbytes` bytes.
### `[readAll](#readAll)(?bufsize:[Int](../../int "Int - The standard Int type.")):[Bytes](bytes "haxe.io.Bytes")`
Read and return all available data.
The `bufsize` optional argument specifies the size of chunks by which data is read. Its default value is target-specific.
### `[readByte](#readByte)():[Int](../../int "Int - The standard Int type.")`
Read and return one byte.
### `[readBytes](#readBytes)(s:[Bytes](bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
Read `len` bytes and write them into `s` to the position specified by `pos`.
Returns the actual length of read data that can be smaller than `len`.
See `readFullBytes` that tries to read the exact amount of specified bytes.
### `[readDouble](#readDouble)():[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Read a 64-bit double-precision floating point number.
Endianness is specified by the `bigEndian` property.
### `[readFloat](#readFloat)():[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
Read a 32-bit floating point number.
Endianness is specified by the `bigEndian` property.
### `[readFullBytes](#readFullBytes)(s:[Bytes](bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Read `len` bytes and write them into `s` to the position specified by `pos`.
Unlike `readBytes`, this method tries to read the exact `len` amount of bytes.
### `[readInt16](#readInt16)():[Int](../../int "Int - The standard Int type.")`
Read a 16-bit signed integer.
Endianness is specified by the `bigEndian` property.
### `[readInt24](#readInt24)():[Int](../../int "Int - The standard Int type.")`
Read a 24-bit signed integer.
Endianness is specified by the `bigEndian` property.
### `[readInt32](#readInt32)():[Int](../../int "Int - The standard Int type.")`
Read a 32-bit signed integer.
Endianness is specified by the `bigEndian` property.
### `[readInt8](#readInt8)():[Int](../../int "Int - The standard Int type.")`
Read an 8-bit signed integer.
### `[readLine](#readLine)():[String](../../string "String - The basic String class.")`
Read a line of text separated by CR and/or LF bytes.
The CR/LF characters are not included in the resulting string.
### `[readString](#readString)(len:[Int](../../int "Int - The standard Int type."), ?encoding:[Encoding](encoding "haxe.io.Encoding")):[String](../../string "String - The basic String class.")`
Read `len` bytes as a string.
### `[readUInt16](#readUInt16)():[Int](../../int "Int - The standard Int type.")`
Read a 16-bit unsigned integer.
Endianness is specified by the `bigEndian` property.
### `[readUInt24](#readUInt24)():[Int](../../int "Int - The standard Int type.")`
Read a 24-bit unsigned integer.
Endianness is specified by the `bigEndian` property.
### `[readUntil](#readUntil)(end:[Int](../../int "Int - The standard Int type.")):[String](../../string "String - The basic String class.")`
Read a string until the character code specified by `end` occurs.
The final character is not included in the resulting string.
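A minimal sketch of the `readUntil` contract (stop at the first occurrence of the byte `end`, excluding it from the result), written in Python for illustration; `read_until` is a hypothetical helper, not part of the Haxe API:

```python
def read_until(data: bytes, end: int) -> str:
    # Collect bytes until the character code `end` occurs; the
    # terminating byte is consumed but not part of the result,
    # mirroring the readUntil contract described above.
    out = bytearray()
    for b in data:
        if b == end:
            break
        out.append(b)
    return out.decode("utf-8")

print(read_until(b"key=value", 0x3D))  # "key" (0x3D is '=')
```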
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/haxe/io/Input.html>

Error
======
package [haxe.io](index)
*Available on all platforms*
The possible IO errors that can occur.
Values
------
### `Blocked`
The IO is set into nonblocking mode and some data cannot be read or written.
### `Overflow`
An integer value is outside its allowed range.
### `OutsideBounds`
An operation on Bytes is outside of its valid range.
### `Custom(e:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."))`
Other errors.
FPHelper
=========
package [haxe.io](index)
*Available on all platforms*
Helper that converts between floating point and binary representation. Always works in little-endian encoding.
Static methods
--------------
### `static[doubleToI64](#doubleToI64)(v:[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Int64](../int64 "haxe.Int64 - A cross-platform signed 64-bit integer.")`
Returns an `Int64` whose bits are the binary representation of the double-precision IEEE float value. WARNING: for performance reasons, the same `Int64` value might be reused every time; copy its low/high values before calling again. It is nevertheless safe to use in a multithreaded environment.
### `static[floatToI32](#floatToI32)(f:[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Int](../../int "Int - The standard Int type.")`
*Available on cs, php, js, neko, cpp, macro, java, lua, python, flash*
### `static[floatToI32](#floatToI32)(f:[Single](../../single "Single - Single-precision IEEE 32bit float (4-byte).")):[Int](../../int "Int - The standard Int type.")`
*Available on hl*
### `static[i32ToFloat](#i32ToFloat)(i:[Int](../../int "Int - The standard Int type.")):[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
*Available on cs, php, js, neko, cpp, macro, java, lua, python, flash*
### `static[i32ToFloat](#i32ToFloat)(i:[Int](../../int "Int - The standard Int type.")):[Single](../../single "Single - Single-precision IEEE 32bit float (4-byte).")`
*Available on hl*
### `static[i64ToDouble](#i64ToDouble)(low:[Int](../../int "Int - The standard Int type."), high:[Int](../../int "Int - The standard Int type.")):[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
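The bit-level reinterpretation these helpers perform can be illustrated with Python's `struct` module; `float_to_i32` and `i32_to_float` below are hypothetical stand-ins for `FPHelper.floatToI32` and `FPHelper.i32ToFloat`, not the Haxe implementations:

```python
import struct

def float_to_i32(f: float) -> int:
    # Round to single precision, then reinterpret the 4 bytes as an int.
    return struct.unpack("<i", struct.pack("<f", f))[0]

def i32_to_float(i: int) -> float:
    # Reinterpret the 4 bytes of an int as an IEEE 754 single.
    return struct.unpack("<f", struct.pack("<i", i))[0]

print(hex(float_to_i32(1.0)))    # 0x3f800000: IEEE 754 encoding of 1.0
print(i32_to_float(0x3F800000))  # 1.0
```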
Float32Array([Float32ArrayData](float32arraydata "haxe.io.Float32ArrayData"))
==============================================================================
package [haxe.io](index)
*Available on all platforms*
Static variables
----------------
### `staticinlineread only[BYTES_PER_ELEMENT](#BYTES_PER_ELEMENT):[Int](../../int "Int - The standard Int type.") = 4`
Static methods
--------------
### `static[fromArray](#fromArray)(a:[Array](../../array "Array")<[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")>, pos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[Float32Array](float32array "haxe.io.Float32Array")`
### `static[fromBytes](#fromBytes)(bytes:[Bytes](bytes "haxe.io.Bytes"), bytePos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[Float32Array](float32array "haxe.io.Float32Array")`
### `static[fromData](#fromData)(d:[Float32ArrayData](float32arraydata "haxe.io.Float32ArrayData")):[Float32Array](float32array "haxe.io.Float32Array")`
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
### `read only[view](#view):[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
Methods
-------
### `inline[get](#get)(index:[Int](../../int "Int - The standard Int type.")):[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
### `inline[getData](#getData)():[Float32ArrayData](float32arraydata "haxe.io.Float32ArrayData")`
### `inline[get_view](#get_view)():[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
### `inline[set](#set)(index:[Int](../../int "Int - The standard Int type."), value:[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
### `inline[sub](#sub)(begin:[Int](../../int "Int - The standard Int type."), ?length:[Int](../../int "Int - The standard Int type.")):[Float32Array](float32array "haxe.io.Float32Array")`
### `inline[subarray](#subarray)(?begin:[Int](../../int "Int - The standard Int type."), ?end:[Int](../../int "Int - The standard Int type.")):[Float32Array](float32array "haxe.io.Float32Array")`
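One practical consequence of the 4-byte element size: values stored through `set` are rounded to single precision. The Python sketch below reproduces that rounding with `struct` (an illustration of IEEE 754 behaviour, not Haxe code):

```python
import struct

def to_single(x: float) -> float:
    # Round a double to the nearest 32-bit float, as storing it
    # in a Float32Array element would.
    return struct.unpack("<f", struct.pack("<f", x))[0]

stored = to_single(0.1)
print(stored == 0.1)             # False: 0.1 is not exactly representable
print(abs(stored - 0.1) < 1e-7)  # True: but the rounding error is tiny
```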
Float32ArrayData
=================
package [haxe.io](index)
import [haxe.io.Float32Array](float32array)
*Available on all platforms*
#### flash, lua, php, cs, java, cpp, neko, hl, python, macro
*alias for* `[haxe.io.ArrayBufferViewData](arraybufferviewdata "haxe.io.ArrayBufferViewData")`
#### js
*alias for* `[js.lib.Float32Array](https://api.haxe.org/js/lib/Float32Array.html "js.lib.Float32Array - The Float32Array typed array represents an array of 32-bit floating point numbers (corresponding to the C float data type) in the platform byte order.")`
Float64ArrayData
=================
package [haxe.io](index)
import [haxe.io.Float64Array](float64array)
*Available on all platforms*
#### flash, lua, php, cs, java, cpp, neko, hl, python, macro
*alias for* `[haxe.io.ArrayBufferViewData](arraybufferviewdata "haxe.io.ArrayBufferViewData")`
#### js
*alias for* `[js.lib.Float64Array](https://api.haxe.org/js/lib/Float64Array.html "js.lib.Float64Array - The Float64Array typed array represents an array of 64-bit floating point numbers (corresponding to the C double data type) in the platform byte order.")`
Int32Array([Int32ArrayData](int32arraydata "haxe.io.Int32ArrayData"))
======================================================================
package [haxe.io](index)
*Available on all platforms*
Static variables
----------------
### `staticinlineread only[BYTES_PER_ELEMENT](#BYTES_PER_ELEMENT):[Int](../../int "Int - The standard Int type.") = 4`
Static methods
--------------
### `static[fromArray](#fromArray)(a:[Array](../../array "Array")<[Int](../../int "Int - The standard Int type.")>, pos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[Int32Array](int32array "haxe.io.Int32Array")`
### `static[fromBytes](#fromBytes)(bytes:[Bytes](bytes "haxe.io.Bytes"), bytePos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[Int32Array](int32array "haxe.io.Int32Array")`
### `static[fromData](#fromData)(d:[Int32ArrayData](int32arraydata "haxe.io.Int32ArrayData")):[Int32Array](int32array "haxe.io.Int32Array")`
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
### `read only[view](#view):[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
Methods
-------
### `inline[get](#get)(index:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
### `inline[getData](#getData)():[Int32ArrayData](int32arraydata "haxe.io.Int32ArrayData")`
### `inline[get_view](#get_view)():[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
### `inline[set](#set)(index:[Int](../../int "Int - The standard Int type."), value:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
### `inline[sub](#sub)(begin:[Int](../../int "Int - The standard Int type."), ?length:[Int](../../int "Int - The standard Int type.")):[Int32Array](int32array "haxe.io.Int32Array")`
### `inline[subarray](#subarray)(?begin:[Int](../../int "Int - The standard Int type."), ?end:[Int](../../int "Int - The standard Int type.")):[Int32Array](int32array "haxe.io.Int32Array")`
Int32ArrayData
===============
package [haxe.io](index)
import [haxe.io.Int32Array](int32array)
*Available on all platforms*
#### flash, lua, php, cs, java, cpp, neko, hl, python, macro
*alias for* `[haxe.io.ArrayBufferViewData](arraybufferviewdata "haxe.io.ArrayBufferViewData")`
#### js
*alias for* `[js.lib.Int32Array](https://api.haxe.org/js/lib/Int32Array.html "js.lib.Int32Array - The Int32Array typed array represents an array of twos-complement 32-bit signed integers in the platform byte order.")`
Output
=======
package [haxe.io](index)
extended by [NativeOutput](https://api.haxe.org/cs/io/NativeOutput.html "cs.io.NativeOutput"), [BytesOutput](bytesoutput "haxe.io.BytesOutput"), [NativeOutput](https://api.haxe.org/java/io/NativeOutput.html "java.io.NativeOutput"), [NativeOutput](https://api.haxe.org/python/io/NativeOutput.html "python.io.NativeOutput"), [FileOutput](https://api.haxe.org/sys/io/FileOutput.html "sys.io.FileOutput - Use sys.")
*Available on all platforms*
An Output is an abstract writer. A specific output implementation will only have to override the `writeByte` and maybe the `write`, `flush` and `close` methods. See `[File.write](https://api.haxe.org/cs/system/io/File.html#write)` and `[String.write](../../string#write)` for two ways of creating an Output.
Variables
---------
### `[bigEndian](#bigEndian):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Endianness (word byte order) used when writing numbers.
If `[true](../../bool)`, big-endian is used, otherwise little-endian is used.
Methods
-------
### `[close](#close)():[Void](../../void "Void - The standard Void type.")`
Close the output.
Behaviour while writing after calling this method is unspecified.
### `[flush](#flush)():[Void](../../void "Void - The standard Void type.")`
Flush any buffered data.
### `[prepare](#prepare)(nbytes:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Inform that we are about to write at least `nbytes` bytes.
The underlying implementation can allocate proper working space depending on this information, or simply ignore it. This call is not mandatory; it is a hint that is only used in some specific cases.
### `[write](#write)(s:[Bytes](bytes "haxe.io.Bytes")):[Void](../../void "Void - The standard Void type.")`
Write all bytes stored in `s`.
### `[writeByte](#writeByte)(c:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Write one byte.
### `[writeBytes](#writeBytes)(s:[Bytes](bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
Write `len` bytes from `s`, starting at the position specified by `pos`.
Returns the actual length of the written data, which can differ from `len`.
See `writeFullBytes` that tries to write the exact amount of specified bytes.
### `[writeDouble](#writeDouble)(x:[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Void](../../void "Void - The standard Void type.")`
Write `x` as 64-bit double-precision floating point number.
Endianness is specified by the `bigEndian` property.
### `[writeFloat](#writeFloat)(x:[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Void](../../void "Void - The standard Void type.")`
Write `x` as 32-bit floating point number.
Endianness is specified by the `bigEndian` property.
### `[writeFullBytes](#writeFullBytes)(s:[Bytes](bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Write `len` bytes from `s`, starting at the position specified by `pos`.
Unlike `writeBytes`, this method tries to write the exact `len` amount of bytes.
### `[writeInput](#writeInput)(i:[Input](input "haxe.io.Input - An Input is an abstract reader."), ?bufsize:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Read all available data from `i` and write it.
The `bufsize` optional argument specifies the size of chunks by which data is read and written. Its default value is 4096.
### `[writeInt16](#writeInt16)(x:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Write `x` as 16-bit signed integer.
Endianness is specified by the `bigEndian` property.
### `[writeInt24](#writeInt24)(x:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Write `x` as 24-bit signed integer.
Endianness is specified by the `bigEndian` property.
### `[writeInt32](#writeInt32)(x:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Write `x` as 32-bit signed integer.
Endianness is specified by the `bigEndian` property.
### `[writeInt8](#writeInt8)(x:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Write `x` as 8-bit signed integer.
### `[writeString](#writeString)(s:[String](../../string "String - The basic String class."), ?encoding:[Encoding](encoding "haxe.io.Encoding")):[Void](../../void "Void - The standard Void type.")`
Write the string `s`.
### `[writeUInt16](#writeUInt16)(x:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Write `x` as 16-bit unsigned integer.
Endianness is specified by the `bigEndian` property.
### `[writeUInt24](#writeUInt24)(x:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
Write `x` as 24-bit unsigned integer.
Endianness is specified by the `bigEndian` property.
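As with `Input`, the `bigEndian` flag decides which byte of a multi-byte write hits the stream first. A Python sketch of the two byte orders for a 16-bit unsigned write, using `struct` in place of `Output` purely for illustration:

```python
import struct

value = 0x1234
# bigEndian = true: most significant byte first.
print(struct.pack(">H", value).hex())  # 1234
# bigEndian = false: least significant byte first.
print(struct.pack("<H", value).hex())  # 3412
```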
Path
=====
package [haxe.io](index)
*Available on all platforms*
This class provides a convenient way of working with paths. It supports the common path formats:
* `directory1/directory2/filename.extension`
* `directory1\directory2\filename.extension`
Static methods
--------------
### `static[addTrailingSlash](#addTrailingSlash)(path:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Adds a trailing slash to `path`, if it does not have one already.
If the last slash in `path` is a backslash, a backslash is appended to `path`.
If the last slash in `path` is a slash, or if no slash is found, a slash is appended to `path`. In particular, this applies to the empty String `""`.
If `path` is `null`, the result is unspecified.
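The slash-vs-backslash rule above can be sketched in a few lines of Python (`add_trailing_slash` is a hypothetical re-implementation for illustration, not the Haxe function itself):

```python
def add_trailing_slash(path: str) -> str:
    # Append a backslash only when the last separator already in
    # the path is a backslash; otherwise append a slash. The empty
    # string also gets a slash, per the rule above.
    if path.endswith(("/", "\\")):
        return path
    if path.rfind("\\") > path.rfind("/"):
        return path + "\\"
    return path + "/"

print(add_trailing_slash("c:\\dir"))  # c:\dir\
print(add_trailing_slash("a/b"))      # a/b/
print(add_trailing_slash(""))         # /
```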
### `static[directory](#directory)(path:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Returns the directory of `path`.
If the directory is `null`, the empty String `""` is returned.
If `path` is `null`, the result is unspecified.
### `static[extension](#extension)(path:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Returns the extension of `path`.
If `path` has no extension, the empty String `""` is returned.
If `path` is `null`, the result is unspecified.
### `static[isAbsolute](#isAbsolute)(path:[String](../../string "String - The basic String class.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns `[true](../../bool)` if the path is an absolute path, and `[false](../../bool)` otherwise.
### `static[join](#join)(paths:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>):[String](../../string "String - The basic String class.")`
Joins all paths in `paths` together.
If `paths` is empty, the empty String `""` is returned. Otherwise the paths are joined with a slash between them.
If `paths` is `null`, the result is unspecified.
### `static[normalize](#normalize)(path:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Normalize a given `path` (e.g. turn `'/usr/local/../lib'` into `'/usr/lib'`).
Also replaces backslashes `\` with slashes `/` and afterwards turns multiple slashes into a single one.
If `path` is `null`, the result is unspecified.
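The documented example can be checked directly; Python's `posixpath.normpath` performs the same `..` collapsing, and a `replace` call mirrors the backslash conversion (an analogy for the common cases, not the Haxe implementation):

```python
import posixpath

def normalize(path: str) -> str:
    # Convert backslashes to slashes, then collapse '..' segments
    # and duplicate slashes -- the two steps described above.
    return posixpath.normpath(path.replace("\\", "/"))

print(normalize("/usr/local/../lib"))  # /usr/lib
print(normalize("a\\b//c"))            # a/b/c
```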
### `static[removeTrailingSlashes](#removeTrailingSlashes)(path:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Removes trailing slashes from `path`.
If `path` does not end with a `/` or `\`, `path` is returned unchanged.
Otherwise the substring of `path` excluding the trailing slashes or backslashes is returned.
If `path` is `null`, the result is unspecified.
### `static[withExtension](#withExtension)(path:[String](../../string "String - The basic String class."), ext:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>):[String](../../string "String - The basic String class.")`
Returns a String representation of `path` where the extension is `ext`.
If `path` has no extension, `ext` is added as extension.
If `path` or `ext` are `null`, the result is unspecified.
### `static[withoutDirectory](#withoutDirectory)(path:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Returns the String representation of `path` without the directory.
If `path` is `null`, the result is unspecified.
### `static[withoutExtension](#withoutExtension)(path:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Returns the String representation of `path` without the file extension.
If `path` is `null`, the result is unspecified.
Constructor
-----------
### `[new](#new)(path:[String](../../string "String - The basic String class."))`
Creates a new `[Path](path#Path)` instance by parsing `path`.
Path information can be retrieved by accessing the `dir`, `file` and `ext` properties.
Variables
---------
### `[backslash](#backslash):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
`[true](../../bool)` if the last directory separator is a backslash, `[false](../../bool)` otherwise.
### `[dir](#dir):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The directory.
This is the leading part of the path that is not part of the file name and the extension.
Does not end with a `/` or `\` separator.
If the path has no directory, the value is `null`.
### `[ext](#ext):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The file extension.
It is separated from the file name by a dot. This dot is not part of the extension.
If the path has no extension, the value is `null`.
### `[file](#file):[String](../../string "String - The basic String class.")`
The file name.
This is the part of the path between the directory and the extension.
If there is no file name, e.g. for `".htaccess"` or `"/dir/"`, the value is the empty String `""`.
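How `path` is split into the `dir`, `file` and `ext` properties can be sketched as follows; `parse_path` is a hypothetical Python illustration of the parsing rules above, not the Haxe constructor:

```python
def parse_path(path: str):
    # Split on the last separator (slash or backslash), then on the
    # last dot of the remainder; missing parts come back as None,
    # standing in for Haxe's null.
    slash = max(path.rfind("/"), path.rfind("\\"))
    dir_ = None if slash == -1 else path[:slash]
    rest = path[slash + 1:]
    dot = rest.rfind(".")
    if dot == -1:
        return dir_, rest, None          # no extension
    return dir_, rest[:dot], rest[dot + 1:]

print(parse_path("/dir/file.ext"))  # ('/dir', 'file', 'ext')
print(parse_path(".htaccess"))      # (None, '', 'htaccess')
print(parse_path("/dir/"))          # ('/dir', '', None)
```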
Methods
-------
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
Returns a String representation of `this` path.
If `this.[backslash](#backslash)` is `[true](../../bool)`, backslash is used as directory separator, otherwise slash is used. This only affects the separator between `this.[dir](#dir)` and `this.[file](#file)`.
If `this.[dir](#dir)` or `this.[ext](#ext)` is `null`, their representation is the empty String `""`.
Float64Array([Float64ArrayData](float64arraydata "haxe.io.Float64ArrayData"))
==============================================================================
package [haxe.io](index)
*Available on all platforms*
Static variables
----------------
### `staticinlineread only[BYTES_PER_ELEMENT](#BYTES_PER_ELEMENT):[Int](../../int "Int - The standard Int type.") = 8`
Static methods
--------------
### `static[fromArray](#fromArray)(a:[Array](../../array "Array")<[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")>, pos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[Float64Array](float64array "haxe.io.Float64Array")`
### `static[fromBytes](#fromBytes)(bytes:[Bytes](bytes "haxe.io.Bytes"), bytePos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[Float64Array](float64array "haxe.io.Float64Array")`
### `static[fromData](#fromData)(d:[Float64ArrayData](float64arraydata "haxe.io.Float64ArrayData")):[Float64Array](float64array "haxe.io.Float64Array")`
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
### `read only[view](#view):[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
Methods
-------
### `inline[get](#get)(index:[Int](../../int "Int - The standard Int type.")):[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
### `inline[getData](#getData)():[Float64ArrayData](float64arraydata "haxe.io.Float64ArrayData")`
### `inline[get_view](#get_view)():[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
### `inline[set](#set)(index:[Int](../../int "Int - The standard Int type."), value:[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")):[Float](../../float "Float - The standard Float type, this is a double-precision IEEE 64bit float.")`
### `inline[sub](#sub)(begin:[Int](../../int "Int - The standard Int type."), ?length:[Int](../../int "Int - The standard Int type.")):[Float64Array](float64array "haxe.io.Float64Array")`
### `inline[subarray](#subarray)(?begin:[Int](../../int "Int - The standard Int type."), ?end:[Int](../../int "Int - The standard Int type.")):[Float64Array](float64array "haxe.io.Float64Array")`
Mime([String](../../string "String - The basic String class."))
================================================================
package [haxe.io](index)
from [String](../../string "String - The basic String class.") to [String](../../string "String - The basic String class.")
*Available on all platforms*
HTML MimeType Enum
See also:
* <http://www.sitepoint.com/web-foundations/mime-types-complete-list/>
Variables
---------
### `inlineread only[ApplicationAcad](#ApplicationAcad):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/acad"`
### `inlineread only[ApplicationArj](#ApplicationArj):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/arj"`
### `inlineread only[ApplicationBase64](#ApplicationBase64):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/base64"`
### `inlineread only[ApplicationBinhex](#ApplicationBinhex):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/binhex"`
### `inlineread only[ApplicationBook](#ApplicationBook):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/book"`
### `inlineread only[ApplicationCdf](#ApplicationCdf):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/cdf"`
### `inlineread only[ApplicationClariscad](#ApplicationClariscad):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/clariscad"`
### `inlineread only[ApplicationCommonground](#ApplicationCommonground):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/commonground"`
### `inlineread only[ApplicationDrafting](#ApplicationDrafting):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/drafting"`
### `inlineread only[ApplicationDsptype](#ApplicationDsptype):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/dsptype"`
### `inlineread only[ApplicationDxf](#ApplicationDxf):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/dxf"`
### `inlineread only[ApplicationEnvoy](#ApplicationEnvoy):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/envoy"`
### `inlineread only[ApplicationExcel](#ApplicationExcel):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/excel"`
### `inlineread only[ApplicationFreeloader](#ApplicationFreeloader):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/freeloader"`
### `inlineread only[ApplicationFuturesplash](#ApplicationFuturesplash):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/futuresplash"`
### `inlineread only[ApplicationGnutar](#ApplicationGnutar):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/gnutar"`
### `inlineread only[ApplicationGroupwise](#ApplicationGroupwise):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/groupwise"`
### `inlineread only[ApplicationHlp](#ApplicationHlp):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/hlp"`
### `inlineread only[ApplicationHta](#ApplicationHta):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/hta"`
### `inlineread only[ApplicationIDeas](#ApplicationIDeas):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/i-deas"`
### `inlineread only[ApplicationIges](#ApplicationIges):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/iges"`
### `inlineread only[ApplicationInf](#ApplicationInf):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/inf"`
### `inlineread only[ApplicationJava](#ApplicationJava):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/java"`
### `inlineread only[ApplicationJavaByteCode](#ApplicationJavaByteCode):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/java-byte-code"`
### `inlineread only[ApplicationJavascript](#ApplicationJavascript):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/javascript"`
### `inlineread only[ApplicationJson](#ApplicationJson):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/json"`
### `inlineread only[ApplicationLzx](#ApplicationLzx):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/lzx"`
### `inlineread only[ApplicationMacBinary](#ApplicationMacBinary):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/mac-binary"`
### `inlineread only[ApplicationMacCompactpro](#ApplicationMacCompactpro):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/mac-compactpro"`
### `inlineread only[ApplicationMacbinary](#ApplicationMacbinary):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/macbinary"`
### `inlineread only[ApplicationMarc](#ApplicationMarc):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/marc"`
### `inlineread only[ApplicationMbedlet](#ApplicationMbedlet):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/mbedlet"`
### `inlineread only[ApplicationMcad](#ApplicationMcad):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/mcad"`
### `inlineread only[ApplicationMime](#ApplicationMime):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/mime"`
### `inlineread only[ApplicationMspowerpoint](#ApplicationMspowerpoint):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "application/mspowerpoint"`
### `inline read only [ApplicationMsword](#ApplicationMsword):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/msword"`
### `inline read only [ApplicationMswrite](#ApplicationMswrite):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/mswrite"`
### `inline read only [ApplicationNetmc](#ApplicationNetmc):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/netmc"`
### `inline read only [ApplicationOctetStream](#ApplicationOctetStream):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/octet-stream"`
### `inline read only [ApplicationOda](#ApplicationOda):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/oda"`
### `inline read only [ApplicationPdf](#ApplicationPdf):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/pdf"`
### `inline read only [ApplicationPkcs10](#ApplicationPkcs10):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/pkcs10"`
### `inline read only [ApplicationPkcs12](#ApplicationPkcs12):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/pkcs-12"`
### `inline read only [ApplicationPkcs7Mime](#ApplicationPkcs7Mime):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/pkcs7-mime"`
### `inline read only [ApplicationPkcs7Signature](#ApplicationPkcs7Signature):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/pkcs7-signature"`
### `inline read only [ApplicationPkcsCrl](#ApplicationPkcsCrl):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/pkcs-crl"`
### `inline read only [ApplicationPkixCert](#ApplicationPkixCert):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/pkix-cert"`
### `inline read only [ApplicationPostscript](#ApplicationPostscript):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/postscript"`
### `inline read only [ApplicationPro_eng](#ApplicationPro_eng):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/pro_eng"`
### `inline read only [ApplicationRingingTones](#ApplicationRingingTones):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/ringing-tones"`
### `inline read only [ApplicationRtf](#ApplicationRtf):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/rtf"`
### `inline read only [ApplicationSdp](#ApplicationSdp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/sdp"`
### `inline read only [ApplicationSea](#ApplicationSea):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/sea"`
### `inline read only [ApplicationSet](#ApplicationSet):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/set"`
### `inline read only [ApplicationSla](#ApplicationSla):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/sla"`
### `inline read only [ApplicationSmil](#ApplicationSmil):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/smil"`
### `inline read only [ApplicationSolids](#ApplicationSolids):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/solids"`
### `inline read only [ApplicationSounder](#ApplicationSounder):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/sounder"`
### `inline read only [ApplicationStep](#ApplicationStep):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/step"`
### `inline read only [ApplicationStreamingmedia](#ApplicationStreamingmedia):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/streamingmedia"`
### `inline read only [ApplicationToolbook](#ApplicationToolbook):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/toolbook"`
### `inline read only [ApplicationVda](#ApplicationVda):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vda"`
### `inline read only [ApplicationVndFdf](#ApplicationVndFdf):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.fdf"`
### `inline read only [ApplicationVndHpHpgl](#ApplicationVndHpHpgl):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.hp-hpgl"`
### `inline read only [ApplicationVndHpPcl](#ApplicationVndHpPcl):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.hp-pcl"`
### `inline read only [ApplicationVndMsPkiCertstore](#ApplicationVndMsPkiCertstore):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.ms-pki.certstore"`
### `inline read only [ApplicationVndMsPkiPko](#ApplicationVndMsPkiPko):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.ms-pki.pko"`
### `inline read only [ApplicationVndMsPkiSeccat](#ApplicationVndMsPkiSeccat):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.ms-pki.seccat"`
### `inline read only [ApplicationVndMsPowerpoint](#ApplicationVndMsPowerpoint):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.ms-powerpoint"`
### `inline read only [ApplicationVndMsProject](#ApplicationVndMsProject):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.ms-project"`
### `inline read only [ApplicationVndNokiaConfigurationMessage](#ApplicationVndNokiaConfigurationMessage):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.nokia.configuration-message"`
### `inline read only [ApplicationVndRnRealmedia](#ApplicationVndRnRealmedia):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.rn-realmedia"`
### `inline read only [ApplicationVndRnRealplayer](#ApplicationVndRnRealplayer):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.rn-realplayer"`
### `inline read only [ApplicationVndWapWmlc](#ApplicationVndWapWmlc):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.wap.wmlc"`
### `inline read only [ApplicationVndWapWmlscriptc](#ApplicationVndWapWmlscriptc):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.wap.wmlscriptc"`
### `inline read only [ApplicationVndXara](#ApplicationVndXara):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vnd.xara"`
### `inline read only [ApplicationVocaltecMediaDesc](#ApplicationVocaltecMediaDesc):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vocaltec-media-desc"`
### `inline read only [ApplicationVocaltecMediaFile](#ApplicationVocaltecMediaFile):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/vocaltec-media-file"`
### `inline read only [ApplicationWordperfect](#ApplicationWordperfect):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/wordperfect"`
### `inline read only [ApplicationWordperfect60](#ApplicationWordperfect60):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/wordperfect6.0"`
### `inline read only [ApplicationWordperfect61](#ApplicationWordperfect61):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/wordperfect6.1"`
### `inline read only [ApplicationX123](#ApplicationX123):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-123"`
### `inline read only [ApplicationXAim](#ApplicationXAim):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-aim"`
### `inline read only [ApplicationXAuthorwareBin](#ApplicationXAuthorwareBin):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-authorware-bin"`
### `inline read only [ApplicationXAuthorwareMap](#ApplicationXAuthorwareMap):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-authorware-map"`
### `inline read only [ApplicationXAuthorwareSeg](#ApplicationXAuthorwareSeg):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-authorware-seg"`
### `inline read only [ApplicationXBcpio](#ApplicationXBcpio):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-bcpio"`
### `inline read only [ApplicationXBinary](#ApplicationXBinary):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-binary"`
### `inline read only [ApplicationXBsh](#ApplicationXBsh):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-bsh"`
### `inline read only [ApplicationXBytecodeElisp](#ApplicationXBytecodeElisp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-bytecode.elisp (compiled elisp)"`
### `inline read only [ApplicationXBytecodePython](#ApplicationXBytecodePython):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-bytecode.python"`
### `inline read only [ApplicationXBzip](#ApplicationXBzip):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-bzip"`
### `inline read only [ApplicationXBzip2](#ApplicationXBzip2):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-bzip2"`
### `inline read only [ApplicationXCdf](#ApplicationXCdf):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-cdf"`
### `inline read only [ApplicationXCdlink](#ApplicationXCdlink):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-cdlink"`
### `inline read only [ApplicationXChat](#ApplicationXChat):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-chat"`
### `inline read only [ApplicationXCmuRaster](#ApplicationXCmuRaster):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-cmu-raster"`
### `inline read only [ApplicationXCocoa](#ApplicationXCocoa):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-cocoa"`
### `inline read only [ApplicationXCompress](#ApplicationXCompress):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-compress"`
### `inline read only [ApplicationXCompressed](#ApplicationXCompressed):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-compressed"`
### `inline read only [ApplicationXConference](#ApplicationXConference):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-conference"`
### `inline read only [ApplicationXCpio](#ApplicationXCpio):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-cpio"`
### `inline read only [ApplicationXCsh](#ApplicationXCsh):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-csh"`
### `inline read only [ApplicationXDeepv](#ApplicationXDeepv):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-deepv"`
### `inline read only [ApplicationXDirector](#ApplicationXDirector):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-director"`
### `inline read only [ApplicationXDvi](#ApplicationXDvi):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-dvi"`
### `inline read only [ApplicationXEnvoy](#ApplicationXEnvoy):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-envoy"`
### `inline read only [ApplicationXEsrehber](#ApplicationXEsrehber):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-esrehber"`
### `inline read only [ApplicationXFreelance](#ApplicationXFreelance):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-freelance"`
### `inline read only [ApplicationXGsp](#ApplicationXGsp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-gsp"`
### `inline read only [ApplicationXGss](#ApplicationXGss):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-gss"`
### `inline read only [ApplicationXGtar](#ApplicationXGtar):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-gtar"`
### `inline read only [ApplicationXGzip](#ApplicationXGzip):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-gzip"`
### `inline read only [ApplicationXHdf](#ApplicationXHdf):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-hdf"`
### `inline read only [ApplicationXHelpfile](#ApplicationXHelpfile):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-helpfile"`
### `inline read only [ApplicationXHttpdImap](#ApplicationXHttpdImap):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-httpd-imap"`
### `inline read only [ApplicationXIma](#ApplicationXIma):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-ima"`
### `inline read only [ApplicationXInternettSignup](#ApplicationXInternettSignup):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-internett-signup"`
### `inline read only [ApplicationXInventor](#ApplicationXInventor):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-inventor"`
### `inline read only [ApplicationXIp2](#ApplicationXIp2):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-ip2"`
### `inline read only [ApplicationXJavaClass](#ApplicationXJavaClass):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-java-class"`
### `inline read only [ApplicationXJavaCommerce](#ApplicationXJavaCommerce):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-java-commerce"`
### `inline read only [ApplicationXKoan](#ApplicationXKoan):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-koan"`
### `inline read only [ApplicationXKsh](#ApplicationXKsh):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-ksh"`
### `inline read only [ApplicationXLatex](#ApplicationXLatex):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-latex"`
### `inline read only [ApplicationXLisp](#ApplicationXLisp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-lisp"`
### `inline read only [ApplicationXLivescreen](#ApplicationXLivescreen):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-livescreen"`
### `inline read only [ApplicationXLotus](#ApplicationXLotus):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-lotus"`
### `inline read only [ApplicationXLotusscreencam](#ApplicationXLotusscreencam):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-lotusscreencam"`
### `inline read only [ApplicationXMacbinary](#ApplicationXMacbinary):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-macbinary"`
### `inline read only [ApplicationXMagicCapPackage10](#ApplicationXMagicCapPackage10):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-magic-cap-package-1.0"`
### `inline read only [ApplicationXMif](#ApplicationXMif):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-mif"`
### `inline read only [ApplicationXMixTransfer](#ApplicationXMixTransfer):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-mix-transfer"`
### `inline read only [ApplicationXMplayer2](#ApplicationXMplayer2):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-mplayer2"`
### `inline read only [ApplicationXNaviAnimation](#ApplicationXNaviAnimation):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-navi-animation"`
### `inline read only [ApplicationXNavidoc](#ApplicationXNavidoc):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-navidoc"`
### `inline read only [ApplicationXNavimap](#ApplicationXNavimap):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-navimap"`
### `inline read only [ApplicationXNetcdf](#ApplicationXNetcdf):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-netcdf"`
### `inline read only [ApplicationXNewtonCompatiblePkg](#ApplicationXNewtonCompatiblePkg):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-newton-compatible-pkg"`
### `inline read only [ApplicationXNokia9000CommunicatorAddOnSoftware](#ApplicationXNokia9000CommunicatorAddOnSoftware):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-nokia-9000-communicator-add-on-software"`
### `inline read only [ApplicationXOmc](#ApplicationXOmc):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-omc"`
### `inline read only [ApplicationXOmcdatamaker](#ApplicationXOmcdatamaker):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-omcdatamaker"`
### `inline read only [ApplicationXOmcregerator](#ApplicationXOmcregerator):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-omcregerator"`
### `inline read only [ApplicationXPagemaker](#ApplicationXPagemaker):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-pagemaker"`
### `inline read only [ApplicationXPixclscript](#ApplicationXPixclscript):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-pixclscript"`
### `inline read only [ApplicationXPkcs7Certificates](#ApplicationXPkcs7Certificates):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-pkcs7-certificates"`
### `inline read only [ApplicationXPkcs7Certreqresp](#ApplicationXPkcs7Certreqresp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-pkcs7-certreqresp"`
### `inline read only [ApplicationXPkcs7Signature](#ApplicationXPkcs7Signature):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-pkcs7-signature"`
### `inline read only [ApplicationXPortableAnymap](#ApplicationXPortableAnymap):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-portable-anymap"`
### `inline read only [ApplicationXProject](#ApplicationXProject):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-project"`
### `inline read only [ApplicationXQpro](#ApplicationXQpro):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-qpro"`
### `inline read only [ApplicationXSeelogo](#ApplicationXSeelogo):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-seelogo"`
### `inline read only [ApplicationXShockwaveFlash](#ApplicationXShockwaveFlash):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-shockwave-flash"`
### `inline read only [ApplicationXSit](#ApplicationXSit):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-sit"`
### `inline read only [ApplicationXSprite](#ApplicationXSprite):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-sprite"`
### `inline read only [ApplicationXSv4cpio](#ApplicationXSv4cpio):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-sv4cpio"`
### `inline read only [ApplicationXSv4crc](#ApplicationXSv4crc):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-sv4crc"`
### `inline read only [ApplicationXTar](#ApplicationXTar):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-tar"`
### `inline read only [ApplicationXTbook](#ApplicationXTbook):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-tbook"`
### `inline read only [ApplicationXTcl](#ApplicationXTcl):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-tcl"`
### `inline read only [ApplicationXTex](#ApplicationXTex):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-tex"`
### `inline read only [ApplicationXTexinfo](#ApplicationXTexinfo):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-texinfo"`
### `inline read only [ApplicationXTroff](#ApplicationXTroff):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-troff"`
### `inline read only [ApplicationXTroffMan](#ApplicationXTroffMan):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-troff-man"`
### `inline read only [ApplicationXTroffMe](#ApplicationXTroffMe):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-troff-me"`
### `inline read only [ApplicationXTroffMs](#ApplicationXTroffMs):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-troff-ms"`
### `inline read only [ApplicationXTroffMsvideo](#ApplicationXTroffMsvideo):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-troff-msvideo"`
### `inline read only [ApplicationXUstar](#ApplicationXUstar):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-ustar"`
### `inline read only [ApplicationXVisio](#ApplicationXVisio):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-visio"`
### `inline read only [ApplicationXVndAudioexplosionMzz](#ApplicationXVndAudioexplosionMzz):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-vnd.audioexplosion.mzz"`
### `inline read only [ApplicationXVndLsXpix](#ApplicationXVndLsXpix):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-vnd.ls-xpix"`
### `inline read only [ApplicationXVrml](#ApplicationXVrml):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-vrml"`
### `inline read only [ApplicationXWaisSource](#ApplicationXWaisSource):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-wais-source"`
### `inline read only [ApplicationXWintalk](#ApplicationXWintalk):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-wintalk"`
### `inline read only [ApplicationXWorld](#ApplicationXWorld):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-world"`
### `inline read only [ApplicationXX509CaCert](#ApplicationXX509CaCert):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/x-x509-ca-cert"`
### `inline read only [ApplicationXml](#ApplicationXml):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "application/xml"`
### `inline read only [AudioAiff](#AudioAiff):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/aiff"`
### `inline read only [AudioBasic](#AudioBasic):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/basic"`
### `inline read only [AudioIt](#AudioIt):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/it"`
### `inline read only [AudioMake](#AudioMake):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/make"`
### `inline read only [AudioMid](#AudioMid):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/mid"`
### `inline read only [AudioMidi](#AudioMidi):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/midi"`
### `inline read only [AudioMod](#AudioMod):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/mod"`
### `inline read only [AudioMpeg](#AudioMpeg):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/mpeg"`
### `inline read only [AudioMpeg3](#AudioMpeg3):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/mpeg3"`
### `inline read only [AudioNspaudio](#AudioNspaudio):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/nspaudio"`
### `inline read only [AudioS3m](#AudioS3m):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/s3m"`
### `inline read only [AudioTspAudio](#AudioTspAudio):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/tsp-audio"`
### `inline read only [AudioVndQcelp](#AudioVndQcelp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/vnd.qcelp"`
### `inline read only [AudioVoc](#AudioVoc):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/voc"`
### `inline read only [AudioVoxware](#AudioVoxware):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/voxware"`
### `inline read only [AudioWav](#AudioWav):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/wav"`
### `inline read only [AudioXAiff](#AudioXAiff):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-aiff"`
### `inline read only [AudioXGsm](#AudioXGsm):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-gsm"`
### `inline read only [AudioXJam](#AudioXJam):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-jam"`
### `inline read only [AudioXLiveaudio](#AudioXLiveaudio):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-liveaudio"`
### `inline read only [AudioXMpequrl](#AudioXMpequrl):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-mpequrl"`
### `inline read only [AudioXPnRealaudio](#AudioXPnRealaudio):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-pn-realaudio"`
### `inline read only [AudioXPnRealaudioPlugin](#AudioXPnRealaudioPlugin):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-pn-realaudio-plugin"`
### `inline read only [AudioXPsid](#AudioXPsid):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-psid"`
### `inline read only [AudioXTwinvq](#AudioXTwinvq):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-twinvq"`
### `inline read only [AudioXTwinvqPlugin](#AudioXTwinvqPlugin):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-twinvq-plugin"`
### `inline read only [AudioXVndAudioexplosionMjuicemediafile](#AudioXVndAudioexplosionMjuicemediafile):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/x-vnd.audioexplosion.mjuicemediafile"`
### `inline read only [AudioXm](#AudioXm):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "audio/xm"`
### `inline read only [ChemicalXPdb](#ChemicalXPdb):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "chemical/x-pdb"`
### `inline read only [DrawingXDwf](#DrawingXDwf):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "drawing/x-dwf (old)"`
### `inline read only [IWorldIVrml](#IWorldIVrml):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "i-world/i-vrml"`
### `inline read only [ImageBmp](#ImageBmp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/bmp"`
### `inline read only [ImageCmuRaster](#ImageCmuRaster):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/cmu-raster"`
### `inline read only [ImageFif](#ImageFif):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/fif"`
### `inline read only [ImageFlorian](#ImageFlorian):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/florian"`
### `inline read only [ImageG3fax](#ImageG3fax):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/g3fax"`
### `inline read only [ImageGif](#ImageGif):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/gif"`
### `inline read only [ImageIef](#ImageIef):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/ief"`
### `inline read only [ImageJpeg](#ImageJpeg):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/jpeg"`
### `inline read only [ImageJutvision](#ImageJutvision):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/jutvision"`
### `inline read only [ImageNaplps](#ImageNaplps):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/naplps"`
### `inline read only [ImagePict](#ImagePict):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/pict"`
### `inline read only [ImagePng](#ImagePng):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/png"`
### `inline read only [ImageTiff](#ImageTiff):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/tiff"`
### `inline read only [ImageVasa](#ImageVasa):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/vasa"`
### `inline read only [ImageVndDwg](#ImageVndDwg):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/vnd.dwg"`
### `inline read only [ImageVndFpx](#ImageVndFpx):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/vnd.fpx"`
### `inline read only [ImageVndRnRealflash](#ImageVndRnRealflash):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/vnd.rn-realflash"`
### `inline read only [ImageVndRnRealpix](#ImageVndRnRealpix):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/vnd.rn-realpix"`
### `inline read only [ImageVndWapWbmp](#ImageVndWapWbmp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/vnd.wap.wbmp"`
### `inline read only [ImageVndXiff](#ImageVndXiff):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/vnd.xiff"`
### `inline read only [ImageWebp](#ImageWebp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/webp"`
### `inline read only [ImageXIcon](#ImageXIcon):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-icon"`
### `inline read only [ImageXJg](#ImageXJg):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-jg"`
### `inline read only [ImageXJps](#ImageXJps):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-jps"`
### `inline read only [ImageXNiff](#ImageXNiff):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-niff"`
### `inline read only [ImageXPcx](#ImageXPcx):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-pcx"`
### `inline read only [ImageXPict](#ImageXPict):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-pict"`
### `inline read only [ImageXPortableBitmap](#ImageXPortableBitmap):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-portable-bitmap"`
### `inline read only [ImageXPortableGraymap](#ImageXPortableGraymap):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-portable-graymap"`
### `inline read only [ImageXPortablePixmap](#ImageXPortablePixmap):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-portable-pixmap"`
### `inline read only [ImageXQuicktime](#ImageXQuicktime):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-quicktime"`
### `inline read only [ImageXRgb](#ImageXRgb):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-rgb"`
### `inline read only [ImageXWindowsBmp](#ImageXWindowsBmp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-windows-bmp"`
### `inline read only [ImageXXbitmap](#ImageXXbitmap):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-xbitmap"`
### `inline read only [ImageXXpixmap](#ImageXXpixmap):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-xpixmap"`
### `inline read only [ImageXXwd](#ImageXXwd):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "image/x-xwd"`
### `inline read only [MessageRfc822](#MessageRfc822):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "message/rfc822"`
### `inline read only [ModelVrml](#ModelVrml):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "model/vrml"`
### `inline read only [ModelXPov](#ModelXPov):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "model/x-pov"`
### `inline read only [MultipartXZip](#MultipartXZip):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "multipart/x-zip"`
### `inline read only [PaleovuXPv](#PaleovuXPv):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "paleovu/x-pv"`
### `inline read only [TextAsp](#TextAsp):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/asp"`
### `inline read only [TextCss](#TextCss):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/css"`
### `inline read only [TextHtml](#TextHtml):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/html"`
### `inline read only [TextJavascript](#TextJavascript):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/javascript"`
### `inline read only [TextPascal](#TextPascal):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/pascal"`
### `inline read only [TextPlain](#TextPlain):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/plain"`
### `inline read only [TextRichtext](#TextRichtext):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/richtext"`
### `inline read only [TextScriplet](#TextScriplet):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/scriplet"`
### `inline read only [TextTabSeparatedValues](#TextTabSeparatedValues):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/tab-separated-values"`
### `inline read only [TextUriList](#TextUriList):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/uri-list"`
### `inline read only [TextVndAbc](#TextVndAbc):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/vnd.abc"`
### `inline read only [TextVndFmiFlexstor](#TextVndFmiFlexstor):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/vnd.fmi.flexstor"`
### `inline read only [TextVndWapWml](#TextVndWapWml):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/vnd.wap.wml"`
### `inline read only [TextVndWapWmlscript](#TextVndWapWmlscript):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/vnd.wap.wmlscript"`
### `inline read only [TextWebviewhtml](#TextWebviewhtml):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/webviewhtml"`
### `inline read only [TextXAsm](#TextXAsm):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/x-asm"`
### `inline read only [TextXAudiosoftIntra](#TextXAudiosoftIntra):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/x-audiosoft-intra"`
### `inline read only [TextXC](#TextXC):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/x-c"`
### `inline read only [TextXComponent](#TextXComponent):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/x-component"`
### `inline read only [TextXFortran](#TextXFortran):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/x-fortran"`
### `inline read only [TextXLaAsf](#TextXLaAsf):[Mime](mime "haxe.io.Mime - HTML MimeType Enum") = "text/x-la-asf"`
### `inlineread only[TextXPascal](#TextXPascal):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-pascal"`
### `inlineread only[TextXScript](#TextXScript):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-script"`
### `inlineread only[TextXScriptElisp](#TextXScriptElisp):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-script.elisp"`
### `inlineread only[TextXScriptPhyton](#TextXScriptPhyton):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-script.phyton"`
### `inlineread only[TextXScriptRexx](#TextXScriptRexx):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-script.rexx"`
### `inlineread only[TextXScriptTcsh](#TextXScriptTcsh):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-script.tcsh"`
### `inlineread only[TextXScriptZsh](#TextXScriptZsh):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-script.zsh"`
### `inlineread only[TextXServerParsedHtml](#TextXServerParsedHtml):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-server-parsed-html"`
### `inlineread only[TextXSetext](#TextXSetext):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-setext"`
### `inlineread only[TextXSpeech](#TextXSpeech):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-speech"`
### `inlineread only[TextXUil](#TextXUil):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-uil"`
### `inlineread only[TextXUuencode](#TextXUuencode):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-uuencode"`
### `inlineread only[TextXVcalendar](#TextXVcalendar):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "text/x-vcalendar"`
### `inlineread only[VideoAnimaflex](#VideoAnimaflex):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/animaflex"`
### `inlineread only[VideoAvi](#VideoAvi):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/avi"`
### `inlineread only[VideoAvsVideo](#VideoAvsVideo):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/avs-video"`
### `inlineread only[VideoDl](#VideoDl):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/dl"`
### `inlineread only[VideoFli](#VideoFli):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/fli"`
### `inlineread only[VideoGl](#VideoGl):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/gl"`
### `inlineread only[VideoMpeg](#VideoMpeg):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/mpeg"`
### `inlineread only[VideoMsvideo](#VideoMsvideo):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/msvideo"`
### `inlineread only[VideoQuicktime](#VideoQuicktime):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/quicktime"`
### `inlineread only[VideoVdo](#VideoVdo):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/vdo"`
### `inlineread only[VideoVivo](#VideoVivo):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/vivo"`
### `inlineread only[VideoVndRnRealvideo](#VideoVndRnRealvideo):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/vnd.rn-realvideo"`
### `inlineread only[VideoVosaic](#VideoVosaic):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/vosaic"`
### `inlineread only[VideoXAmtDemorun](#VideoXAmtDemorun):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/x-amt-demorun"`
### `inlineread only[VideoXAmtShowrun](#VideoXAmtShowrun):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/x-amt-showrun"`
### `inlineread only[VideoXAtomic3dFeature](#VideoXAtomic3dFeature):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/x-atomic3d-feature"`
### `inlineread only[VideoXDv](#VideoXDv):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/x-dv"`
### `inlineread only[VideoXIsvideo](#VideoXIsvideo):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/x-isvideo"`
### `inlineread only[VideoXMotionJpeg](#VideoXMotionJpeg):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/x-motion-jpeg"`
### `inlineread only[VideoXMsAsf](#VideoXMsAsf):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/x-ms-asf"`
### `inlineread only[VideoXMsvideo](#VideoXMsvideo):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/x-msvideo"`
### `inlineread only[VideoXQtc](#VideoXQtc):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/x-qtc"`
### `inlineread only[VideoXSgiMovie](#VideoXSgiMovie):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "video/x-sgi-movie"`
### `inlineread only[WindowsMetafile](#WindowsMetafile):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "windows/metafile"`
### `inlineread only[WwwMime](#WwwMime):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "www/mime"`
### `inlineread only[XConferenceXCooltalk](#XConferenceXCooltalk):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "x-conference/x-cooltalk"`
### `inlineread only[XWorldX3dmf](#XWorldX3dmf):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "x-world/x-3dmf"`
### `inlineread only[XWorldXVrt](#XWorldXVrt):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "x-world/x-vrt"`
### `inlineread only[XglDrawing](#XglDrawing):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "xgl/drawing"`
### `inlineread only[XglMovie](#XglMovie):[Mime](mime "haxe.io.Mime - HTML MimeType EnumSee also:http://www.") = "xgl/movie"`
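`Mime` is an enum abstract over `String`, so its values can be used anywhere a plain string is expected. A minimal sketch of typical usage (the class name `Main` is illustrative):

```haxe
import haxe.io.Mime;

class Main {
    static function main() {
        // Enum-abstract values resolve from the expected type.
        var m:Mime = TextHtml;
        // The implicit to-String cast lets it be interpolated directly.
        trace('Content-Type: $m'); // Content-Type: text/html
    }
}
```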
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/haxe/io/Mime.html>

Scheme([String](../../string "String - The basic String class."))
==================================================================
package [haxe.io](index)
from [String](../../string "String - The basic String class.") to [String](../../string "String - The basic String class.")
*Available on all platforms*
A scheme consists of a sequence of characters beginning with a letter and followed by any combination of letters, digits, plus (`+`), period (`.`), or hyphen (`-`).
Although schemes are case-insensitive, the canonical form is lowercase and documents that specify schemes must do so with lowercase letters. It is followed by a colon (`:`).
Variables
---------
### `inline read only[Data](#Data):[Scheme](scheme "haxe.io.Scheme") = "data"`
### `inline read only[File](#File):[Scheme](scheme "haxe.io.Scheme") = "file"`
### `inline read only[Ftp](#Ftp):[Scheme](scheme "haxe.io.Scheme") = "ftp"`
### `inline read only[Http](#Http):[Scheme](scheme "haxe.io.Scheme") = "http"`
### `inline read only[Https](#Https):[Scheme](scheme "haxe.io.Scheme") = "https"`
### `inline read only[MailTo](#MailTo):[Scheme](scheme "haxe.io.Scheme") = "mailto"`
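Because `Scheme` casts implicitly from and to `String`, values can be built either from the predefined constants or from arbitrary strings. A minimal sketch (class name `Main` is illustrative):

```haxe
import haxe.io.Scheme;

class Main {
    static function main() {
        var s:Scheme = Https;        // predefined constant
        var t:Scheme = "ftp";        // implicit cast from String
        trace('$s://example.com/');  // https://example.com/
        trace('$t://example.com/');  // ftp://example.com/
    }
}
```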
<https://api.haxe.org/haxe/io/Scheme.html>

StringInput
============
package [haxe.io](index)
extends [BytesInput](bytesinput "haxe.io.BytesInput") › [Input](input "haxe.io.Input - An Input is an abstract reader.")
*Available on all platforms*
Constructor
-----------
### `[new](#new)(s:[String](../../string "String - The basic String class."))`
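Since `StringInput` extends `BytesInput`, a string can be consumed through the full `haxe.io.Input` API. A minimal sketch (class name `Main` is illustrative):

```haxe
import haxe.io.StringInput;

class Main {
    static function main() {
        // Wrap a String so it can be read like any other Input.
        var input = new StringInput("hello\nworld");
        trace(input.readLine()); // hello
        trace(input.readLine()); // world
    }
}
```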
<https://api.haxe.org/haxe/io/StringInput.html>

UInt16Array([UInt16ArrayData](uint16arraydata "haxe.io.UInt16ArrayData"))
==========================================================================
package [haxe.io](index)
*Available on all platforms*
Static variables
----------------
### `static inline read only[BYTES_PER_ELEMENT](#BYTES_PER_ELEMENT):[Int](../../int "Int - The standard Int type.") = 2`
Static methods
--------------
### `static[fromArray](#fromArray)(a:[Array](../../array "Array")<[Int](../../int "Int - The standard Int type.")>, pos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[UInt16Array](uint16array "haxe.io.UInt16Array")`
### `static[fromBytes](#fromBytes)(bytes:[Bytes](bytes "haxe.io.Bytes"), bytePos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[UInt16Array](uint16array "haxe.io.UInt16Array")`
### `static[fromData](#fromData)(d:[UInt16ArrayData](uint16arraydata "haxe.io.UInt16ArrayData")):[UInt16Array](uint16array "haxe.io.UInt16Array")`
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
### `read only[view](#view):[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
Methods
-------
### `inline[get](#get)(index:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
### `inline[getData](#getData)():[UInt16ArrayData](uint16arraydata "haxe.io.UInt16ArrayData")`
### `inline[get_view](#get_view)():[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
### `inline[set](#set)(index:[Int](../../int "Int - The standard Int type."), value:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
### `inline[sub](#sub)(begin:[Int](../../int "Int - The standard Int type."), ?length:[Int](../../int "Int - The standard Int type.")):[UInt16Array](uint16array "haxe.io.UInt16Array")`
### `inline[subarray](#subarray)(?begin:[Int](../../int "Int - The standard Int type."), ?end:[Int](../../int "Int - The standard Int type.")):[UInt16Array](uint16array "haxe.io.UInt16Array")`
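A minimal sketch of constructing and reading a `UInt16Array` (class name `Main` is illustrative). Since each element occupies 16 bits, stored values wrap modulo 2^16:

```haxe
import haxe.io.UInt16Array;

class Main {
    static function main() {
        var a = UInt16Array.fromArray([1, 2, 70000]);
        trace(a.length); // 3
        trace(a.get(2)); // 4464, i.e. 70000 & 0xFFFF
        a.set(0, 42);
        trace(a.get(0)); // 42
        // sub/subarray return new views over a slice of the elements.
        trace(a.sub(0, 2).length); // 2
    }
}
```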
<https://api.haxe.org/haxe/io/UInt16Array.html>

UInt16ArrayData
================
package [haxe.io](index)
import [haxe.io.UInt16Array](uint16array)
*Available on all platforms*
#### flash, lua, php, cs, java, cpp, neko, hl, python, macro
*alias for* `[haxe.io.ArrayBufferViewData](arraybufferviewdata "haxe.io.ArrayBufferViewData")`
#### js
*alias for* `[js.lib.Uint16Array](https://api.haxe.org/js/lib/Uint16Array.html "js.lib.Uint16Array - The Uint16Array typed array represents an array of 16-bit unsigned integers in the platform byte order.")`
<https://api.haxe.org/haxe/io/UInt16ArrayData.html>

UInt32Array([UInt32ArrayData](uint32arraydata "haxe.io.UInt32ArrayData"))
==========================================================================
package [haxe.io](index)
*Available on all platforms*
Static variables
----------------
### `static inline read only[BYTES_PER_ELEMENT](#BYTES_PER_ELEMENT):[Int](../../int "Int - The standard Int type.") = 4`
Static methods
--------------
### `static[fromArray](#fromArray)(a:[Array](../../array "Array")<[UInt](../../uint "UInt - The unsigned Int type is only defined for Flash and C#.")>, pos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[UInt32Array](uint32array "haxe.io.UInt32Array")`
### `static[fromBytes](#fromBytes)(bytes:[Bytes](bytes "haxe.io.Bytes"), bytePos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[UInt32Array](uint32array "haxe.io.UInt32Array")`
### `static[fromData](#fromData)(d:[UInt32ArrayData](uint32arraydata "haxe.io.UInt32ArrayData")):[UInt32Array](uint32array "haxe.io.UInt32Array")`
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
### `read only[view](#view):[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
Methods
-------
### `inline[get](#get)(index:[Int](../../int "Int - The standard Int type.")):[UInt](../../uint "UInt - The unsigned Int type is only defined for Flash and C#.")`
### `inline[getData](#getData)():[UInt32ArrayData](uint32arraydata "haxe.io.UInt32ArrayData")`
### `inline[get_view](#get_view)():[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
### `inline[set](#set)(index:[Int](../../int "Int - The standard Int type."), value:[UInt](../../uint "UInt - The unsigned Int type is only defined for Flash and C#.")):[UInt](../../uint "UInt - The unsigned Int type is only defined for Flash and C#.")`
### `inline[sub](#sub)(begin:[Int](../../int "Int - The standard Int type."), ?length:[Int](../../int "Int - The standard Int type.")):[UInt32Array](uint32array "haxe.io.UInt32Array")`
### `inline[subarray](#subarray)(?begin:[Int](../../int "Int - The standard Int type."), ?end:[Int](../../int "Int - The standard Int type.")):[UInt32Array](uint32array "haxe.io.UInt32Array")`
<https://api.haxe.org/haxe/io/UInt32Array.html>

UInt32ArrayData
================
package [haxe.io](index)
import [haxe.io.UInt32Array](uint32array)
*Available on all platforms*
#### flash, lua, php, cs, java, cpp, neko, hl, python, macro
*alias for* `[haxe.io.ArrayBufferViewData](arraybufferviewdata "haxe.io.ArrayBufferViewData")`
#### js
*alias for* `[js.lib.Uint32Array](https://api.haxe.org/js/lib/Uint32Array.html "js.lib.Uint32Array - The Uint32Array typed array represents an array of 32-bit unsigned integers in the platform byte order.")`
<https://api.haxe.org/haxe/io/UInt32ArrayData.html>

UInt8Array([UInt8ArrayData](uint8arraydata "haxe.io.UInt8ArrayData"))
======================================================================
package [haxe.io](index)
*Available on all platforms*
Static variables
----------------
### `static inline read only[BYTES_PER_ELEMENT](#BYTES_PER_ELEMENT):[Int](../../int "Int - The standard Int type.") = 1`
Static methods
--------------
### `static[fromArray](#fromArray)(a:[Array](../../array "Array")<[Int](../../int "Int - The standard Int type.")>, pos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[UInt8Array](uint8array "haxe.io.UInt8Array")`
### `static[fromBytes](#fromBytes)(bytes:[Bytes](bytes "haxe.io.Bytes"), bytePos:[Int](../../int "Int - The standard Int type.") = 0, ?length:[Int](../../int "Int - The standard Int type.")):[UInt8Array](uint8array "haxe.io.UInt8Array")`
### `static[fromData](#fromData)(d:[UInt8ArrayData](uint8arraydata "haxe.io.UInt8ArrayData")):[UInt8Array](uint8array "haxe.io.UInt8Array")`
Variables
---------
### `read only[length](#length):[Int](../../int "Int - The standard Int type.")`
### `read only[view](#view):[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
Methods
-------
### `inline[get](#get)(index:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
### `inline[getData](#getData)():[UInt8ArrayData](uint8arraydata "haxe.io.UInt8ArrayData")`
### `inline[get_view](#get_view)():[ArrayBufferView](arraybufferview "haxe.io.ArrayBufferView")`
### `inline[set](#set)(index:[Int](../../int "Int - The standard Int type."), value:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
### `inline[sub](#sub)(begin:[Int](../../int "Int - The standard Int type."), ?length:[Int](../../int "Int - The standard Int type.")):[UInt8Array](uint8array "haxe.io.UInt8Array")`
### `inline[subarray](#subarray)(?begin:[Int](../../int "Int - The standard Int type."), ?end:[Int](../../int "Int - The standard Int type.")):[UInt8Array](uint8array "haxe.io.UInt8Array")`
<https://api.haxe.org/haxe/io/UInt8Array.html>

UInt8ArrayData
===============
package [haxe.io](index)
import [haxe.io.UInt8Array](uint8array)
*Available on all platforms*
#### flash, lua, php, cs, java, cpp, neko, hl, python, macro
*alias for* `[haxe.io.ArrayBufferViewData](arraybufferviewdata "haxe.io.ArrayBufferViewData")`
#### js
*alias for* `[js.lib.Uint8Array](https://api.haxe.org/js/lib/Uint8Array.html "js.lib.Uint8Array - The Uint8Array typed array represents an array of 8-bit unsigned integers.")`
<https://api.haxe.org/haxe/io/UInt8ArrayData.html>

DynamicAccessIterator<T>
=========================
package [haxe.iterators](index)
*Available on all platforms*
This iterator can be used to iterate over the values of `[haxe.DynamicAccess](../dynamicaccess#DynamicAccess)`.
Constructor
-----------
### `inline[new](#new)(access:[DynamicAccess](../dynamicaccess "haxe.DynamicAccess - DynamicAccess is an abstract type for working with anonymous structures that are intended to hold collections of objects by the string key.")<T>)`
Methods
-------
### `inline[hasNext](#hasNext)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Iterator.hasNext](../../iterator#hasNext)`.
### `inline[next](#next)():T`
See `[Iterator.next](../../iterator#next)`.
<https://api.haxe.org/haxe/iterators/DynamicAccessIterator.html>

DynamicAccessKeyValueIterator<T>
=================================
package [haxe.iterators](index)
*Available on all platforms*
This Key/Value iterator can be used to iterate over `[haxe.DynamicAccess](../dynamicaccess#DynamicAccess)`.
Constructor
-----------
### `inline[new](#new)(access:[DynamicAccess](../dynamicaccess "haxe.DynamicAccess - DynamicAccess is an abstract type for working with anonymous structures that are intended to hold collections of objects by the string key.")<T>)`
Methods
-------
### `inline[hasNext](#hasNext)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Iterator.hasNext](../../iterator#hasNext)`.
### `inline[next](#next)():{value:T, key:[String](../../string "String - The basic String class.")}`
See `[Iterator.next](../../iterator#next)`.
<https://api.haxe.org/haxe/iterators/DynamicAccessKeyValueIterator.html>

HashMapKeyValueIterator<K, V>
==============================
package [haxe.iterators](index)
*Available on all platforms*
Constructor
-----------
### `inline[new](#new)(map:[HashMap](../ds/hashmap "haxe.ds.HashMap - HashMap allows mapping of hashable objects to arbitrary values.")<K, V>)`
Methods
-------
### `inline[hasNext](#hasNext)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Iterator.hasNext](../../iterator#hasNext)`.
### `inline[next](#next)():{value:V, key:K}`
See `[Iterator.next](../../iterator#next)`.
<https://api.haxe.org/haxe/iterators/HashMapKeyValueIterator.html>

MapKeyValueIterator<K, V>
==========================
package [haxe.iterators](index)
*Available on all platforms*
This Key/Value iterator can be used to iterate across maps.
Constructor
-----------
### `inline[new](#new)(map:[IMap](../imap "haxe.IMap")<K, V>)`
Methods
-------
### `inline[hasNext](#hasNext)():[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
See `[Iterator.hasNext](../../iterator#hasNext)`.
### `inline[next](#next)():{value:V, key:K}`
See `[Iterator.next](../../iterator#next)`.
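In everyday code this iterator is obtained implicitly through `key => value` iteration, but it can also be constructed directly for any `haxe.IMap`. A minimal sketch (class name `Main` is illustrative):

```haxe
import haxe.iterators.MapKeyValueIterator;

class Main {
    static function main() {
        var scores = ["alice" => 3, "bob" => 5];
        // Implicit form, the usual way to iterate key/value pairs:
        for (name => score in scores) trace('$name: $score');
        // Explicit construction over the same map:
        var it = new MapKeyValueIterator(scores);
        while (it.hasNext()) {
            var pair = it.next();
            trace('${pair.key} => ${pair.value}');
        }
    }
}
```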
<https://api.haxe.org/haxe/iterators/MapKeyValueIterator.html>

AbstractType
=============
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents an abstract type.
Fields
------
### `[unops](#unops):[Array](../../array "Array")<{postFix:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."), op:[Unop](unop "haxe.macro.Unop - A unary operator."), field:[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")}>`
The defined unary operators of the abstract.
### `[type](#type):[Type](type "haxe.macro.Type - Represents a type.")`
The underlying type of the abstract.
### `[to](#to):[Array](../../array "Array")<{t:[Type](type "haxe.macro.Type - Represents a type."), field:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>}>`
The available implicit to-casts of the abstract. @see <https://haxe.org/manual/types-abstract-implicit-casts.html>
### `[resolveWrite](#resolveWrite):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>`
The method used for resolving unknown field access, if available.
### `[resolve](#resolve):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>`
The method used for resolving unknown field access, if available.
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the type.
### `[params](#params):[Array](../../array "Array")<[TypeParameter](typeparameter "haxe.macro.TypeParameter - Represents the declaration of type parameters.")>`
The type parameters of the type.
### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
The package of the type.
### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the type.
### `[module](#module):[String](../../string "String - The basic String class.")`
The module name of the type, which might be different.
### `[meta](#meta):[MetaAccess](metaaccess "haxe.macro.MetaAccess - MetaAccess is a wrapper for the Metadata array.")`
The metadata of the type.
### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is private.
### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is extern.
### `[impl](#impl):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>>`
The implementation class of the abstract, if available.
### `[from](#from):[Array](../../array "Array")<{t:[Type](type "haxe.macro.Type - Represents a type."), field:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>}>`
The available implicit from-casts of the abstract. @see <https://haxe.org/manual/types-abstract-implicit-casts.html>
### `[exclude](#exclude)():[Void](../../void "Void - The standard Void type.")`
Allows excluding the type from compilation.
### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The associated documentation of the class field.
### `[binops](#binops):[Array](../../array "Array")<{op:[Binop](binop "haxe.macro.Binop - A binary operator."), field:[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")}>`
The defined binary operators of the abstract.
### `[array](#array):[Array](../../array "Array")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>`
The defined array-access fields of the abstract.
<https://api.haxe.org/haxe/macro/AbstractType.html>

Access
=======
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents an access modifier.
See also:
* <https://haxe.org/manual/class-field-access-modifier.html>

Values
------
### `APublic`
Public access modifier, grants access from anywhere.
See also:
* <https://haxe.org/manual/class-field-visibility.html>

### `APrivate`
Private access modifier, grants access to class and its sub-classes only.
See also:
* <https://haxe.org/manual/class-field-visibility.html>

### `AStatic`
Static access modifier.
### `AOverride`
Override access modifier.
See also:
* <https://haxe.org/manual/class-field-override.html>

### `ADynamic`
Dynamic (re-)bindable access modifier.
See also:
* <https://haxe.org/manual/class-field-dynamic.html>

### `AInline`
Inline access modifier. Allows expressions to be directly inserted in place of calls to them.
See also:
* <https://haxe.org/manual/class-field-inline.html>

### `AMacro`
Macro access modifier. Allows expression macro functions. These are normal functions which are executed as soon as they are typed.
### `AFinal`
Final access modifier. For functions, they can not be overridden. For variables, it means they can be assigned to only once.
### `AExtern`
Extern access modifier.
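These values appear in the `access` array of a `Field` when constructing or inspecting fields in a build macro. A minimal sketch, assuming the macro is applied to a class via `@:build(BuildHelper.addCounter())` (the names `BuildHelper`, `addCounter`, and `counter` are illustrative):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class BuildHelper {
    // Build macro that appends a `public static var counter:Int = 0`
    // field to the class being built.
    public static macro function addCounter():Array<Field> {
        var fields = Context.getBuildFields();
        fields.push({
            name: "counter",
            access: [APublic, AStatic], // Access values from this enum
            kind: FVar(macro :Int, macro 0),
            pos: Context.currentPos()
        });
        return fields;
    }
}
```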
<https://api.haxe.org/haxe/macro/Access.html>

AnonStatus
===========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents the kind of the anonymous structure type.
Values
------
### `AClosed`
A closed structure is considered complete. That is, no further fields can be added to it.
### `AOpened`
An open structure allows having additional fields added to it, which is used during type inference. It is closed upon unification.
### `AConst`
A const structure is one that appears directly in syntax. It cannot be assigned to a smaller structure type (that is, it does not allow structural sub-typing).
### `AExtend(tl:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>>)`
Represents a structure which extends one or multiple structures defined in `tl`.
See also:
* <https://haxe.org/manual/type-system-extensions.html>

### `AClassStatics(t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>)`
A structure that represents the static fields of a class.
### `AEnumStatics(t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[EnumType](enumtype "haxe.macro.EnumType - Represents an enum type.")>)`
A structure that represents the constructors of an enum.
### `AAbstractStatics(t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[AbstractType](abstracttype "haxe.macro.AbstractType - Represents an abstract type.")>)`
A structure that represents the static fields of an abstract.
<https://api.haxe.org/haxe/macro/AnonStatus.html>

AnonType
=========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents information for anonymous structure types.
Fields
------
### `[status](#status):[AnonStatus](anonstatus "haxe.macro.AnonStatus - Represents the kind of the anonymous structure type.")`
The status/kind of the structure.
### `[fields](#fields):[Array](../../array "Array")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>`
The class fields of the structure.
<https://api.haxe.org/haxe/macro/AnonType.html>

BaseType
=========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
The information that all types (`[ClassType](classtype#ClassType)`, `[EnumType](enumtype#EnumType)`, `[DefType](deftype#DefType)`, `[AbstractType](abstracttype#AbstractType)`) have in common.
Fields
------
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the type.
### `[params](#params):[Array](../../array "Array")<[TypeParameter](typeparameter "haxe.macro.TypeParameter - Represents the declaration of type parameters.")>`
The type parameters of the type.
### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
The package of the type.
### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the type.
### `[module](#module):[String](../../string "String - The basic String class.")`
The module name of the type, which might be different.
### `[meta](#meta):[MetaAccess](metaaccess "haxe.macro.MetaAccess - MetaAccess is a wrapper for the Metadata array.")`
The metadata of the type.
### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is private.
### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is extern.
### `[exclude](#exclude)():[Void](../../void "Void - The standard Void type.")`
Allows excluding the type from compilation.
### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The associated documentation of the class field.
<https://api.haxe.org/haxe/macro/BaseType.html>

ClassType
==========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents a class type.
Fields
------
### `[superClass](#superClass):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>, params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>}>`
The parent class and its type parameters, if available.
### `[statics](#statics):[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[Array](../../array "Array")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>>`
The static fields of the class.
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the type.
### `[params](#params):[Array](../../array "Array")<[TypeParameter](typeparameter "haxe.macro.TypeParameter - Represents the declaration of type parameters.")>`
The type parameters of the type.
### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
The package of the type.
### `[overrides](#overrides):[Array](../../array "Array")<[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>>`
The list of fields that have override status.
### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the type.
### `[module](#module):[String](../../string "String - The basic String class.")`
The module name of the type, which might be different.
### `[meta](#meta):[MetaAccess](metaaccess "haxe.macro.MetaAccess - MetaAccess is a wrapper for the Metadata array.")`
The metadata of the type.
### `[kind](#kind):[ClassKind](classkind "haxe.macro.ClassKind - Represents the kind of a class.")`
The kind of the class.
### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is private.
### `[isInterface](#isInterface):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
If true the type is an interface, otherwise it is a class.
### `[isFinal](#isFinal):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
If true the class is final and cannot be extended.
### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is extern.
### `[interfaces](#interfaces):[Array](../../array "Array")<{t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>, params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>}>`
The implemented interfaces and their type parameters.
### `[init](#init):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>`
The `__init__` expression of the class, if available.
### `[fields](#fields):[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[Array](../../array "Array")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>>`
The member fields of the class.
### `[exclude](#exclude)():[Void](../../void "Void - The standard Void type.")`
Allows excluding the type from compilation.
### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The associated documentation of the class field.
### `[constructor](#constructor):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>>`
The constructor of the class, if available.
<https://api.haxe.org/haxe/macro/ClassType.html>

EnumType
=========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents an enum type.
Fields
------
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the type.
### `[params](#params):[Array](../../array "Array")<[TypeParameter](typeparameter "haxe.macro.TypeParameter - Represents the declaration of type parameters.")>`
The type parameters of the type.
### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
The package of the type.
### `[names](#names):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
An ordered list of enum constructor names.
### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the type.
### `[module](#module):[String](../../string "String - The basic String class.")`
The module name of the type, which might be different.
### `[meta](#meta):[MetaAccess](metaaccess "haxe.macro.MetaAccess - MetaAccess is a wrapper for the Metadata array.")`
The metadata of the type.
### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is private.
### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is extern.
### `[exclude](#exclude)():[Void](../../void "Void - The standard Void type.")`
Allows excluding the type from compilation.
### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The associated documentation of the class field.
### `[constructs](#constructs):[Map](../../map "Map")<[String](../../string "String - The basic String class."), [EnumField](enumfield "haxe.macro.EnumField - Represents an enum constructor.")>`
The available enum constructors.
<https://api.haxe.org/haxe/macro/EnumType.html>

DefType
========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents a typedef.
Fields
------
### `[type](#type):[Type](type "haxe.macro.Type - Represents a type.")`
The target type of the typedef.
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the type.
### `[params](#params):[Array](../../array "Array")<[TypeParameter](typeparameter "haxe.macro.TypeParameter - Represents the declaration of type parameters.")>`
The type parameters of the type.
### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
The package of the type.
### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the type.
### `[module](#module):[String](../../string "String - The basic String class.")`
The module name of the type, which might be different.
### `[meta](#meta):[MetaAccess](metaaccess "haxe.macro.MetaAccess - MetaAccess is a wrapper for the Metadata array.")`
The metadata of the type.
### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is private.
### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is extern.
### `[exclude](#exclude)():[Void](../../void "Void - The standard Void type.")`
Allows excluding the type from compilation.
### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The associated documentation of the class field.
<https://api.haxe.org/haxe/macro/DefType.html>

ClassField
===========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents a class field.
Fields
------
### `[type](#type):[Type](type "haxe.macro.Type - Represents a type.")`
The type of the class field.
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the class field.
### `[params](#params):[Array](../../array "Array")<[TypeParameter](typeparameter "haxe.macro.TypeParameter - Represents the declaration of type parameters.")>`
The type parameters of the class field.
### `[overloads](#overloads):[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[Array](../../array "Array")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>>`
The overload fields of the class field.
### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the class field.
### `[meta](#meta):[MetaAccess](metaaccess "haxe.macro.MetaAccess - MetaAccess is a wrapper for the Metadata array.")`
The metadata of the class field.
### `[kind](#kind):[FieldKind](fieldkind "haxe.macro.FieldKind - Represents a field kind.")`
The class field kind.
### `[isPublic](#isPublic):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the class field is public.
### `[isFinal](#isFinal):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the class field is final.
### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the class field is extern.
### `[expr](#expr)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>`
Returns the typed expression of the class field.
### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The associated documentation of the class field.
<https://api.haxe.org/haxe/macro/ClassField.html>

Binop
======
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
A binary operator.
See also:
* <https://haxe.org/manual/types-numeric-operators.html>

Values
------
### `OpAdd`
### `+`
### `OpMult`
### `*`
### `OpDiv`
### `/`
### `OpSub`
### `-`
### `OpAssign`
### `=`
### `OpEq`
### `==`
### `OpNotEq`
### `!=`
### `OpGt`
### `>`
### `OpGte`
### `>=`
### `OpLt`
### `<`
### `OpLte`
### `<=`
### `OpAnd`
### `&`
### `OpOr`
### `|`
### `OpXor`
### `^`
### `OpBoolAnd`
### `&&`
### `OpBoolOr`
### `||`
### `OpShl`
### `<<`
### `OpShr`
### `>>`
### `OpUShr`
### `>>>`
### `OpMod`
### `%`
### `OpAssignOp(op:[Binop](binop "haxe.macro.Binop - A binary operator."))`
`+=` `-=` `/=` `*=` `<<=` `>>=` `>>>=` `|=` `&=` `^=` `%=`
### `OpInterval`
### `...`
### `OpArrow`
### `=>`
### `OpIn`
### `in`
<https://api.haxe.org/haxe/macro/Binop.html>

Case
=====
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a switch case.
See also:
* <https://haxe.org/manual/expression-switch.html>

Fields
------
### `[values](#values):[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>`
The value expressions of the case.
### `optional [guard](#guard):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>`
The optional guard expression of the case, if available.
### `[expr](#expr):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>`
The expression of the case, if available.
<https://api.haxe.org/haxe/macro/Case.html>

Catch
======
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a catch in the AST.
See also:
* <https://haxe.org/manual/expression-try-catch.html>

Fields
------
### `optional[type](#type):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>`
The type of the catch.
### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the catch variable.
### `[expr](#expr):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
The expression of the catch.
<https://api.haxe.org/haxe/macro/Catch.html>

ClassKind
==========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents the kind of a class.
Values
------
### `KNormal`
A normal class.
### `KTypeParameter(constraints:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>)`
A type parameter class with a set of constraints.
### `KExtension(cl:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>, params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>)`
A structurally extended class.
@deprecated
### `KExpr(expr:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
A special kind of class to encode expressions into type parameters.
### `KGeneric`
A `@:generic` base class.
### `KGenericInstance(cl:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>, params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>)`
A concrete `@:generic` instance, referencing the original class and the applied type parameters.
### `KMacroType`
A special class for `[haxe.macro.MacroType](macrotype#MacroType)`.
@deprecated
### `KAbstractImpl(a:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[AbstractType](abstracttype "haxe.macro.AbstractType - Represents an abstract type.")>)`
An implementation class of an abstract, i.e. where all its run-time code is.
### `KGenericBuild`
A `@:genericBuild` class.
<https://api.haxe.org/haxe/macro/ClassKind.html>

CompilationServer
==================
package [haxe.macro](index)
*Available on all platforms*
This class provides some methods which can be invoked from command line using `--macro server.field(args)`.
Static methods
--------------
### `static[invalidateFiles](#invalidateFiles)(filePaths:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>):[Void](../../void "Void - The standard Void type.")`
*Available on macro*
Invalidates all files given in `filePaths`, removing them from the cache.
### `static[setModuleCheckPolicy](#setModuleCheckPolicy)(pathFilters:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>, policy:[Array](../../array "Array")<[ModuleCheckPolicy](modulecheckpolicy "haxe.macro.ModuleCheckPolicy")>, recursive:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true, contextOptions:[ContextOptions](contextoptions "haxe.macro.ContextOptions") = NormalContext):[Void](../../void "Void - The standard Void type.")`
*Available on macro*
Sets the `[ModuleCheckPolicy](modulecheckpolicy#ModuleCheckPolicy)` of all files whose dot-path matches an element of `pathFilters`.
If `recursive` is true, a dot-path is considered matched if it starts with the path filter. This automatically applies to path filters of packages. Otherwise an exact match is required.
If an element in `pathFilters` is the empty String `""` it matches everything (if `recursive = [true](../../bool)`) or only top-level types (if `recursive = [false](../../bool)`).
The argument `contextOptions` determines which context (normal, macro or both) this affects.
If a call to this function is added to the compilation parameters, the compilation server should be restarted to ensure it takes effect.
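As a minimal sketch of how this could be wired up (the `InitMacro` class and the `myapp` package are invented for illustration; the `ModuleCheckPolicy` constructor name is taken from the standard library):

```haxe
// Hypothetical initialization macro, registered in the build parameters as:
//   --macro InitMacro.configureServer()
import haxe.macro.CompilationServer;
import haxe.macro.CompilationServer.ModuleCheckPolicy;

class InitMacro {
	public static function configureServer() {
		// Re-check file contents (not just timestamps) for the project's own
		// modules; with recursive = true, "myapp" matches all sub-packages too.
		CompilationServer.setModuleCheckPolicy(["myapp"], [CheckFileContentModification], true);
	}
}
```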
<https://api.haxe.org/haxe/macro/CompilationServer.html>

Compiler
=========
package [haxe.macro](index)
*Available on all platforms*
All these methods can be called from compiler configuration macros.
Static methods
--------------
### `static[addClassPath](#addClassPath)(path:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Add a class path where ".hx" source files or packages (sub-directories) can be found.
Usage of this function outside of initialization macros is deprecated and may cause compilation server issues.
### `static[addGlobalMetadata](#addGlobalMetadata)(pathFilter:[String](../../string "String - The basic String class."), meta:[String](../../string "String - The basic String class."), recursive:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true, toTypes:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true, toFields:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = false):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Adds metadata `meta` to all types (if `toTypes = [true](../../bool)`) or fields (if `toFields = [true](../../bool)`) whose dot-path matches `pathFilter`.
If `recursive` is true a dot-path is considered matched if it starts with `pathFilter`. This automatically applies to path filters of packages. Otherwise an exact match is required.
If `pathFilter` is the empty String `""` it matches everything (if `recursive = [true](../../bool)`) or only top-level types (if `recursive = [false](../../bool)`).
This operation has no effect if the type has already been loaded, e.g. through `[Context.getType](context#getType)`.
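For illustration only (the `myapp.api` package name is invented), a call from an initialization macro might look like:

```haxe
// Hypothetical init macro, registered as: --macro InitMacro.applyMeta()
import haxe.macro.Compiler;

class InitMacro {
	public static function applyMeta() {
		// recursive = true: matches myapp.api and all its sub-packages.
		// toTypes = true, toFields = false: the metadata lands on types only.
		Compiler.addGlobalMetadata("myapp.api", "@:keep", true, true, false);
	}
}
```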
### `static[addMetadata](#addMetadata)(meta:[String](../../string "String - The basic String class."), className:[String](../../string "String - The basic String class."), ?field:[String](../../string "String - The basic String class."), ?isStatic:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Add metadata to a (static) field or class by name. An error is thrown when `className` or `field` is invalid.
### `static[addNativeArg](#addNativeArg)(argument:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Adds an argument to be passed to the native compiler (e.g. `-javac-arg` for Java).
### `static[addNativeLib](#addNativeLib)(name:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Adds a native library depending on the platform (e.g. `-swf-lib` for Flash).
Usage of this function outside of initialization macros is deprecated and may cause compilation server issues.
### `static[allowPackage](#allowPackage)(v:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
### `static[define](#define)(flag:[String](../../string "String - The basic String class."), ?value:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Set a conditional compiler flag.
Usage of this function outside of initialization macros is deprecated and may cause compilation server issues.
### `static[exclude](#exclude)(pack:[String](../../string "String - The basic String class."), rec:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Exclude a specific class, enum, or all classes and enums in a package from being generated. Excluded types become `extern`.
Parameters:
| | |
| --- | --- |
| `rec` | If true, recursively excludes all sub-packages. |
### `static[excludeFile](#excludeFile)(fileName:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Exclude classes and enums listed in an extern file (one per line) from being generated.
### `static[flushDiskCache](#flushDiskCache)():[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Clears cached results of file lookups.
### `static[getDefine](#getDefine)(key:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
Returns the value of a compiler flag.
A conditional compilation flag can be set on the command line using `-D key=value`.
If the compiler flag is defined but no value is set, `[Compiler.getDefine](compiler#getDefine)` returns `"1"` (e.g. `-D key`).
If the compiler flag is not defined, `[Compiler.getDefine](compiler#getDefine)` returns `null`.
Note: This is a macro and cannot be called from within other macros. Refer to `[haxe.macro.Context.definedValue](context#definedValue)` to obtain defined values in macro context.
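As a sketch (the flag name `build_tag` is an invented example), reading a flag from regular, non-macro code might look like:

```haxe
// Compiled with e.g.:  haxe -D build_tag=nightly -main Main --interp
// Compiler.getDefine is itself a macro, so the value is baked in at compile time.
class Main {
	static function main() {
		var tag:Null<String> = haxe.macro.Compiler.getDefine("build_tag");
		// null when the flag is absent, "1" when defined without a value
		trace(tag == null ? "no build tag" : "build tag: " + tag);
	}
}
```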
See also:
* <https://haxe.org/manual/lf-condition-compilation.html>

### `static[getDisplayPos](#getDisplayPos)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{pos:[Int](../../int "Int - The standard Int type."), file:[String](../../string "String - The basic String class.")}>`
*Available on neko, macro*
### `static[getOutput](#getOutput)():[String](../../string "String - The basic String class.")`
*Available on neko, macro*
### `static[include](#include)(pack:[String](../../string "String - The basic String class."), rec:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true, ?ignore:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>, ?classPaths:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>, strict:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = false):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Includes all modules in package `pack` in the compilation.
In order to include single modules, their paths can be listed directly on command line: `haxe ... ModuleName pack.ModuleName`.
By default `[Compiler.include](compiler#include)` will search for modules in the directories defined with `-cp`. If you want to specify a different set of paths to search for modules, you can use the optional argument `classPath`.
Parameters:
| | |
| --- | --- |
| `rec` | If true, recursively adds all sub-packages. |
| `ignore` | Array of module names to ignore for inclusion. A trailing `*` (e.g. `module*`) performs wildcard matching. |
| `classPaths` | An alternative array of paths (directory names) to use to search for modules to include. Note that if you pass this argument, only the specified paths will be used for inclusion. |
| `strict` | If true and the given package was not found in any of the class paths, compilation fails with an error. |
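For illustration (the `myapp.plugins` package and the ignore pattern are invented), an initialization macro might call:

```haxe
// Hypothetical init macro, registered as: --macro InitMacro.includeAll()
import haxe.macro.Compiler;

class InitMacro {
	public static function includeAll() {
		// Compile myapp.plugins and its sub-packages, skipping any module
		// whose name starts with "Gen" via the trailing-* wildcard form.
		Compiler.include("myapp.plugins", true, ["Gen*"]);
	}
}
```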
### `static[includeFile](#includeFile)(file:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), position:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
*Available on js, lua*
Embed a JavaScript or Lua file at compile time (can be called by `--macro` or within an `__init__` method).
### `static[includeFile](#includeFile)(file:[String](../../string "String - The basic String class."), position:[IncludePosition](includeposition "haxe.macro.IncludePosition") = Top):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
*Available on macro*
Embed a JavaScript or Lua file at compile time (can be called by `--macro` or within an `__init__` method).
### `static[keep](#keep)(?path:[String](../../string "String - The basic String class."), ?paths:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>, recursive:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Marks types or packages to be kept by DCE.
This also extends to the sub-types of resolved modules.
In order to include module sub-types directly, their full dot path including the containing module has to be used (e.g. `msignal.Signal.Signal0`).
This operation has no effect if the type has already been loaded, e.g. through `[Context.getType](context#getType)`.
Parameters:
| | |
| --- | --- |
| `path` | A package, module or sub-type dot path to keep. |
| `paths` | An Array of package, module or sub-type dot paths to keep. |
| `recursive` | If true, recurses into sub-packages for package paths. |
### `static[nullSafety](#nullSafety)(path:[String](../../string "String - The basic String class."), mode:[NullSafetyMode](nullsafetymode "haxe.macro.NullSafetyMode") = Loose, recursive:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Enables null safety for a type or a package.
Parameters:
| | |
| --- | --- |
| `path` | A package, module or sub-type dot path to enable null safety for. |
| `recursive` | If true, recurses into sub-packages for package paths. |
### `static[patchTypes](#patchTypes)(file:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Load a type patch file that can modify the field types within declared classes and enums.
### `static[removeField](#removeField)(className:[String](../../string "String - The basic String class."), field:[String](../../string "String - The basic String class."), ?isStatic:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Removes a (static) field from a given class by name. An error is thrown when `className` or `field` is invalid.
### `static[setCustomJSGenerator](#setCustomJSGenerator)(callb:[JSGenApi](jsgenapi "haxe.macro.JSGenApi - This is the api that is passed to the custom JS generator.") ‑> [Void](../../void "Void - The standard Void type.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Change the default JS output by using a custom generator callback.
### `static[setFieldType](#setFieldType)(className:[String](../../string "String - The basic String class."), field:[String](../../string "String - The basic String class."), type:[String](../../string "String - The basic String class."), ?isStatic:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Set the type of a (static) field at a given class by name. An error is thrown when `className` or `field` is invalid.
### `static[setOutput](#setOutput)(fileOrDir:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
<https://api.haxe.org/haxe/macro/Compiler.html>

ComplexType
============
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a type syntax in the AST.
Values
------
### `TPath(p:[TypePath](typepath "haxe.macro.TypePath - Represents a type path in the AST."))`
Represents the type path.
### `TFunction(args:[Array](../../array "Array")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>, ret:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST."))`
Represents a function type.
See also:
* <https://haxe.org/manual/types-function.html>

### `TAnonymous(fields:[Array](../../array "Array")<[Field](field "haxe.macro.Field - Represents a field in the AST.")>)`
Represents an anonymous structure type.
See also:
* <https://haxe.org/manual/types-anonymous-structure.html>

### `TParent(t:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST."))`
Represents parentheses around a type, e.g. the `([Int](../../int) -> [Void](../../void))` part in `([Int](../../int) -> [Void](../../void)) -> [String](../../string)`.
### `TExtend(p:[Array](../../array "Array")<[TypePath](typepath "haxe.macro.TypePath - Represents a type path in the AST.")>, fields:[Array](../../array "Array")<[Field](field "haxe.macro.Field - Represents a field in the AST.")>)`
Represents typedef extensions `> [Iterable](../../iterable)<T>`. The array `p` holds the type paths to the given types.
See also:
* <https://haxe.org/manual/type-system-extensions.html>

### `TOptional(t:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST."))`
Represents an optional type.
### `TNamed(n:[String](../../string "String - The basic String class."), t:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST."))`
Represents a type with a name.
### `TIntersection(tl:[Array](../../array "Array")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>)`
Represents an intersection type `T1 & T2 & ... & TN`.
<https://api.haxe.org/haxe/macro/ComplexType.html>

ComplexTypeTools
=================
package [haxe.macro](index)
*Available on all platforms*
This class provides some utility methods to work with AST-level types. It is best used through `using [haxe.macro.ComplexTypeTools](complextypetools#ComplexTypeTools)` syntax and then provides additional methods on `[haxe.macro.ComplexType](complextype#ComplexType)` instances.
Static methods
--------------
### `static[toString](#toString)(c:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")):[String](../../string "String - The basic String class.")`
Converts type `c` to a human-readable `[String](../../string)` representation.
The result is guaranteed to be valid Haxe code, but there may be differences from the original lexical syntax.
### `static[toType](#toType)(c:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Type](type "haxe.macro.Type - Represents a type.")>`
*Available on macro*
Returns a type corresponding to `c`.
If `c` is null, the result is null.
<https://api.haxe.org/haxe/macro/ComplexTypeTools.html>

Constant
=========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a constant.
See also:
* <https://haxe.org/manual/expression-constants.html>

Values
------
### `CInt(v:[String](../../string "String - The basic String class."))`
Represents an integer literal.
### `CFloat(f:[String](../../string "String - The basic String class."))`
Represents a float literal.
### `CString(s:[String](../../string "String - The basic String class."), kind:[StringLiteralKind](stringliteralkind "haxe.macro.StringLiteralKind"))`
Represents a string literal.
### `CIdent(s:[String](../../string "String - The basic String class."))`
Represents an identifier.
### `CRegexp(r:[String](../../string "String - The basic String class."), opt:[String](../../string "String - The basic String class."))`
Represents a regular expression literal.
### Example: `~/haxe/i`
* The first argument `haxe` is a string with regular expression pattern.
* The second argument `i` is a string with regular expression flags.
See also:
* <https://haxe.org/manual/std-regex.html>
<https://api.haxe.org/haxe/macro/Constant.html>

Context
========
package [haxe.macro](index)
*Available on all platforms*
Context provides an API for macro programming.
It contains common functions that interact with the macro interpreter to query or set information. Other API functions are available in the tools classes:
* `[haxe.macro.ComplexTypeTools](complextypetools#ComplexTypeTools)`
* `[haxe.macro.ExprTools](exprtools#ExprTools)`
* `[haxe.macro.TypeTools](typetools#TypeTools)`
Static methods
--------------
### `static[addResource](#addResource)(name:[String](../../string "String - The basic String class."), data:[Bytes](../io/bytes "haxe.io.Bytes")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Makes resource `data` available as `name`.
The resource is then available using the `[haxe.macro.Resource](../resource#Resource)` API.
If a previous resource was bound to `name`, it is overwritten.
Compilation server: when using the compilation server, the resource is bound to the Haxe module which calls the macro, so it will be included again if that module is reused. If the resource concerns several modules, prefix its name with a `$` sign; this binds it to the macro module instead.
### `static[containsDisplayPosition](#containsDisplayPosition)(pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
*Available on neko, macro*
Check if current display position is within `pos`.
### `static[currentPos](#currentPos)():[Position](position "haxe.macro.Position - Represents a position in a file.")`
*Available on neko, macro*
Returns the position at which the macro was called.
### `static[defineModule](#defineModule)(modulePath:[String](../../string "String - The basic String class."), types:[Array](../../array "Array")<[TypeDefinition](typedefinition "haxe.macro.TypeDefinition - Represents a type definition.")>, ?imports:[Array](../../array "Array")<[ImportExpr](importexpr "haxe.macro.ImportExpr - Represents the import expression.")>, ?usings:[Array](../../array "Array")<[TypePath](typepath "haxe.macro.TypePath - Represents a type path in the AST.")>):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Defines a new module as `modulePath` with several `[TypeDefinition](typedefinition#TypeDefinition)` `types`. This is analogous to defining a .hx file.
The individual `types` can reference each other and any identifier respects the `imports` and `usings` as usual, except that imports are not allowed to have `.*` wildcards or `as s` shorthands.
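As a sketch, a module can be defined from an initialization macro using class reification (the module path `my.pack.Point` and the `Gen` class are hypothetical):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class Gen {
  // Hypothetical initialization macro, e.g. invoked via --macro Gen.run()
  public static function run():Void {
    // `macro class` reification yields a TypeDefinition.
    var def = macro class Point {
      public var x:Int;
      public var y:Int;
      public function new(x:Int, y:Int) { this.x = x; this.y = y; }
    };
    def.pack = ["my", "pack"];
    // Behaves as if a my/pack/Point.hx file declared this class.
    Context.defineModule("my.pack.Point", [def]);
  }
}
```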
### `static[defineType](#defineType)(t:[TypeDefinition](typedefinition "haxe.macro.TypeDefinition - Represents a type definition."), ?moduleDependency:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Defines a new type from `[TypeDefinition](typedefinition#TypeDefinition)` `t`.
If `moduleDependency` is given and is not `null`, it should contain a module path that will be used as a dependency for the newly defined module instead of the current module.
### `static[defined](#defined)(s:[String](../../string "String - The basic String class.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
*Available on neko, macro*
Tells if the conditional compilation flag `s` has been set.
Compiler flags are set using the `-D` command line parameter, or by calling `[haxe.macro.Compiler.define](compiler#define)`.
See also:
* <https://haxe.org/manual/lf-condition-compilation.html>

### `static[definedValue](#definedValue)(key:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
*Available on neko, macro*
Returns the value defined for the conditional compilation flag `key`.
If no value is defined for `key`, `null` is returned.
Compiler flags values are set using the `-D key=value` command line parameter, or by calling `[haxe.macro.Compiler.define](compiler#define)`.
The default value is `"1"`.
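A sketch combining `defined` and `definedValue` (the flag name `log_level` and the `Flags` class are hypothetical):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class Flags {
  // Compile with e.g. -D log_level=2; falls back to 0 when the flag is unset.
  public static macro function logLevel():Expr {
    var v = Context.defined("log_level") ? Context.definedValue("log_level") : "0";
    var n = Std.parseInt(v);
    // Embed the value as an Int literal at the call site.
    return macro $v{n};
  }
}
```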
See also:
* <https://haxe.org/manual/lf-condition-compilation.html>

### `static[error](#error)(msg:[String](../../string "String - The basic String class."), pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
*Available on neko, macro*
Displays a compilation error `msg` at the given `[Position](position#Position)` `pos` and aborts the current macro call.
### `static[fatalError](#fatalError)(msg:[String](../../string "String - The basic String class."), pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
*Available on neko, macro*
Displays a compilation error `msg` at the given `[Position](position#Position)` `pos` and aborts the compilation.
### `static[filterMessages](#filterMessages)(predicate:[Message](message "haxe.macro.Message") ‑> [Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Filters all current info/warning messages. Filtered out messages will not be displayed by the compiler.
### `static[follow](#follow)(t:[Type](type "haxe.macro.Type - Represents a type."), ?once:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")):[Type](type "haxe.macro.Type - Represents a type.")`
*Available on neko, macro*
Follows a type.
See `[haxe.macro.TypeTools.follow](typetools#follow)` for details.
### `static[followWithAbstracts](#followWithAbstracts)(t:[Type](type "haxe.macro.Type - Represents a type."), once:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = false):[Type](type "haxe.macro.Type - Represents a type.")`
*Available on neko, macro*
Follows a type, including abstracts' underlying implementation.
See `[haxe.macro.TypeTools.followWithAbstracts](typetools#followWithAbstracts)` for details.
### `static[getBuildFields](#getBuildFields)():[Array](../../array "Array")<[Field](field "haxe.macro.Field - Represents a field in the AST.")>`
*Available on neko, macro*
Returns an `[Array](../../array)` of fields of the class which is to be built.
This is only defined for `@:build/@:autoBuild` macros.
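The canonical use is a build macro; the sketch below (the `Build` class and `id` field are hypothetical) would be applied as `@:build(Build.addId())` on a class:

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class Build {
  public static macro function addId():Array<Field> {
    var fields = Context.getBuildFields();
    // Append a hypothetical `id` field to the class being built.
    fields.push({
      name: "id",
      access: [APublic],
      kind: FVar(macro : Int, macro 0),
      pos: Context.currentPos()
    });
    return fields;
  }
}
```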
### `static[getCallArguments](#getCallArguments)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>>`
*Available on neko, macro*
Returns the call arguments that lead to the invocation of the current `@:genericBuild` macro, if available.
Returns `null` if the current macro is not a `@:genericBuild` macro.
### `static[getClassPath](#getClassPath)():[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
*Available on neko, macro*
Returns an `[Array](../../array)` of current class paths in the order of their declaration.
Modifying the returned array has no effect on the compiler. Class paths can be added using `[haxe.macro.Compiler.addClassPath](compiler#addClassPath)`.
### `static[getDefines](#getDefines)():[Map](../../map "Map")<[String](../../string "String - The basic String class."), [String](../../string "String - The basic String class.")>`
*Available on neko, macro*
Returns a map of all conditional compilation flags that have been set.
Compiler flags are set using the `-D` command line parameter, or by calling `[haxe.macro.Compiler.define](compiler#define)`.
Modifying the returned map has no effect on the compiler.
See also:
* <https://haxe.org/manual/lf-condition-compilation.html>

### `static[getExpectedType](#getExpectedType)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Type](type "haxe.macro.Type - Represents a type.")>`
*Available on neko, macro*
Returns the type which is expected at the place the macro is called.
This affects usages such as `var x:[Int](../../int) = macroCall()`, where the expected type will be reported as `[Int](../../int)`.
Might return `null` if no specific type is expected or if the calling macro is not an expression-macro.
### `static[getLocalClass](#getLocalClass)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>>`
*Available on neko, macro*
Returns the current class in which the macro was called.
If no such class exists, `null` is returned.
### `static[getLocalImports](#getLocalImports)():[Array](../../array "Array")<[ImportExpr](importexpr "haxe.macro.ImportExpr - Represents the import expression.")>`
*Available on neko, macro*
Returns an `[Array](../../array)` of all imports in the context the macro was called.
Modifying the returned array has no effect on the compiler.
### `static[getLocalMethod](#getLocalMethod)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
*Available on neko, macro*
Returns the name of the method from which the macro was called.
If no such method exists, `null` is returned.
### `static[getLocalModule](#getLocalModule)():[String](../../string "String - The basic String class.")`
*Available on neko, macro*
Returns the current module path in/on which the macro was called.
### `static[getLocalTVars](#getLocalTVars)():[Map](../../map "Map")<[String](../../string "String - The basic String class."), [TVar](tvar "haxe.macro.TVar - Represents a variable in the typed AST.")>`
*Available on neko, macro*
Similar to `getLocalVars`, but returns elements of type `[TVar](tvar#TVar)` instead of `[Type](../../type)`.
### `static[getLocalType](#getLocalType)():[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Type](type "haxe.macro.Type - Represents a type.")>`
*Available on neko, macro*
Returns the current type in/on which the macro was called.
If no such type exists, `null` is returned.
### `static[getLocalUsing](#getLocalUsing)():[Array](../../array "Array")<[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>>`
*Available on neko, macro*
Returns an `[Array](../../array)` of classes which are available for `using` usage in the context the macro was called.
Modifying the returned array has no effect on the compiler.
### `static[getLocalVars](#getLocalVars)():[Map](../../map "Map")<[String](../../string "String - The basic String class."), [Type](type "haxe.macro.Type - Represents a type.")>`
**Deprecated:** "Use Context.getLocalTVars() instead"
*Available on neko, macro*
Returns a map of local variables accessible in the context the macro was called.
The keys of the returned map are the variable names, the values are their types.
Modifying the returned map has no effect on the compiler.
### `static[getMessages](#getMessages)():[Array](../../array "Array")<[Message](message "haxe.macro.Message")>`
*Available on neko, macro*
Gets a list of all current compilation info/warning messages.
### `static[getModule](#getModule)(name:[String](../../string "String - The basic String class.")):[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>`
*Available on neko, macro*
Resolves a module identified by `name` and returns an `[Array](../../array)` of all its contained types.
The resolution follows the usual class path rules where the last declared class path has priority.
If no module can be found, an exception of type `[String](../../string)` is thrown.
### `static[getPosInfos](#getPosInfos)(p:[Position](position "haxe.macro.Position - Represents a position in a file.")):{min:[Int](../../int "Int - The standard Int type."), max:[Int](../../int "Int - The standard Int type."), file:[String](../../string "String - The basic String class.")}`
*Available on neko, macro*
Returns the information stored in `[Position](position#Position)` `p`.
### `static[getResources](#getResources)():[Map](../../map "Map")<[String](../../string "String - The basic String class."), [Bytes](../io/bytes "haxe.io.Bytes")>`
*Available on neko, macro*
Returns a map of all registered resources for this compilation unit.
Modifying the returned map has no effect on the compilation, use `[haxe.macro.Context.addResource](context#addResource)` to add new resources to the compilation unit.
### `static[getType](#getType)(name:[String](../../string "String - The basic String class.")):[Type](type "haxe.macro.Type - Represents a type.")`
*Available on neko, macro*
Resolves a type identified by `name`.
The resolution follows the usual class path rules where the last declared class path has priority.
If no type can be found, an exception of type `[String](../../string)` is thrown.
### `static[getTypedExpr](#getTypedExpr)(t:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
*Available on neko, macro*
Returns a syntax-level expression corresponding to typed expression `t`.
This process may lose some information.
### `static[info](#info)(msg:[String](../../string "String - The basic String class."), pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Displays a compilation info `msg` at the given `[Position](position#Position)` `pos`.
### `static[makeExpr](#makeExpr)(v:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types."), pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
*Available on neko, macro*
Builds an expression from `v`.
This method generates AST nodes depending on the macro-runtime value of `v`. As such, only basic types and enums are supported and the behavior for other types is undefined.
The provided `[Position](position#Position)` `pos` is used for all generated inner AST nodes.
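A sketch of embedding a macro-time value as an AST literal (the `Embed` class and its data are hypothetical; note only basic types and enums are supported):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class Embed {
  public static macro function config():Expr {
    // A value known at macro time, turned into an object literal at the call site.
    var data = { host: "localhost", port: 8080 };
    return Context.makeExpr(data, Context.currentPos());
  }
}
```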
### `static[makePosition](#makePosition)(inf:{min:[Int](../../int "Int - The standard Int type."), max:[Int](../../int "Int - The standard Int type."), file:[String](../../string "String - The basic String class.")}):[Position](position "haxe.macro.Position - Represents a position in a file.")`
*Available on neko, macro*
Builds a `[Position](position#Position)` from `inf`.
### `static[onAfterGenerate](#onAfterGenerate)(callback:() ‑> [Void](../../void "Void - The standard Void type.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Adds a callback function `callback` which is invoked after the compiler generation phase.
Compilation has completed at this point and cannot be influenced anymore. However, contextual information is still available.
*Note*: the callback is still invoked when generation is disabled with `--no-output`.
### `static[onAfterTyping](#onAfterTyping)(callback:[Array](../../array "Array")<[ModuleType](moduletype "haxe.macro.ModuleType - Represents a module type.")> ‑> [Void](../../void "Void - The standard Void type.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Adds a callback function `callback` which is invoked after the compiler is done typing, but before optimization. The callback receives the types which have been typed.
It is possible to define new types in the callback, in which case it will be called again with the new types as argument.
### `static[onGenerate](#onGenerate)(callback:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")> ‑> [Void](../../void "Void - The standard Void type."), persistent:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Adds a callback function `callback` which is invoked after the compiler's typing phase, just before its generation phase.
The callback receives an `[Array](../../array)` containing all types which are about to be generated. Modifications are limited to metadata, it is mainly intended to obtain information.
By default, the callback is made before types are stored in the compilation server, if active. This means that any effect persists for the next compilation. If `persistent` is set to `[false](../../bool)`, changes to types made by the callback only affect the current compilation. If no compilation server is used, this flag has no effect.
*Note*: the callback is still invoked when generation is disabled with `--no-output`.
### `static[onMacroContextReused](#onMacroContextReused)(callb:() ‑> [Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")):[Void](../../void "Void - The standard Void type.")`
**Deprecated:**
*Available on neko, macro*
### `static[onTypeNotFound](#onTypeNotFound)(callback:[String](../../string "String - The basic String class.") ‑> [TypeDefinition](typedefinition "haxe.macro.TypeDefinition - Represents a type definition.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Adds a callback function `callback` which is invoked when a type name cannot be resolved.
The callback may return a type definition, which is then used for the expected type. If it returns `null`, the type is considered to still not exist.
### `static[parse](#parse)(expr:[String](../../string "String - The basic String class."), pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
*Available on neko, macro*
Parses `expr` as Haxe code, returning the corresponding AST.
String interpolation of single quote strings within `expr` is not supported.
The provided `[Position](position#Position)` `pos` is used for all generated inner AST nodes.
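A sketch of building an expression from a string (the `Parse` class is hypothetical):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class Parse {
  public static macro function twice(name:String):Expr {
    // Builds the AST for e.g. `width * 2` from a plain string;
    // all positions point at the macro call site.
    return Context.parse(name + " * 2", Context.currentPos());
  }
}
```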
### `static[parseInlineString](#parseInlineString)(expr:[String](../../string "String - The basic String class."), pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
*Available on neko, macro*
Similar to `parse`, but error positions are reported within the provided String `expr`.
### `static[registerModuleDependency](#registerModuleDependency)(modulePath:[String](../../string "String - The basic String class."), externFile:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Manually adds a dependency between module `modulePath` and an external file `externFile`.
This affects the compilation cache, causing the module to be typed if `externFile` has changed.
Has no effect if the compilation cache is not used.
### `static[registerModuleReuseCall](#registerModuleReuseCall)(modulePath:[String](../../string "String - The basic String class."), macroCall:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
**Deprecated:**
*Available on neko, macro*
### `static[resolvePath](#resolvePath)(file:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
*Available on neko, macro*
Resolves a file name `file` based on the current class paths.
The resolution follows the usual class path rules where the last declared class path has priority.
If a class path was declared relative, this method returns the relative file path. Otherwise it returns the absolute file path.
### `static[resolveType](#resolveType)(t:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST."), p:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Type](type "haxe.macro.Type - Represents a type.")`
*Available on neko, macro*
Resolves type `t` and returns the corresponding `[Type](../../type)`.
Resolving the type may result in a compiler error which can be caught using `try ... catch`. Resolution is performed based on the current context in which the macro is called.
### `static[signature](#signature)(v:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[String](../../string "String - The basic String class.")`
*Available on neko, macro*
Returns a hashed MD5 signature of value `v`.
### `static[storeExpr](#storeExpr)(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
*Available on neko, macro*
Types expression `e`, stores the resulting typed expression internally and returns a syntax-level expression that can be returned from a macro and will be replaced by the stored typed expression.
If `e` is `null` or invalid, an exception is thrown.
A call to `storeExpr(e)` is equivalent to `storeTypedExpr(typeExpr(e))` without the overhead of encoding and decoding between regular and macro runtime.
NOTE: the returned value references an internally stored typed expression that is reset between compilations, so care should be taken when storing the expression returned by this method in a static variable and using the compilation server.
### `static[storeTypedExpr](#storeTypedExpr)(t:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
*Available on neko, macro*
Stores typed expression `t` internally and returns a syntax-level expression that can be returned from a macro and will be replaced by the stored typed expression.
If `t` is `null` or invalid, an exception is thrown.
NOTE: the returned value references an internally stored typed expression that is reset between compilations, so care should be taken when storing the expression returned by this method in a static variable and using the compilation server.
### `static[timer](#timer)(id:[String](../../string "String - The basic String class.")):() ‑> [Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Creates a timer which will be printed in the compilation report if `--times` compilation argument is set.
Note that a timer may be omitted from the report if the amount of time measured is too small.
This method immediately starts a timer and returns a function to stop it:
```
var stopTimer = haxe.macro.Context.timer("my heavy task");
runTask();
stopTimer();
```
### `static[toComplexType](#toComplexType)(t:[Type](type "haxe.macro.Type - Represents a type.")):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>`
*Available on neko, macro*
Returns the `[ComplexType](complextype#ComplexType)` corresponding to the given `[Type](../../type)` `t`.
See `[haxe.macro.TypeTools.toComplexType](typetools#toComplexType)` for details.
### `static[typeExpr](#typeExpr)(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")):[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")`
*Available on neko, macro*
Types expression `e` and returns the corresponding `[TypedExpr](typedexpr#TypedExpr)`.
Typing the expression may result in a compiler error which can be caught using `try ... catch`.
### `static[typeof](#typeof)(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")):[Type](type "haxe.macro.Type - Represents a type.")`
*Available on neko, macro*
Types expression `e` and returns its type.
Typing the expression may result in a compiler error which can be caught using `try ... catch`.
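A sketch of a compile-time type inspector built on `typeof` (the `Inspect` class is hypothetical):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class Inspect {
  public static macro function showType(e:Expr):Expr {
    // Report the inferred type at the argument's position, then pass it through.
    Context.info(Std.string(Context.typeof(e)), e.pos);
    return e;
  }
}
```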
### `static[unify](#unify)(t1:[Type](type "haxe.macro.Type - Represents a type."), t2:[Type](type "haxe.macro.Type - Represents a type.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
*Available on neko, macro*
Tries to unify `t1` and `t2` and returns `[true](../../bool)` if successful.
### `static[warning](#warning)(msg:[String](../../string "String - The basic String class."), pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Void](../../void "Void - The standard Void type.")`
*Available on neko, macro*
Displays a compilation warning `msg` at the given `[Position](position#Position)` `pos`.
<https://api.haxe.org/haxe/macro/Context.html>

ContextOptions([Int](../../int "Int - The standard Int type."))
================================================================
package [haxe.macro](index)
import [haxe.macro.CompilationServer](compilationserver)
*Available on all platforms*
Variables
---------
### `inline read only [MacroContext](#MacroContext):[ContextOptions](contextoptions "haxe.macro.ContextOptions") = 1`
Affects only the macro context.
### `inline read only [NormalAndMacroContext](#NormalAndMacroContext):[ContextOptions](contextoptions "haxe.macro.ContextOptions") = 2`
Affects the normal and macro contexts.
### `inline read only [NormalContext](#NormalContext):[ContextOptions](contextoptions "haxe.macro.ContextOptions") = 0`
Affects only the normal context.
<https://api.haxe.org/haxe/macro/ContextOptions.html>

DisplayKind
============
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Values
------
### `DKCall`
### `DKDot`
### `DKStructure`
### `DKMarked`
### `DKPattern(outermost:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."))`
<https://api.haxe.org/haxe/macro/DisplayKind.html>

EnumField
==========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents an enum constructor.
Fields
------
### `[type](#type):[Type](type "haxe.macro.Type - Represents a type.")`
The type of the enum constructor.### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the enum constructor.### `[params](#params):[Array](../../array "Array")<[TypeParameter](typeparameter "haxe.macro.TypeParameter - Represents the declaration of type parameters.")>`
The type parameters of the enum constructor.### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the enum constructor.### `[meta](#meta):[MetaAccess](metaaccess "haxe.macro.MetaAccess - MetaAccess is a wrapper for the Metadata array.")`
The metadata of the enum constructor.### `[index](#index):[Int](../../int "Int - The standard Int type.")`
The index of the enum constructor, i.e. in which position it appears in the syntax.### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The associated documentation of the enum constructor.
<https://api.haxe.org/haxe/macro/EnumField.html>

Error
======
package [haxe.macro](index)
extends [Exception](../exception "haxe.Exception - Base class for exceptions.")
import [haxe.macro.Expr](expr)
*Available on all platforms*
This error can be used to handle or produce compilation errors in macros.
Constructor
-----------
### `[new](#new)(message:[String](../../string "String - The basic String class."), pos:[Position](position "haxe.macro.Position - Represents a position in a file."), ?previous:[Exception](../exception "haxe.Exception - Base class for exceptions."))`
Instantiates an error with given message and position.
Variables
---------
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the error.
<https://api.haxe.org/haxe/macro/Error.html>

ExampleJSGenerator
===================
package [haxe.macro](index)
*Available on macro*
Static methods
--------------
### `static[use](#use)():[Void](../../void "Void - The standard Void type.")`
Constructor
-----------
### `[new](#new)(api:[JSGenApi](jsgenapi "haxe.macro.JSGenApi - This is the api that is passed to the custom JS generator."))`
Methods
-------
### `[generate](#generate)():[Void](../../void "Void - The standard Void type.")`
<https://api.haxe.org/haxe/macro/ExampleJSGenerator.html>

Expr
=====
package [haxe.macro](index)
*Available on all platforms*
Represents a node in the AST.
See also:
* <https://haxe.org/manual/macro-reification-expression.html>

Fields
------
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the expression.### `[expr](#expr):[ExprDef](exprdef "haxe.macro.ExprDef - Represents the kind of a node in the AST.")`
The expression kind.
<https://api.haxe.org/haxe/macro/Expr.html>

ExprArrayTools
===============
package [haxe.macro](index)
import [haxe.macro.ExprTools](exprtools)
*Available on all platforms*
This class provides functions on expression arrays for convenience. For a detailed reference on each method, see the documentation of ExprTools.
Static methods
--------------
### `static[iter](#iter)(el:[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>, f:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.") ‑> [Void](../../void "Void - The standard Void type.")):[Void](../../void "Void - The standard Void type.")`
### `static[map](#map)(el:[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>, f:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.") ‑> [Expr](expr "haxe.macro.Expr - Represents a node in the AST.")):[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>`
<https://api.haxe.org/haxe/macro/ExprArrayTools.html>
ExprDef
========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents the kind of a node in the AST.
Values
------
### `EConst(c:[Constant](constant "haxe.macro.Constant - Represents a constant."))`
A constant.
### `EArray(e1:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), e2:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
Array access `e1[e2]`.
### `EBinop(op:[Binop](binop "haxe.macro.Binop - A binary operator."), e1:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), e2:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
Binary operator `e1 op e2`.
### `EField(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), field:[String](../../string "String - The basic String class."))`
Field access on `e.field`.
### `EParenthesis(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
Parentheses `(e)`.
### `EObjectDecl(fields:[Array](../../array "Array")<[ObjectField](objectfield "haxe.macro.ObjectField - Represents the field of an object declaration.")>)`
An object declaration.
### `EArrayDecl(values:[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>)`
An array declaration `[el]`.
### `ECall(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), params:[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>)`
A call `e(params)`.
### `ENew(t:[TypePath](typepath "haxe.macro.TypePath - Represents a type path in the AST."), params:[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>)`
A constructor call `new t(params)`.
### `EUnop(op:[Unop](unop "haxe.macro.Unop - A unary operator."), postFix:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."), e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
A unary operator `op` on `e`:
* `e++` (`op = OpIncrement, postFix = [true](../../bool)`)
* `e--` (`op = OpDecrement, postFix = [true](../../bool)`)
* `++e` (`op = OpIncrement, postFix = [false](../../bool)`)
* `--e` (`op = OpDecrement, postFix = [false](../../bool)`)
* `-e` (`op = OpNeg, postFix = [false](../../bool)`)
* `!e` (`op = OpNot, postFix = [false](../../bool)`)
* `~e` (`op = OpNegBits, postFix = [false](../../bool)`)
### `EVars(vars:[Array](../../array "Array")<[Var](var "haxe.macro.Var - Represents a variable in the AST.")>)`
Variable declarations.
### `EFunction(kind:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[FunctionKind](functionkind "haxe.macro.FunctionKind")>, f:[Function](function "haxe.macro.Function - Represents a function in the AST."))`
A function declaration.
### `EBlock(exprs:[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>)`
A block of expressions `{exprs}`.
### `EFor(it:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), expr:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
A `for` expression.
### `EIf(econd:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), eif:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), eelse:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>)`
An `if (econd) eif` or `if (econd) eif else eelse` expression.
### `EWhile(econd:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), normalWhile:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."))`
Represents a `while` expression.
When `normalWhile` is `[true](../../bool)` it is `while (...)`.
When `normalWhile` is `[false](../../bool)` it is `do {...} while (...)`.
### `ESwitch(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), cases:[Array](../../array "Array")<[Case](case "haxe.macro.Case - Represents a switch case.")>, edef:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>)`
Represents a `switch` expression with related cases and an optional `default` case if `edef != null`.
### `ETry(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), catches:[Array](../../array "Array")<[Catch](catch "haxe.macro.Catch - Represents a catch in the AST.")>)`
Represents a `try`-expression with related catches.
### `EReturn(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
A `return` or `return e` expression.
### `EBreak`
A `break` expression.
### `EContinue`
A `continue` expression.
### `EUntyped(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
An `untyped e` expression.
### `EThrow(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
A `throw e` expression.
### `ECast(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), t:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>)`
A `cast e` or `cast (e, m)` expression.
### `EDisplay(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), displayKind:[DisplayKind](displaykind "haxe.macro.DisplayKind"))`
Used internally to provide completion.
### `EDisplayNew(t:[TypePath](typepath "haxe.macro.TypePath - Represents a type path in the AST."))`
Used internally to provide completion.
### `ETernary(econd:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), eif:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), eelse:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
A `(econd) ? eif : eelse` expression.
### `ECheckType(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), t:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST."))`
A `(e:t)` expression.
### `EMeta(s:[MetadataEntry](metadataentry "haxe.macro.MetadataEntry - Represents a metadata entry in the AST."), e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
A `@m e` expression.
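As a quick orientation, here is a minimal sketch of matching a few of these constructors in macro code (the class and function names `ExprDescribe`/`describe` are invented for illustration):

```haxe
import haxe.macro.Expr;

class ExprDescribe {
	// Hypothetical helper: classify a handful of ExprDef constructors.
	public static function describe(e:Expr):String {
		return switch (e.expr) {
			case EConst(CInt(v)): 'integer literal $v';
			case EBinop(OpAdd, _, _): "addition";
			case EUnop(OpNot, false, _): "logical negation";
			case _: "something else";
		}
	}
}
```

Each `ExprDef` constructor carries the sub-expressions of the node, so recursive traversal follows the same `switch` pattern.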
<https://api.haxe.org/haxe/macro/ExprDef.html>
ExprOf<T>
==========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents an AST node identical to `[Expr](expr#Expr)`, but it allows constraining the type of accepted expressions.
See also:
* <https://haxe.org/manual/macro-ExprOf.html>
Alias
-----
*alias for* `[haxe.macro.Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
<https://api.haxe.org/haxe/macro/ExprOf.html>
ExprTools
==========
package [haxe.macro](index)
*Available on all platforms*
This class provides some utility methods to work with expressions. It is best used through `using haxe.macro.ExprTools` syntax, which then provides additional methods on `haxe.macro.Expr` instances.
While mainly intended to be used in macros, it works in non-macro code as well.
Static methods
--------------
### `static[getValue](#getValue)(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")):[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")`
Returns the value `e` represents.
Supported expressions are:
* `[Int](../../int)`, `[Float](../../float)` and `[String](../../string)` literals
* identifiers `[true](../../bool)`, `[false](../../bool)` and `null`
* structure declarations if all their fields are values
* array declarations if all their elements are values
* unary operators `-`, `!` and `~` if the operand is a value
* binary operators except `=>`, `...` and assignments
Parentheses, metadata and the `untyped` keyword are ignored.
If any non-value is encountered, an exception of type `[String](../../string)` is thrown.
If `e` is null, the result is unspecified.
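A minimal macro-context sketch of what `getValue` accepts (the macro name `config` and the traced field are invented for illustration):

```haxe
import haxe.macro.Expr;
using haxe.macro.ExprTools;

class GetValueExample {
	// Hypothetical macro: evaluates a literal configuration structure
	// at compile time via ExprTools.getValue.
	public static macro function config(e:Expr):Expr {
		var value:Dynamic = e.getValue(); // works for literals, e.g. {level: 3}
		trace(value.level);
		return macro null;
	}
}
```

Passing a non-value such as `config(Math.random())` would throw a `String` exception, as described above.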
### `static[iter](#iter)(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), f:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.") ‑> [Void](../../void "Void - The standard Void type.")):[Void](../../void "Void - The standard Void type.")`
Calls function `f` on each sub-expression of `e`.
If `e` has no sub-expressions, this operation has no effect.
Otherwise `f` is called once per sub-expression of `e`, with the sub-expression as argument. These calls are done in order of the sub-expression declarations.
This method does not call itself recursively. It should instead be used in a recursive function which handles the expression nodes of interest.
Usage example:
```
function findStrings(e:Expr) {
switch(e.expr) {
case EConst(CString(s)):
// handle s
case _:
ExprTools.iter(e, findStrings);
}
}
```
### `static[map](#map)(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."), f:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.") ‑> [Expr](expr "haxe.macro.Expr - Represents a node in the AST.")):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
Transforms the sub-expressions of `e` by calling `f` on each of them.
If `e` has no sub-expressions, this operation returns `e` unchanged.
Otherwise `f` is called once per sub-expression of `e`, with the sub-expression as argument. These calls are done in order of the sub-expression declarations.
This method does not call itself recursively. It should instead be used in a recursive function which handles the expression nodes of interest.
Usage example:
```
function capitalizeStrings(e:Expr) {
return switch(e.expr) {
case EConst(CString(s)):
{ expr: EConst(CString(s.toUpperCase())), pos: e.pos };
case _:
ExprTools.map(e, capitalizeStrings);
}
}
```
### `static[toString](#toString)(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")):[String](../../string "String - The basic String class.")`
Converts expression `e` to a human-readable String representation.
The result is guaranteed to be valid Haxe code, but there may be differences from the original lexical syntax.
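A hedged sketch of `toString` in a macro (the macro name `show` is invented; the exact printed spacing may differ from the original lexical syntax, as noted above):

```haxe
import haxe.macro.Expr;
using haxe.macro.ExprTools;

class ToStringExample {
	public static macro function show():Expr {
		// Reify an expression, then print its source representation.
		var e = macro 1 + foo(x);
		trace(e.toString()); // prints the reconstructed source of `1 + foo(x)`
		return macro null;
	}
}
```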
<https://api.haxe.org/haxe/macro/ExprTools.html>
FieldType
==========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents the field type in the AST.
Values
------
### `FVar(t:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>, e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
Represents a variable field type.
### `FFun(f:[Function](function "haxe.macro.Function - Represents a function in the AST."))`
Represents a function field type.
### `FProp(get:[String](../../string "String - The basic String class."), set:[String](../../string "String - The basic String class."), t:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST."), e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
Represents a property with getter and setter field type.
<https://api.haxe.org/haxe/macro/FieldType.html>
ImportMode
===========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents the import mode.
See also:
* <https://haxe.org/manual/type-system-import.html>
Values
------
### `INormal`
Represents a default import `import c`.
### `IAsName(alias:[String](../../string "String - The basic String class."))`
Represents the alias import `import c as alias`.
### `IAll`
Represents the wildcard import `import *`.
<https://api.haxe.org/haxe/macro/ImportMode.html>
Field
======
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a field in the AST.
Fields
------
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the field.
### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the field.
### `optional[meta](#meta):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Metadata](metadata "haxe.macro.Metadata - Represents metadata in the AST.")>`
The optional metadata of the field.
### `[kind](#kind):[FieldType](fieldtype "haxe.macro.FieldType - Represents the field type in the AST.")`
The kind of the field.
### `optional[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The documentation of the field, if available. If the field has no documentation, the value is `null`.
### `optional[access](#access):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[Access](access "haxe.macro.Access - Represents an access modifier.")>>`
The access modifiers of the field. By default fields have private access. @see https://haxe.org/manual/class-field-access-modifier.html
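The structure above can be filled in directly. A hedged sketch of a `Field` as a build macro might construct it (the field name `answer` and its value are invented):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class FieldExample {
	public static macro function build():Array<Field> {
		var fields = Context.getBuildFields();
		// Hypothetical added field: `public static var answer:Int = 42;`
		fields.push({
			name: "answer",
			pos: Context.currentPos(),
			access: [APublic, AStatic],
			kind: FVar(macro :Int, macro 42)
		});
		return fields;
	}
}
```

Applied via `@:build(FieldExample.build())`, this adds the sketched field to the annotated type.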
<https://api.haxe.org/haxe/macro/Field.html>
FieldAccess
============
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents the kind of field access in the typed AST.
Values
------
### `FInstance(c:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>, params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>, cf:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>)`
Access of field `cf` on a class instance `c` with type parameters `params`.
### `FStatic(c:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>, cf:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>)`
Static access of a field `cf` on a class `c`.
### `FAnon(cf:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>)`
Access of field `cf` on an anonymous structure.
### `FDynamic(s:[String](../../string "String - The basic String class."))`
Dynamic field access of a field named `s`.
### `FClosure(c:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>, c:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>}>, cf:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>)`
Closure field access of field `cf` on a class instance `c` with type parameters `params`.
### `FEnum(e:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[EnumType](enumtype "haxe.macro.EnumType - Represents an enum type.")>, ef:[EnumField](enumfield "haxe.macro.EnumField - Represents an enum constructor."))`
Field access to an enum constructor `ef` of enum `e`.
<https://api.haxe.org/haxe/macro/FieldAccess.html>
FieldKind
==========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents a field kind.
Values
------
### `FVar(read:[VarAccess](varaccess "haxe.macro.VarAccess - Represents the variable accessor."), write:[VarAccess](varaccess "haxe.macro.VarAccess - Represents the variable accessor."))`
A variable or property, depending on the `read` and `write` values.
### `FMethod(k:[MethodKind](methodkind "haxe.macro.MethodKind - Represents the method kind."))`
A method.
<https://api.haxe.org/haxe/macro/FieldKind.html>
Format
=======
package [haxe.macro](index)
*Available on all platforms*
The actual macro implemented for Std.format
Static methods
--------------
### `static[format](#format)(estr:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")):{pos:[Position](position "haxe.macro.Position - Represents a position in a file."), expr:[ExprDef](exprdef "haxe.macro.ExprDef - Represents the kind of a node in the AST.")}`
*Available on macro*
<https://api.haxe.org/haxe/macro/Format.html>
Function
=========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a function in the AST.
Fields
------
### `[ret](#ret):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>`
The return type-hint of the function, if available.
### `optional[params](#params):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[TypeParamDecl](typeparamdecl "haxe.macro.TypeParamDecl - Represents a type parameter declaration in the AST.")>>`
An optional list of function parameter type declarations.
### `[expr](#expr):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>`
The expression of the function body, if available.
### `[args](#args):[Array](../../array "Array")<[FunctionArg](functionarg "haxe.macro.FunctionArg - Represents a function argument in the AST.")>`
A list of function arguments.
<https://api.haxe.org/haxe/macro/Function.html>
FunctionArg
============
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a function argument in the AST.
Fields
------
### `optional[value](#value):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>`
The optional value of the function argument, if available.
### `[type](#type):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>`
The type-hint of the function argument, if available.
### `optional[opt](#opt):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`
Whether or not the function argument is optional.
### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the function argument.
### `optional[meta](#meta):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Metadata](metadata "haxe.macro.Metadata - Represents metadata in the AST.")>`
The metadata of the function argument.
<https://api.haxe.org/haxe/macro/FunctionArg.html>
FunctionKind
=============
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents the function kind in the AST.
Values
------
### `FAnonymous`
Anonymous function
### `FNamed(name:[String](../../string "String - The basic String class."), inlined:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."))`
Named function
### `FArrow`
Arrow function
<https://api.haxe.org/haxe/macro/FunctionKind.html>
ImportExpr
===========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents the import expression.
Fields
------
### `[path](#path):[Array](../../array "Array")<{pos:[Position](position "haxe.macro.Position - Represents a position in a file."), name:[String](../../string "String - The basic String class.")}>`
The path to the import expression.
### `[mode](#mode):[ImportMode](importmode "haxe.macro.ImportMode - Represents the import mode.")`
The mode of the import expression.
<https://api.haxe.org/haxe/macro/ImportExpr.html>
IncludePosition([String](../../string "String - The basic String class."))
===========================================================================
package [haxe.macro](index)
from [String](../../string "String - The basic String class.") to [String](../../string "String - The basic String class.")
import [haxe.macro.Compiler](compiler)
*Available on all platforms*
Variables
---------
### `inline read only [Closure](#Closure):[IncludePosition](includeposition "haxe.macro.IncludePosition") = "closure"`
Prepend the file content to the body of the top-level closure.
Since the closure is in strict-mode, there may be run-time error if the input is not strict-mode-compatible.
### `inline read only [Inline](#Inline):[IncludePosition](includeposition "haxe.macro.IncludePosition") = "inline"`
Directly inject the file content at the call site.
### `inline read only [Top](#Top):[IncludePosition](includeposition "haxe.macro.IncludePosition") = "top"`
Prepend the file content to the output file.
<https://api.haxe.org/haxe/macro/IncludePosition.html>
JSGenApi
=========
package [haxe.macro](index)
*Available on all platforms*
This is the api that is passed to the custom JS generator.
Fields
------
### `[types](#types):[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>`
all the types that were compiled by Haxe
### `[setTypeAccessor](#setTypeAccessor)(callb:[Type](type "haxe.macro.Type - Represents a type.") ‑> [String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
define the JS code that gets generated when a class or enum is accessed in a typed expression
### `[setCurrentClass](#setCurrentClass)(c:[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")):[Void](../../void "Void - The standard Void type.")`
select the current class
### `[quoteString](#quoteString)(s:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
quote and escape the given string constant
### `[outputFile](#outputFile):[String](../../string "String - The basic String class.")`
the file in which the JS code can be generated
### `[main](#main):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>`
the main call expression, if a -main class is defined
### `[isKeyword](#isKeyword)(ident:[String](../../string "String - The basic String class.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
tells if the given identifier is a JS keyword
### `[hasFeature](#hasFeature)(f:[String](../../string "String - The basic String class.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
check if a feature is used
### `[generateValue](#generateValue)(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")):[String](../../string "String - The basic String class.")`
generate the JS code for a given typed expression-value
### `[generateStatement](#generateStatement)(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")):[String](../../string "String - The basic String class.")`
generate the JS code for any given typed expression
### `[buildMetaData](#buildMetaData)(t:[BaseType](basetype "haxe.macro.BaseType - The information that all types (ClassType, EnumType, DefType, AbstractType) have in common.")):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>`
create the metadata expression for the given type
### `[addFeature](#addFeature)(f:[String](../../string "String - The basic String class.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
add a feature
<https://api.haxe.org/haxe/macro/JSGenApi.html>
MacroStringTools
=================
package [haxe.macro](index)
*Available on all platforms*
This class provides some utility methods to work with strings in macro context.
Static methods
--------------
### `static[formatString](#formatString)(s:[String](../../string "String - The basic String class."), pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):Unknown`
*Available on macro*
Formats `[String](../../string)` `s` using the usual interpolation rules.
The returned expression is a concatenation of string parts and escaped elements.
### `static[isFormatExpr](#isFormatExpr)(e:[ExprOf](exprof "haxe.macro.ExprOf - Represents a AST node identical to Expr, but it allows constraining the type of accepted expressions.")<[String](../../string "String - The basic String class.")>):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
*Available on macro*
Tells if `e` is a format string, i.e. uses single quotes `'` as delimiters.
This only works if `e` has a position which the compiler can find. While this is true for any expressions appearing in real Haxe code (i.e. some .hx file), it might not work for expressions generated by macros.
This operation depends on the position of `e`.
### `static[toComplex](#toComplex)(path:[String](../../string "String - The basic String class.")):[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")`
### `static[toDotPath](#toDotPath)(pack:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>, name:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
Converts a path given by package `pack` and name `name` to a `[String](../../string)` separated by dots.
If `pack` has no elements, the result is `name`.
If `pack` is null, the result is unspecified.
Otherwise the elements of `pack` are joined with a separating dot, with an appended dot separating the result from `name`.
### `static[toFieldExpr](#toFieldExpr)(sl:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>, ?pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`
Converts an array of Strings `sl` to a field expression.
If `sl` has no elements, the result is null.
If `sl` has one element, the result is `EConst(CIdent(sl[0]))`.
Otherwise the result is a chain of `EField` nodes.
If `sl` is null, the result is unspecified.
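A short macro-context sketch of the two path helpers above (the macro name `demo` is invented for illustration):

```haxe
import haxe.macro.MacroStringTools;

class PathExample {
	public static macro function demo():haxe.macro.Expr {
		// Joins a package and a name with dots: "haxe.macro.Expr"
		trace(MacroStringTools.toDotPath(["haxe", "macro"], "Expr"));
		// Builds a chain of EField nodes from the parts,
		// equivalent to writing `haxe.macro.Context` in source.
		return MacroStringTools.toFieldExpr(["haxe", "macro", "Context"]);
	}
}
```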
<https://api.haxe.org/haxe/macro/MacroStringTools.html>
MacroType<Const>
=================
package [haxe.macro](index)
*Available on all platforms*
This type is meant to be used to generate custom types using a macro, for instance by doing `MacroType<[my.Class.myMacro(55)]>`.
<https://api.haxe.org/haxe/macro/MacroType.html>
Message
========
package [haxe.macro](index)
import [haxe.macro.Context](context)
*Available on all platforms*
Values
------
### `Info(msg:[String](../../string "String - The basic String class."), pos:[Position](position "haxe.macro.Position - Represents a position in a file."))`
### `Warning(msg:[String](../../string "String - The basic String class."), pos:[Position](position "haxe.macro.Position - Represents a position in a file."))`
<https://api.haxe.org/haxe/macro/Message.html>
MetaAccess
===========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
MetaAccess is a wrapper for the `[Metadata](metadata#Metadata)` array. It can be used to add metadata to and remove metadata from its origin.
Fields
------
### `[remove](#remove)(name:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
Removes all `name` metadata entries from the origin of `this` MetaAccess. This method might clear several metadata entries of the same name. If a `Metadata` array is obtained through a call to `get`, a subsequent call to `remove` has no effect on that array. If `name` is null, compilation fails with an error.
### `[has](#has)(name:[String](../../string "String - The basic String class.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if the origin of `this` MetaAccess has a `name` metadata entry. If `name` is null, compilation fails with an error.
### `[get](#get)():[Metadata](metadata "haxe.macro.Metadata - Represents metadata in the AST.")`
Returns the wrapped `Metadata` array. Modifying this array has no effect on the origin of `this` MetaAccess. The `add` and `remove` methods can be used for that.
### `[extract](#extract)(name:[String](../../string "String - The basic String class.")):[Array](../../array "Array")<[MetadataEntry](metadataentry "haxe.macro.MetadataEntry - Represents a metadata entry in the AST.")>`
Extracts metadata entries by the given `name`. If there is no metadata with such a name, an empty array `[]` is returned. If `name` is null, compilation fails with an error.
### `[add](#add)(name:[String](../../string "String - The basic String class."), params:[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>, pos:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Void](../../void "Void - The standard Void type.")`
Adds the metadata specified by `name`, `params` and `pos` to the origin of `this` MetaAccess. Metadata names are not unique during compilation, so this method never overwrites a previous metadata entry. If a `Metadata` array is obtained through a call to `get`, a subsequent call to `add` has no effect on that array. If any argument is null, compilation fails with an error.
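A hedged build-macro sketch of the methods above (the class name `MetaExample` is invented; the local class is assumed to be non-null for brevity):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class MetaExample {
	public static macro function build():Array<Field> {
		// The MetaAccess of the class currently being built.
		var meta = Context.getLocalClass().get().meta;
		meta.add(":keep", [], Context.currentPos());
		trace(meta.has(":keep"));
		// Returning null leaves the build fields unchanged.
		return null;
	}
}
```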
© 2005–2020 Haxe Foundation
Licensed under a MIT license.
<https://api.haxe.org/haxe/macro/MetaAccess.html>

MetadataEntry
=============
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a metadata entry in the AST.
Fields
------
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the metadata entry.

### `optional [params](#params):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>>`

The optional parameters of the metadata entry.

### `[name](#name):[String](../../string "String - The basic String class.")`

The name of the metadata entry.
<https://api.haxe.org/haxe/macro/MetadataEntry.html>

MethodKind
==========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents the method kind.
Values
------
### `MethNormal`
A normal method.
### `MethInline`
An inline method.
See also:
* <https://haxe.org/manual/class-field-inline.html>

### `MethDynamic`
A dynamic, rebindable method.
See also:
* <https://haxe.org/manual/class-field-dynamic.html>

### `MethMacro`
A macro method.
<https://api.haxe.org/haxe/macro/MethodKind.html>

ModuleCheckPolicy([Int](../../int "Int - The standard Int type."))
==================================================================
package [haxe.macro](index)
import [haxe.macro.CompilationServer](compilationserver)
*Available on all platforms*
Variables
---------
### `inline read only [CheckFileContentModification](#CheckFileContentModification):[ModuleCheckPolicy](modulecheckpolicy "haxe.macro.ModuleCheckPolicy") = 1`
If a file is modified, also checks if its content changed. This check is not free, but useful when .hx files are auto-generated.
### `inline read only [NoCheckDependencies](#NoCheckDependencies):[ModuleCheckPolicy](modulecheckpolicy "haxe.macro.ModuleCheckPolicy") = 2`
Disables dependency checks of the module.
This should only be used for modules that don't depend on any module that might change. It is effectively a promise to the compiler that the module is unaffected by changes made to other modules. If that promise is broken, the compiler is sad and things probably stop working.
### `inline read only [NoCheckFileTimeModification](#NoCheckFileTimeModification):[ModuleCheckPolicy](modulecheckpolicy "haxe.macro.ModuleCheckPolicy") = 0`
Disables file modification checks, avoiding some filesystem operations.
### `inline read only [NoCheckShadowing](#NoCheckShadowing):[ModuleCheckPolicy](modulecheckpolicy "haxe.macro.ModuleCheckPolicy") = 3`
Disables file shadowing checks. Shadowing can occur when a new file is added to a class-path that has higher priority than the class-path of the current module file.
<https://api.haxe.org/haxe/macro/ModuleCheckPolicy.html>

Metadata
========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents metadata in the AST.
Alias
-----
*alias for* `[Array](../../array "Array")<[haxe.macro.MetadataEntry](metadataentry "haxe.macro.MetadataEntry - Represents a metadata entry in the AST.")>`
<https://api.haxe.org/haxe/macro/Metadata.html>

ModuleType
==========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents a module type. These are the types that can be declared in a Haxe module and which are passed to the generators (except `TTypeDecl`).
Values
------
### `TClassDecl(c:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>)`
A class.
### `TEnumDecl(e:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[EnumType](enumtype "haxe.macro.EnumType - Represents an enum type.")>)`
An enum.
### `TTypeDecl(t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[DefType](deftype "haxe.macro.DefType - Represents a typedef.")>)`
A typedef.
### `TAbstract(a:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[AbstractType](abstracttype "haxe.macro.AbstractType - Represents an abstract type.")>)`
An abstract.
<https://api.haxe.org/haxe/macro/ModuleType.html>

NullSafetyMode([String](../../string "String - The basic String class."))
==========================================================================
package [haxe.macro](index)
to [String](../../string "String - The basic String class.")
import [haxe.macro.Compiler](compiler)
*Available on all platforms*
Variables
---------
### `inline read only [Loose](#Loose):[NullSafetyMode](nullsafetymode "haxe.macro.NullSafetyMode") = "Loose"`
Loose safety. If an expression is checked `!= null`, then it's considered safe even if it could be modified after the check. E.g.
```
function example(o:{field:Null<String>}) {
if(o.field != null) {
mutate(o);
var notNullable:String = o.field; //no error
}
}
function mutate(o:{field:Null<String>}) {
o.field = null;
}
```
### `inline read only [Off](#Off):[NullSafetyMode](nullsafetymode "haxe.macro.NullSafetyMode") = "Off"`
Disable null safety.
### `inline read only [Strict](#Strict):[NullSafetyMode](nullsafetymode "haxe.macro.NullSafetyMode") = "Strict"`
Full scale null safety. If a field is checked `!= null` it stays safe until a call is made or any field of any object is reassigned, because that could potentially alter an object of the checked field. E.g.
```
function example(o:{field:Null<String>}, b:{o:{field:Null<String>}}) {
if(o.field != null) {
var notNullable:String = o.field; //no error
someCall();
var notNullable:String = o.field; // Error!
}
if(o.field != null) {
var notNullable:String = o.field; //no error
b.o = {field:null};
var notNullable:String = o.field; // Error!
}
}
```
### `inline read only [StrictThreaded](#StrictThreaded):[NullSafetyMode](nullsafetymode "haxe.macro.NullSafetyMode") = "StrictThreaded"`

Full scale null safety for a multi-threaded environment. In this mode, checking a field `!= null` does not make it safe, because it could be changed from another thread at the same time or immediately after the check. Only local variables can be considered safe after such a check.
<https://api.haxe.org/haxe/macro/NullSafetyMode.html>

QuoteStatus
===========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents the way something is quoted.
Values
------
### `Unquoted`
No quotes
### `Quoted`

Double quotes `"`
<https://api.haxe.org/haxe/macro/QuoteStatus.html>

ObjectField
===========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents the field of an object declaration.
Fields
------
### `optional [quotes](#quotes):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[QuoteStatus](quotestatus "haxe.macro.QuoteStatus - Represents the way something is quoted.")>`

How the field name is quoted.

### `[field](#field):[String](../../string "String - The basic String class.")`

The name of the field.

### `[expr](#expr):[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")`

The field expression.
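As a sketch of how these fields fit together, an `EObjectDecl` node for the object literal `{"some-field": 1}` could be built by hand (assuming a macro context for `Context.currentPos()`):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

var obj:Expr = {
	expr: EObjectDecl([
		// quotes: Quoted because "some-field" is not a valid identifier
		{field: "some-field", expr: macro 1, quotes: Quoted}
	]),
	pos: Context.currentPos()
};
```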
<https://api.haxe.org/haxe/macro/ObjectField.html>

Position
========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a position in a file.
Fields
------
### `[min](#min):[Int](../../int "Int - The standard Int type.")`
Position of the first character.

### `[max](#max):[Int](../../int "Int - The standard Int type.")`

Position of the last character.

### `[file](#file):[String](../../string "String - The basic String class.")`

Reference to the filename.
<https://api.haxe.org/haxe/macro/Position.html>

PositionTools
=============
package [haxe.macro](index)
*Available on all platforms*
Static methods
--------------
### `static[getInfos](#getInfos)(p:[Position](position "haxe.macro.Position - Represents a position in a file.")):{min:[Int](../../int "Int - The standard Int type."), max:[Int](../../int "Int - The standard Int type."), file:[String](../../string "String - The basic String class.")}`
Like `[Context.getPosInfos](context#getPosInfos)`, except this method is available on all platforms.
### `static[here](#here)():[Position](position "haxe.macro.Position - Represents a position in a file.")`
Returns the `[Position](position#Position)` where the caller of `here` is.
### `static[make](#make)(inf:{min:[Int](../../int "Int - The standard Int type."), max:[Int](../../int "Int - The standard Int type."), file:[String](../../string "String - The basic String class.")}):[Position](position "haxe.macro.Position - Represents a position in a file.")`
Like `[Context.makePosition](context#makePosition)`, except this method is available on all platforms.
### `static[toLocation](#toLocation)(p:[Position](position "haxe.macro.Position - Represents a position in a file.")):[Location](../display/location "haxe.display.Location - Represents a location inside a resource, such as a line inside a text file.")`
*Available on macro*
Converts a `[haxe.macro.Position](position#Position)` to a `[haxe.display.Position.Location](position#Location)`.
This operation requires the source file to be known to the Haxe lexer in order to determine line breaks. It is thus only available in macro context.
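A small sketch of combining these helpers inside a macro function (`logHere` and the class name are hypothetical names for illustration):

```haxe
import haxe.macro.Expr;
import haxe.macro.PositionTools;

class PosDemo {
	public static macro function logHere():Expr {
		var pos = PositionTools.here(); // position of the call site
		var info = PositionTools.getInfos(pos); // {min, max, file}
		trace('${info.file}: characters ${info.min}-${info.max}');
		return macro null;
	}
}
```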
<https://api.haxe.org/haxe/macro/PositionTools.html>

Printer
=======
package [haxe.macro](index)
*Available on all platforms*
This class provides some utility methods to convert elements from the macro context to a human-readable String representation.
Constructor
-----------
### `[new](#new)(tabString:[String](../../string "String - The basic String class.") = "\t")`
Methods
-------
### `[printAccess](#printAccess)(access:[Access](access "haxe.macro.Access - Represents an access modifier.")):[String](../../string "String - The basic String class.")`
### `[printBinop](#printBinop)(op:[Binop](binop "haxe.macro.Binop - A binary operator.")):[String](../../string "String - The basic String class.")`
### `[printComplexType](#printComplexType)(ct:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")):[String](../../string "String - The basic String class.")`
### `[printConstant](#printConstant)(c:[Constant](constant "haxe.macro.Constant - Represents a constant.")):[String](../../string "String - The basic String class.")`
### `[printExpr](#printExpr)(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")):[String](../../string "String - The basic String class.")`
### `[printExprWithPositions](#printExprWithPositions)(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")):[String](../../string "String - The basic String class.")`
### `[printExprs](#printExprs)(el:[Array](../../array "Array")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>, sep:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
### `[printField](#printField)(field:[Field](field "haxe.macro.Field - Represents a field in the AST.")):[String](../../string "String - The basic String class.")`
### `[printFormatString](#printFormatString)(s:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
### `[printFunction](#printFunction)(func:[Function](function "haxe.macro.Function - Represents a function in the AST."), ?kind:[FunctionKind](functionkind "haxe.macro.FunctionKind")):[String](../../string "String - The basic String class.")`
### `[printFunctionArg](#printFunctionArg)(arg:[FunctionArg](functionarg "haxe.macro.FunctionArg - Represents a function argument in the AST.")):[String](../../string "String - The basic String class.")`
### `[printMetadata](#printMetadata)(meta:[MetadataEntry](metadataentry "haxe.macro.MetadataEntry - Represents a metadata entry in the AST.")):[String](../../string "String - The basic String class.")`
### `[printObjectField](#printObjectField)(of:[ObjectField](objectfield "haxe.macro.ObjectField - Represents the field of an object declaration.")):[String](../../string "String - The basic String class.")`
### `[printObjectFieldKey](#printObjectFieldKey)(of:[ObjectField](objectfield "haxe.macro.ObjectField - Represents the field of an object declaration.")):[String](../../string "String - The basic String class.")`
### `[printString](#printString)(s:[String](../../string "String - The basic String class.")):[String](../../string "String - The basic String class.")`
### `[printTypeDefinition](#printTypeDefinition)(t:[TypeDefinition](typedefinition "haxe.macro.TypeDefinition - Represents a type definition."), printPackage:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true):[String](../../string "String - The basic String class.")`
### `[printTypeParam](#printTypeParam)(param:[TypeParam](typeparam "haxe.macro.TypeParam - Represents a concrete type parameter in the AST.")):[String](../../string "String - The basic String class.")`
### `[printTypeParamDecl](#printTypeParamDecl)(tpd:[TypeParamDecl](typeparamdecl "haxe.macro.TypeParamDecl - Represents a type parameter declaration in the AST.")):[String](../../string "String - The basic String class.")`
### `[printTypePath](#printTypePath)(tp:[TypePath](typepath "haxe.macro.TypePath - Represents a type path in the AST.")):[String](../../string "String - The basic String class.")`
### `[printUnop](#printUnop)(op:[Unop](unop "haxe.macro.Unop - A unary operator.")):[String](../../string "String - The basic String class.")`
### `[printVar](#printVar)(v:[Var](var "haxe.macro.Var - Represents a variable in the AST.")):[String](../../string "String - The basic String class.")`
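For instance, an expression built via reification can be turned back into source text (a sketch assuming a macro context, where the `macro` keyword is available):

```haxe
import haxe.macro.Printer;

var printer = new Printer("  "); // two-space indentation instead of the default tab
var e = macro if (x > 0) trace("positive");
trace(printer.printExpr(e)); // human-readable rendering of the AST node
```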
<https://api.haxe.org/haxe/macro/Printer.html>

Ref<T>
======
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents a reference to internal compiler structure. It exists to avoid expensive encoding if it is not required and to ensure that physical equality remains intact.
A structure is only encoded when user requests it through `ref.get()`.
Fields
------
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
### `[get](#get)():T`
<https://api.haxe.org/haxe/macro/Ref.html>

StringLiteralKind
=================
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Values
------
### `DoubleQuotes`
### `SingleQuotes`
<https://api.haxe.org/haxe/macro/StringLiteralKind.html>

TConstant
=========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents a typed constant.
Values
------
### `TInt(i:[Int](../../int "Int - The standard Int type."))`
An `[Int](../../int)` literal.
### `TFloat(s:[String](../../string "String - The basic String class."))`
A `[Float](../../float)` literal, represented as String to avoid precision loss.
### `TString(s:[String](../../string "String - The basic String class."))`
A `[String](../../string)` literal.
### `TBool(b:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."))`
A `[Bool](../../bool)` literal.
### `TNull`
The constant `null`.
### `TThis`
The constant `this`.
### `TSuper`
The constant `super`.
<https://api.haxe.org/haxe/macro/TConstant.html>

TFunc
=====
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents a function in the typed AST.
Fields
------
### `[t](#t):[Type](type "haxe.macro.Type - Represents a type.")`
The return type of the function.

### `[expr](#expr):[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")`

The expression of the function body.

### `[args](#args):[Array](../../array "Array")<{value:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>, v:[TVar](tvar "haxe.macro.TVar - Represents a variable in the typed AST.")}>`

A list of function arguments, each identified by an argument variable `v` and an optional initialization `value`.
<https://api.haxe.org/haxe/macro/TFunc.html>

TVar
====
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents a variable in the typed AST.
Fields
------
### `read only [t](#t):[Type](type "haxe.macro.Type - Represents a type.")`

The type of the variable.

### `read only [name](#name):[String](../../string "String - The basic String class.")`

The name of the variable.

### `read only [meta](#meta):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[MetaAccess](metaaccess "haxe.macro.MetaAccess - MetaAccess is a wrapper for the Metadata array.")>`

The metadata of the variable.

### `read only [id](#id):[Int](../../int "Int - The standard Int type.")`

The unique ID of the variable.

### `read only [extra](#extra):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<{params:[Array](../../array "Array")<[TypeParameter](typeparameter "haxe.macro.TypeParameter - Represents the declaration of type parameters.")>, expr:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>}>`

Special information which is internally used to keep track of closure information.

### `read only [capture](#capture):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`

Whether or not the variable has been captured by a closure.
<https://api.haxe.org/haxe/macro/TVar.html>

Type
====
package [haxe.macro](index)
*Available on all platforms*
Represents a type.
Values
------
### `TMono(t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Type](type "haxe.macro.Type - Represents a type.")>>)`
Represents a monomorph.
See also:
* <https://haxe.org/manual/types-monomorph.html>

### `TEnum(t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[EnumType](enumtype "haxe.macro.EnumType - Represents an enum type.")>, params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>)`
Represents an enum instance.
See also:
* <https://haxe.org/manual/types-enum-instance.html>

### `TInst(t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>, params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>)`
Represents a class instance.
See also:
* <https://haxe.org/manual/types-class-instance.html>

### `TType(t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[DefType](deftype "haxe.macro.DefType - Represents a typedef.")>, params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>)`
Represents a typedef.
See also:
* <https://haxe.org/manual/type-system-typedef.html>

### `TFun(args:[Array](../../array "Array")<{t:[Type](type "haxe.macro.Type - Represents a type."), opt:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."), name:[String](../../string "String - The basic String class.")}>, ret:[Type](type "haxe.macro.Type - Represents a type."))`
Represents a function type.
See also:
* <https://haxe.org/manual/types-function.html>

### `TAnonymous(a:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[AnonType](anontype "haxe.macro.AnonType - Represents information for anonymous structure types.")>)`
Represents an anonymous structure type.
See also:
* <https://haxe.org/manual/types-anonymous-structure.html>

### `TDynamic(t:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Type](type "haxe.macro.Type - Represents a type.")>)`
Represents Dynamic.
See also:
* <https://haxe.org/manual/types-dynamic.html>

### `TLazy(f:() -> [Type](type "haxe.macro.Type - Represents a type."))`
Used internally by the compiler to delay some typing.
### `TAbstract(t:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[AbstractType](abstracttype "haxe.macro.AbstractType - Represents an abstract type.")>, params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>)`
Represents an abstract type.
See also:
* <https://haxe.org/manual/types-abstract.html>

<https://api.haxe.org/haxe/macro/Type.html>

TypeDefKind
===========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a type definition kind.
Values
------
### `TDEnum`
Represents an enum kind.
### `TDStructure`
Represents a structure kind.
### `TDClass(superClass:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypePath](typepath "haxe.macro.TypePath - Represents a type path in the AST.")>, interfaces:[Array](../../array "Array")<[TypePath](typepath "haxe.macro.TypePath - Represents a type path in the AST.")>, isInterface:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."), isFinal:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."))`
Represents a class kind.
### `TDAlias(t:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST."))`
Represents an alias/typedef kind.
### `TDAbstract(tthis:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>, from:[Array](../../array "Array")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>, to:[Array](../../array "Array")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>)`
Represents an abstract kind.
<https://api.haxe.org/haxe/macro/TypeDefKind.html>

TypeDefinition
==============
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a type definition.
Fields
------
### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`
The position of the type definition.

### `optional [params](#params):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[TypeParamDecl](typeparamdecl "haxe.macro.TypeParamDecl - Represents a type parameter declaration in the AST.")>>`

The parameter type declarations of the type definition.

### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`

The package of the type definition.

### `[name](#name):[String](../../string "String - The basic String class.")`

The name of the type definition.

### `optional [meta](#meta):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Metadata](metadata "haxe.macro.Metadata - Represents metadata in the AST.")>`

The optional metadata of the type definition.

### `[kind](#kind):[TypeDefKind](typedefkind "haxe.macro.TypeDefKind - Represents a type definition kind.")`

The kind of the type definition.

### `optional [isExtern](#isExtern):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`

Whether or not the type is extern.

### `[fields](#fields):[Array](../../array "Array")<[Field](field "haxe.macro.Field - Represents a field in the AST.")>`

The fields of the type definition.

### `optional [doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The documentation of the type, if available. If the type has no documentation, the value is `null`.
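A sketch of constructing such a definition and registering it with the compiler (`my.pack.Generated` is an illustrative name, and `Context.defineType` assumes a macro context):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;
import haxe.macro.Printer;

var def:TypeDefinition = {
	pos: Context.currentPos(),
	pack: ["my", "pack"],
	name: "Generated",
	kind: TDClass(null, [], false, false), // no super class, no interfaces
	fields: []
};
trace(new Printer().printTypeDefinition(def)); // inspect the generated source
Context.defineType(def); // my.pack.Generated is now visible to the typer
```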
<https://api.haxe.org/haxe/macro/TypeDefinition.html>

TypeParam
=========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a concrete type parameter in the AST.
Haxe allows expressions in concrete type parameters, e.g. `new YourType<["hello", "world"]>`. In that case the value is `TPExpr` while in the normal case it's `TPType`.
Values
------
### `TPType(t:[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST."))`
### `TPExpr(e:[Expr](expr "haxe.macro.Expr - Represents a node in the AST."))`
<https://api.haxe.org/haxe/macro/TypeParam.html>

TypeParamDecl
=============
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a type parameter declaration in the AST.
Fields
------
### `optional [params](#params):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[TypeParamDecl](typeparamdecl "haxe.macro.TypeParamDecl - Represents a type parameter declaration in the AST.")>>`

The optional parameters of the type parameter.

### `[name](#name):[String](../../string "String - The basic String class.")`

The name of the type parameter.

### `optional [meta](#meta):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Metadata](metadata "haxe.macro.Metadata - Represents metadata in the AST.")>`

The metadata of the type parameter.

### `optional [constraints](#constraints):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>>`
The optional constraints of the type parameter.
<https://api.haxe.org/haxe/macro/TypeParamDecl.html>

TypeParameter
=============
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents the declaration of type parameters.
Fields
------
### `[t](#t):[Type](type "haxe.macro.Type - Represents a type.")`
The type of the type parameter. It is guaranteed to be a `TInst` with a `KTypeParameter` kind.

### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the type parameter.
<https://api.haxe.org/haxe/macro/TypeParameter.html>

TypePath
========
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a type path in the AST.
Fields
------
### `optional [sub](#sub):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

Sub is set on module sub-type access: `pack.Module.Type` has `name = "Module"`, `sub = "Type"`, if available.

### `optional [params](#params):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[TypeParam](typeparam "haxe.macro.TypeParam - Represents a concrete type parameter in the AST.")>>`

Optional parameters of the type path.

### `[pack](#pack):[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`

Represents the package of the type path.

### `[name](#name):[String](../../string "String - The basic String class.")`
The name of the type path.
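For example, the dotted path `haxe.macro.Expr.ExprDef` corresponds to this `TypePath` value:

```haxe
import haxe.macro.Expr;

var tp:TypePath = {
	pack: ["haxe", "macro"],
	name: "Expr", // the module
	sub: "ExprDef", // the sub-type inside the module
	params: []
};
```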
<https://api.haxe.org/haxe/macro/TypePath.html>

TypeTools
=========
package [haxe.macro](index)
*Available on all platforms*
This class provides some utility methods to work with types. It is best used through `using haxe.macro.TypeTools` syntax, which then provides additional methods on `haxe.macro.Type` instances.
Static methods
--------------
### `static[applyTypeParameters](#applyTypeParameters)(t:[Type](type "haxe.macro.Type - Represents a type."), typeParameters:[Array](../../array "Array")<[TypeParameter](typeparameter "haxe.macro.TypeParameter - Represents the declaration of type parameters.")>, concreteTypes:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>):[Type](type "haxe.macro.Type - Represents a type.")`
*Available on macro*
Applies the type parameters `typeParameters` to type `t` with the given types `concreteTypes`.
This function replaces occurrences of type parameters in `t` if they are part of `typeParameters`. The array index of such a type parameter is then used to lookup the concrete type in `concreteTypes`.
If `typeParameters.length` is not equal to `concreteTypes.length`, an exception of type `[String](../../string)` is thrown.
If `typeParameters.length` is 0, `t` is returned unchanged.
If either argument is `null`, the result is unspecified.
### `static[findField](#findField)(c:[ClassType](classtype "haxe.macro.ClassType - Represents a class type."), name:[String](../../string "String - The basic String class."), isStatic:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = false):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ClassField](classfield "haxe.macro.ClassField - Represents a class field.")>`
Resolves the field named `name` on class `c`.
If `isStatic` is true, the class's static fields are checked; otherwise its member fields are checked.
If the field is found, it is returned. Otherwise if `c` has a super class, `findField` recursively checks that super class. Otherwise null is returned.
If any argument is null, the result is unspecified.
### `staticinline[follow](#follow)(t:[Type](type "haxe.macro.Type - Represents a type."), ?once:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")):[Type](type "haxe.macro.Type - Represents a type.")`
*Available on macro*
Follows all typedefs of `t` to reach the actual type.
If `once` is true, this function does not call itself recursively, otherwise it does. This can be useful in cases where intermediate typedefs might be of interest.
Affected types are monomorphs `TMono` and typedefs `TType(t,pl)`.
If `t` is null, an internal exception is thrown.
Usage example:
```
var t = Context.typeof(macro null); // TMono(<mono>)
var ts = Context.typeof(macro "foo"); //TInst(String,[])
Context.unify(t, ts);
trace(t); // TMono(<mono>)
trace(t.follow()); //TInst(String,[])
```
### `staticinline[followWithAbstracts](#followWithAbstracts)(t:[Type](type "haxe.macro.Type - Represents a type."), once:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = false):[Type](type "haxe.macro.Type - Represents a type.")`
*Available on macro*
Like `follow`, follows all typedefs of `t` to reach the actual type.
However, it also follows abstracts to their underlying implementation, unless they are `@:coreType` abstracts.
If `t` is null, an internal exception is thrown.
Usage example:
```
var t = Context.typeof(macro new Map<String, String>());
trace(t); // TAbstract(Map,[TInst(String,[]),TInst(String,[])])
trace(t.followWithAbstracts()); // TInst(haxe.ds.StringMap, [TInst(String,[])])
```
### `static[getClass](#getClass)(t:[Type](type "haxe.macro.Type - Represents a type.")):[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")`
*Available on macro*
Tries to extract the class instance stored inside `t`.
If `t` is a class instance `TInst(c,pl)`, `c` is returned.
If `t` is of a different type, an exception of type `String` is thrown.
If `t` is null, the result is null.
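For instance (a sketch, assuming a macro context):

```
// Extract the ClassType from a TInst (sketch, macro context)
var cl = TypeTools.getClass(Context.getType("String"));
trace(cl.name); // "String"
```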
### `static[getEnum](#getEnum)(t:[Type](type "haxe.macro.Type - Represents a type.")):[EnumType](enumtype "haxe.macro.EnumType - Represents an enum type.")`
*Available on macro*
Tries to extract the enum instance stored inside `t`.
If `t` is an enum instance `TEnum(e,pl)`, `e` is returned.
If `t` is of a different type, an exception of type `String` is thrown.
If `t` is null, the result is null.
### `static[iter](#iter)(t:[Type](type "haxe.macro.Type - Represents a type."), f:[Type](type "haxe.macro.Type - Represents a type.") ‑> [Void](../../void "Void - The standard Void type.")):[Void](../../void "Void - The standard Void type.")`
*Available on macro*
Calls function `f` on each component of type `t`.
If `t` is not a compound type, this operation has no effect.
The following types are considered compound:

* `TInst`, `TEnum`, `TType` and `TAbstract` with type parameters
* `TFun`
* `TAnonymous`
If `t` or `f` are null, the result is unspecified.
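For example, `iter` can drive a recursive traversal (a sketch, assuming a macro context):

```
// Count how many monomorphs occur anywhere in a type (sketch)
function countMonos(t:Type):Int {
	var n = switch (t) {
		case TMono(_): 1;
		case _: 0;
	}
	TypeTools.iter(t, function(sub) n += countMonos(sub));
	return n;
}
```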
### `static[map](#map)(t:[Type](type "haxe.macro.Type - Represents a type."), f:[Type](type "haxe.macro.Type - Represents a type.") ‑> [Type](type "haxe.macro.Type - Represents a type.")):[Type](type "haxe.macro.Type - Represents a type.")`
*Available on macro*
Transforms `t` by calling `f` on each of its subtypes.
If `t` is a compound type, `f` is called on each of its components.
Otherwise `t` is returned unchanged.
The following types are considered compound:

* `TInst`, `TEnum`, `TType` and `TAbstract` with type parameters
* `TFun`
* `TAnonymous`
If `t` or `f` are null, the result is unspecified.
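A common pattern is to combine `map` with `follow` for a deep traversal (a sketch, assuming a macro context):

```
// Recursively resolve all typedefs within a type (sketch)
function deepFollow(t:Type):Type {
	return TypeTools.map(TypeTools.follow(t), deepFollow);
}
```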
### `static[toComplexType](#toComplexType)(type:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Type](type "haxe.macro.Type - Represents a type.")>):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>`
Returns a syntax-level type corresponding to Type `type`.
This function is mostly inverse to `[ComplexTypeTools.toType](complextypetools#toType)`, but may lose some information on types that do not have a corresponding syntax version, such as monomorphs. In these cases, the result is null.
If `type` is null, an internal exception is thrown.
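A typical use is converting a resolved type back into syntax so it can appear in generated code (a sketch, assuming a macro context):

```
// Turn a Type into a ComplexType usable in reification (sketch)
var t = Context.typeof(macro "foo"); // TInst(String,[])
var ct = TypeTools.toComplexType(t);
// use it in an expression via the check-type syntax
var e = macro (null : $ct);
```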
### `static[toString](#toString)(t:[Type](type "haxe.macro.Type - Represents a type.")):[String](../../string "String - The basic String class.")`
*Available on macro*
Converts type `t` to a human-readable String representation.
### `staticinline[unify](#unify)(t1:[Type](type "haxe.macro.Type - Represents a type."), t2:[Type](type "haxe.macro.Type - Represents a type.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
*Available on macro*
Returns true if `t1` and `t2` unify, false otherwise.
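For example (a sketch, assuming a macro context and `using haxe.macro.TypeTools`):

```
// Int unifies with Float, but not the other way around (sketch)
var tInt = Context.typeof(macro 1);
var tFloat = Context.typeof(macro 1.0);
trace(tInt.unify(tFloat));  // expected: true
trace(tFloat.unify(tInt));  // expected: false
```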
© 2005–2020 Haxe Foundation
Licensed under an MIT license.
<https://api.haxe.org/haxe/macro/TypeTools.html>

TypedExpr
=========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents a typed AST node.
Fields
------
### `[t](#t):[Type](type "haxe.macro.Type - Represents a type.")`
The type of the expression.

### `[pos](#pos):[Position](position "haxe.macro.Position - Represents a position in a file.")`

The position of the expression.

### `[expr](#expr):[TypedExprDef](typedexprdef "haxe.macro.TypedExprDef - Represents kind of a node in the typed AST.")`

The expression kind.
<https://api.haxe.org/haxe/macro/TypedExpr.html>

TypedExprDef
============
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents the kind of a node in the typed AST.
Values
------
### `TConst(c:[TConstant](tconstant "haxe.macro.TConstant - Represents typed constant."))`
A constant.
### `TLocal(v:[TVar](tvar "haxe.macro.TVar - Represents a variable in the typed AST."))`
Reference to a local variable `v`.
### `TArray(e1:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), e2:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."))`
Array access `e1[e2]`.
### `TBinop(op:[Binop](binop "haxe.macro.Binop - A binary operator."), e1:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), e2:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."))`
Binary operator `e1 op e2`.
### `TField(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), fa:[FieldAccess](fieldaccess "haxe.macro.FieldAccess - Represents the kind of field access in the typed AST."))`
Field access on `e` according to `fa`.
### `TTypeExpr(m:[ModuleType](moduletype "haxe.macro.ModuleType - Represents a module type."))`
Reference to a module type `m`.
### `TParenthesis(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."))`
Parentheses `(e)`.
### `TObjectDecl(fields:[Array](../../array "Array")<{name:[String](../../string "String - The basic String class."), expr:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")}>)`
An object declaration.
### `TArrayDecl(el:[Array](../../array "Array")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>)`
An array declaration `[el]`.
### `TCall(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), el:[Array](../../array "Array")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>)`
A call `e(el)`.
### `TNew(c:[Ref](ref "haxe.macro.Ref - Represents a reference to internal compiler structure.")<[ClassType](classtype "haxe.macro.ClassType - Represents a class type.")>, params:[Array](../../array "Array")<[Type](type "haxe.macro.Type - Represents a type.")>, el:[Array](../../array "Array")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>)`
A constructor call `new c<params>(el)`.
### `TUnop(op:[Unop](unop "haxe.macro.Unop - A unary operator."), postFix:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."), e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."))`
A unary operator `op` on `e`:

* `e++` (`op = OpIncrement`, `postFix = true`)
* `e--` (`op = OpDecrement`, `postFix = true`)
* `++e` (`op = OpIncrement`, `postFix = false`)
* `--e` (`op = OpDecrement`, `postFix = false`)
* `-e` (`op = OpNeg`, `postFix = false`)
* `!e` (`op = OpNot`, `postFix = false`)
* `~e` (`op = OpNegBits`, `postFix = false`)
### `TFunction(tfunc:[TFunc](tfunc "haxe.macro.TFunc - Represents a function in the typed AST."))`
A function declaration.
### `TVar(v:[TVar](tvar "haxe.macro.TVar - Represents a variable in the typed AST."), expr:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>)`
A variable declaration `var v` or `var v = expr`.
### `TBlock(el:[Array](../../array "Array")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>)`
A block declaration `{el}`.
### `TFor(v:[TVar](tvar "haxe.macro.TVar - Represents a variable in the typed AST."), e1:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), e2:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."))`
A `for` expression.
### `TIf(econd:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), eif:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), eelse:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>)`
An `if(econd) eif` or `if(econd) eif else eelse` expression.
### `TWhile(econd:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), normalWhile:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."))`
Represents a `while` expression. When `normalWhile` is `[true](../../bool)` it is `while (...)`. When `normalWhile` is `[false](../../bool)` it is `do {...} while (...)`.
### `TSwitch(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), cases:[Array](../../array "Array")<{values:[Array](../../array "Array")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>, expr:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")}>, edef:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>)`
Represents a `switch` expression with related cases and an optional `default` case if `edef != null`.
### `TTry(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), catches:[Array](../../array "Array")<{v:[TVar](tvar "haxe.macro.TVar - Represents a variable in the typed AST."), expr:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")}>)`
Represents a `try`-expression with related catches.
### `TReturn(e:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")>)`
A `return` or `return e` expression.
### `TBreak`
A `break` expression.
### `TContinue`
A `continue` expression.
### `TThrow(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."))`
A `throw e` expression.
### `TCast(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), m:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ModuleType](moduletype "haxe.macro.ModuleType - Represents a module type.")>)`
A `cast e` or `cast (e, m)` expression.
### `TMeta(m:[MetadataEntry](metadataentry "haxe.macro.MetadataEntry - Represents a metadata entry in the AST."), e1:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."))`
A `@m e1` expression.
### `TEnumParameter(e1:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), ef:[EnumField](enumfield "haxe.macro.EnumField - Represents an enum constructor."), index:[Int](../../int "Int - The standard Int type."))`
Access to an enum parameter (generated by the pattern matcher).
### `TEnumIndex(e1:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."))`
Access to an enum index (generated by the pattern matcher).
### `TIdent(s:[String](../../string "String - The basic String class."))`
An unknown identifier.
<https://api.haxe.org/haxe/macro/TypedExprDef.html>

TypedExprTools
==============
package [haxe.macro](index)
*Available on all platforms*
This class provides some utility methods to work with typed expressions. It is best used through `using haxe.macro.TypedExprTools` syntax, which provides additional methods on `[haxe.macro.TypedExpr](typedexpr#TypedExpr)` instances.
Static methods
--------------
### `static[iter](#iter)(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), f:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.") ‑> [Void](../../void "Void - The standard Void type.")):[Void](../../void "Void - The standard Void type.")`
Calls function `f` on each sub-expression of `e`.
See `[haxe.macro.ExprTools.iter](exprtools#iter)` for details on iterating expressions in general. This function works the same way, but with a different data structure.
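For instance, `iter` can collect information from a typed AST (a sketch, assuming a macro context and `using haxe.macro.TypedExprTools`):

```
// Collect every local variable referenced in a typed expression (sketch)
function collectLocals(e:TypedExpr, acc:Array<TVar>):Void {
	switch (e.expr) {
		case TLocal(v): acc.push(v);
		case _:
	}
	e.iter(function(sub) collectLocals(sub, acc));
}
```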
### `static[map](#map)(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), f:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.") ‑> [TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")):[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")`
Transforms the sub-expressions of `e` by calling `f` on each of them.
See `[haxe.macro.ExprTools.map](exprtools#map)` for details on expression mapping in general. This function works the same way, but with a different data structure.
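For instance (a sketch, assuming a macro context and `using haxe.macro.TypedExprTools`):

```
// Remove redundant parentheses nodes from a typed AST (sketch)
function stripParens(e:TypedExpr):TypedExpr {
	return switch (e.expr) {
		case TParenthesis(inner): stripParens(inner);
		case _: e.map(stripParens);
	}
}
```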
### `static[mapWithType](#mapWithType)(e:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), f:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.") ‑> [TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), ft:[Type](type "haxe.macro.Type - Represents a type.") ‑> [Type](type "haxe.macro.Type - Represents a type."), fv:[TVar](tvar "haxe.macro.TVar - Represents a variable in the typed AST.") ‑> [TVar](tvar "haxe.macro.TVar - Represents a variable in the typed AST.")):[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node.")`
Transforms the sub-expressions of `e` by calling `f` on each of them. Additionally, types are mapped using `ft` and variables are mapped using `fv`.
See `[haxe.macro.ExprTools.map](exprtools#map)` for details on expression mapping in general. This function works the same way, but with a different data structure.
### `static[toString](#toString)(t:[TypedExpr](typedexpr "haxe.macro.TypedExpr - Represents a typed AST node."), pretty:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = false):[String](../../string "String - The basic String class.")`
*Available on macro*
<https://api.haxe.org/haxe/macro/TypedExprTools.html>

CType
=====
package [haxe.rtti](index)
*Available on all platforms*
The runtime member types.
Values
------
### `CUnknown`
### `CEnum(name:[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type."), params:[Array](../../array "Array")<[CType](ctype "haxe.rtti.CType - The runtime member types.")>)`
### `CClass(name:[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type."), params:[Array](../../array "Array")<[CType](ctype "haxe.rtti.CType - The runtime member types.")>)`
### `CTypedef(name:[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type."), params:[Array](../../array "Array")<[CType](ctype "haxe.rtti.CType - The runtime member types.")>)`
### `CFunction(args:[Array](../../array "Array")<[FunctionArgument](functionargument "haxe.rtti.FunctionArgument - The function argument runtime type information.")>, ret:[CType](ctype "haxe.rtti.CType - The runtime member types."))`
### `CAnonymous(fields:[Array](../../array "Array")<[ClassField](classfield "haxe.rtti.ClassField - The runtime class field information.")>)`
### `CDynamic(t:[CType](ctype "haxe.rtti.CType - The runtime member types."))`
### `CAbstract(name:[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type."), params:[Array](../../array "Array")<[CType](ctype "haxe.rtti.CType - The runtime member types.")>)`
<https://api.haxe.org/haxe/rtti/CType.html>

Unop
====
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
A unary operator.
See also:
* <https://haxe.org/manual/types-numeric-operators.html>

Values
------
### `OpIncrement`

`++`

### `OpDecrement`

`--`

### `OpNot`

`!`

### `OpNeg`

`-`

### `OpNegBits`

`~`
<https://api.haxe.org/haxe/macro/Unop.html>

Var
===
package [haxe.macro](index)
import [haxe.macro.Expr](expr)
*Available on all platforms*
Represents a variable in the AST.
See also:
* <https://haxe.org/manual/expression-var.html>

Fields
------
### `[type](#type):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[ComplexType](complextype "haxe.macro.ComplexType - Represents a type syntax in the AST.")>`
The type-hint of the variable, if available.

### `[name](#name):[String](../../string "String - The basic String class.")`

The name of the variable.

### `optional [isFinal](#isFinal):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")>`

Whether or not the variable can be assigned to.

### `[expr](#expr):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Expr](expr "haxe.macro.Expr - Represents a node in the AST.")>`

The expression of the variable, if available.
<https://api.haxe.org/haxe/macro/Var.html>

VarAccess
=========
package [haxe.macro](index)
import [haxe.macro.Type](type)
*Available on all platforms*
Represents the variable accessor.
Values
------
### `AccNormal`
Normal access (`default`).
### `AccNo`
Private access (`null`).
### `AccNever`
No access (`never`).
### `AccResolve`
Unused.
### `AccCall`
Access through accessor function (`get`, `set`, `dynamic`).
### `AccInline`
Inline access (`inline`).
### `AccRequire(r:[String](../../string "String - The basic String class."), msg:[String](../../string "String - The basic String class."))`
Failed access due to a `@:require` metadata.
### `AccCtor`
Access is only allowed from the constructor.
<https://api.haxe.org/haxe/macro/VarAccess.html>

Abstractdef
===========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The abstract type runtime information.
See also:
* <https://haxe.org/manual/cr-rtti-structure.html#abstract-type-information>

Fields
------
### `[to](#to):[Array](../../array "Array")<{t:[CType](ctype "haxe.rtti.CType - The runtime member types."), field:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>}>`
### `[platforms](#platforms):[Platforms](platforms "haxe.rtti.Platforms - A list of strings representing the targets where the type is available.")`
A list of strings representing the targets where the type is available.

### `[path](#path):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`

The type path of the type.

### `[params](#params):[TypeParams](typeparams "haxe.rtti.TypeParams - An array of strings representing the names of the type parameters the type has.")`

An array of strings representing the names of the type parameters the type has.

### `[module](#module):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`

The type path of the module containing the type.

### `[meta](#meta):[MetaData](metadata "haxe.rtti.MetaData - The list of runtime metadata.")`

The [metadata](https://haxe.org/manual/lf-metadata.html) the type was annotated with.

### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`

Whether or not the type is [`private`](https://haxe.org/manual/type-system-module-sub-types.html#define-private-type).

### `[impl](#impl):[Classdef](classdef "haxe.rtti.Classdef - The runtime class definition information.")`

### `[from](#from):[Array](../../array "Array")<{t:[CType](ctype "haxe.rtti.CType - The runtime member types."), field:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>}>`

### `[file](#file):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

The full slash path of the .hx file containing the type. This might be `null` in case there is no such file, e.g. if the type is defined through a macro.

### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

The documentation of the type. This information is only available if the compiler flag `-D use_rtti_doc` was in place. Otherwise, or if the type has no documentation, the value is `null`.

### `[athis](#athis):[CType](ctype "haxe.rtti.CType - The runtime member types.")`
<https://api.haxe.org/haxe/rtti/Abstractdef.html>

CTypeTools
==========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The `[CTypeTools](ctypetools#CTypeTools)` class contains some extra functionalities for handling `[CType](ctype#CType)` instances.
Static methods
--------------
### `static[toString](#toString)(t:[CType](ctype "haxe.rtti.CType - The runtime member types.")):[String](../../string "String - The basic String class.")`
Get the string representation of `[CType](ctype#CType)`.
<https://api.haxe.org/haxe/rtti/CTypeTools.html>

ClassField
==========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The runtime class field information.
See also:
* <https://haxe.org/manual/cr-rtti-structure.html#class-field-information>

Fields
------
### `[type](#type):[CType](ctype "haxe.rtti.CType - The runtime member types.")`
The type of the field.

### `[set](#set):[Rights](rights "haxe.rtti.Rights - Represents the runtime rights of a type.")`

The [write access](https://haxe.org/manual/class-field-property.html#define-write-access) behavior of the field.

### `[platforms](#platforms):[Platforms](platforms "haxe.rtti.Platforms - A list of strings representing the targets where the type is available.")`

A list of strings representing the targets where the field is available.

### `[params](#params):[TypeParams](typeparams "haxe.rtti.TypeParams - An array of strings representing the names of the type parameters the type has.")`

An array of strings representing the names of the type parameters the field has.

### `[overloads](#overloads):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<[ClassField](classfield "haxe.rtti.ClassField - The runtime class field information.")>>`

The list of available overloads for the field, or `null` if no overloads exist.

### `[name](#name):[String](../../string "String - The basic String class.")`

The name of the field.

### `[meta](#meta):[MetaData](metadata "haxe.rtti.MetaData - The list of runtime metadata.")`

The meta data the field was annotated with.

### `[line](#line):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Int](../../int "Int - The standard Int type.")>`

The line number where the field is defined. This information is only available if the field has an expression. Otherwise the value is `null`.

### `[isPublic](#isPublic):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`

Whether or not the field is `public`.

### `[isOverride](#isOverride):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`

Whether or not the field overrides another field.

### `[isFinal](#isFinal):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`

Whether or not the field is `final`.

### `[get](#get):[Rights](rights "haxe.rtti.Rights - Represents the runtime rights of a type.")`

The [read access](https://haxe.org/manual/class-field-property.html#define-read-access) behavior of the field.

### `[expr](#expr):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

The actual expression of the field, or `null` if there is no expression.

### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

The documentation of the field. This information is only available if the compiler flag `-D use_rtti_doc` was in place. Otherwise, or if the field has no documentation, the value is `null`.
<https://api.haxe.org/haxe/rtti/ClassField.html>

Classdef
========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The runtime class definition information.
Fields
------
### `[tdynamic](#tdynamic):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[CType](ctype "haxe.rtti.CType - The runtime member types.")>`
The type which is dynamically implemented by the class or `null` if no such type exists.### `[superClass](#superClass):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[PathParams](pathparams "haxe.rtti.PathParams - The type parameters in the runtime type information.")>`
The class' parent class defined by its type path and list of type parameters.### `[statics](#statics):[Array](../../array "Array")<[ClassField](classfield "haxe.rtti.ClassField - The runtime class field information.")>`
The list of static class fields.### `[platforms](#platforms):[Platforms](platforms "haxe.rtti.Platforms - A list of strings representing the targets where the type is available.")`
A list of strings representing the targets where the type is available.### `[path](#path):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`
The type path of the type.### `[params](#params):[TypeParams](typeparams "haxe.rtti.TypeParams - An array of strings representing the names of the type parameters the type has.")`
An array of strings representing the names of the type parameters the type has.### `[module](#module):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`
The type path of the module containing the type.### `[meta](#meta):[MetaData](metadata "haxe.rtti.MetaData - The list of runtime metadata.")`
The [metadata](https://haxe.org/manual/lf-metadata.html) the type was annotated with.### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the type is [`private`](https://haxe.org/manual/type-system-module-sub-types.html#define-private-type).### `[isInterface](#isInterface):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the class is actually an [interface](https://haxe.org/manual/types-interfaces.html).### `[isFinal](#isFinal):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the class is `final`.### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Whether or not the class is [extern](https://haxe.org/manual/lf-externs.html).### `[interfaces](#interfaces):[Array](../../array "Array")<[PathParams](pathparams "haxe.rtti.PathParams - The type parameters in the runtime type information.")>`
The list of interfaces defined by their type path and list of type parameters.### `[file](#file):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The full slash path of the .hx file containing the type. This might be `null` in case there is no such file, e.g. if the type is defined through a macro.### `[fields](#fields):[Array](../../array "Array")<[ClassField](classfield "haxe.rtti.ClassField - The runtime class field information.")>`
The list of member [class fields](https://haxe.org/manual/class-field.html).### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
The documentation of the type. This information is only available if the compiler flag `-D use_rtti_doc` was in place. Otherwise, or if the constructor has no documentation, the value is `null`.
<https://api.haxe.org/haxe/rtti/Classdef.html>

EnumField
=========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The runtime enum constructor information.
See also:
* <https://haxe.org/manual/cr-rtti-structure.html#enum-constructor-information>

Fields
------
### `[platforms](#platforms):[Platforms](platforms "haxe.rtti.Platforms - A list of strings representing the targets where the type is available.")`
A list of strings representing the targets where the constructor is available.

### `[name](#name):[String](../../string "String - The basic String class.")`

The name of the constructor.

### `[meta](#meta):[MetaData](metadata "haxe.rtti.MetaData - The list of runtime metadata.")`

The meta data the constructor was annotated with.

### `[doc](#doc):[String](../../string "String - The basic String class.")`

The documentation of the constructor. This information is only available if the compiler flag `-D use_rtti_doc` was in place. Otherwise, or if the constructor has no documentation, the value is `null`.

### `[args](#args):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Array](../../array "Array")<{t:[CType](ctype "haxe.rtti.CType - The runtime member types."), opt:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."), name:[String](../../string "String - The basic String class.")}>>`
The list of arguments the constructor has or `null` if no arguments are available.
<https://api.haxe.org/haxe/rtti/EnumField.html>

Enumdef
========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The enum runtime type information.
See also:
* <https://haxe.org/manual/cr-rtti-structure.html#enum-type-information>

Fields
------
### `[platforms](#platforms):[Platforms](platforms "haxe.rtti.Platforms - A list of strings representing the targets where the type is available.")`
A list of strings representing the targets where the type is available.

### `[path](#path):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`

The type path of the type.

### `[params](#params):[TypeParams](typeparams "haxe.rtti.TypeParams - An array of strings representing the names of the type parameters the type has.")`

An array of strings representing the names of the type parameters the type has.

### `[module](#module):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`

The type path of the module containing the type.

### `[meta](#meta):[MetaData](metadata "haxe.rtti.MetaData - The list of runtime metadata.")`

The [metadata](https://haxe.org/manual/lf-metadata.html) the type was annotated with.

### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`

Whether or not the type is [`private`](https://haxe.org/manual/type-system-module-sub-types.html#define-private-type).

### `[isExtern](#isExtern):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`

Whether or not the enum is [extern](https://haxe.org/manual/lf-externs.html).

### `[file](#file):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

The full slash path of the .hx file containing the type. This might be `null` in case there is no such file, e.g. if the type is defined through a macro.

### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

The documentation of the type. This information is only available if the compiler flag `-D use_rtti_doc` was in place. Otherwise, or if the type has no documentation, the value is `null`.

### `[constructors](#constructors):[Array](../../array "Array")<[EnumField](enumfield "haxe.rtti.EnumField - The runtime enum constructor information.")>`
The list of enum constructors.
<https://api.haxe.org/haxe/rtti/Enumdef.html>

FunctionArgument
=================
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The function argument runtime type information.
Fields
------
### `optional[value](#value):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`
### `[t](#t):[CType](ctype "haxe.rtti.CType - The runtime member types.")`
### `[opt](#opt):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
### `[name](#name):[String](../../string "String - The basic String class.")`
<https://api.haxe.org/haxe/rtti/FunctionArgument.html>

Meta
=====
package [haxe.rtti](index)
*Available on all platforms*
An API to access classes and enums metadata at runtime.
See also:
* <https://haxe.org/manual/cr-rtti.html>

Static methods
--------------
### `static[getFields](#getFields)(t:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")<[Array](../../array "Array")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>>`
Returns the metadata that were declared for the given class fields or enum constructors
### `static[getStatics](#getStatics)(t:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")<[Array](../../array "Array")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>>`
Returns the metadata that were declared for the given class static fields
### `static[getType](#getType)(t:[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")):[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")<[Array](../../array "Array")<[Dynamic](../../dynamic "Dynamic - Dynamic is a special type which is compatible with all other types.")>>`
Returns the metadata that were declared for the given type (class or enum)
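The three methods above read the metadata that survives to runtime, which is metadata written without a leading colon (`@author`, not `@:author`). A minimal sketch of how they might be used (class and metadata names are made up for illustration):

```
@author("Nicolas")
class MyClass {
	@range(1, 8)
	static var value = 0;
}

class Main {
	static function main() {
		// Type-level metadata declared on MyClass itself
		var typeMeta = haxe.rtti.Meta.getType(MyClass);
		trace(typeMeta.author); // e.g. ["Nicolas"]

		// Metadata declared on static fields, keyed by field name
		var staticMeta = haxe.rtti.Meta.getStatics(MyClass);
		trace(staticMeta.value.range); // e.g. [1, 8]
	}
}
```

Since the return type is `Dynamic`, missing metadata simply yields `null` rather than an exception.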
<https://api.haxe.org/haxe/rtti/Meta.html>

MetaData
=========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The list of runtime metadata.
Alias
-----
*alias for* `[Array](../../array "Array")<{params:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>, name:[String](../../string "String - The basic String class.")}>`
<https://api.haxe.org/haxe/rtti/MetaData.html>

Path
=====
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The (dot-)path of the runtime type.
Alias
-----
*alias for* `[String](../../string "String - The basic String class.")`
<https://api.haxe.org/haxe/rtti/Path.html>

PathParams
===========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The type parameters in the runtime type information.
Fields
------
### `[path](#path):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`
The path of the type.

### `[params](#params):[Array](../../array "Array")<[CType](ctype "haxe.rtti.CType - The runtime member types.")>`

The array of parameter types.
<https://api.haxe.org/haxe/rtti/PathParams.html>

Platforms
==========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
A list of strings representing the targets where the type is available.
Alias
-----
*alias for* `[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
<https://api.haxe.org/haxe/rtti/Platforms.html>

Rights
=======
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
Represents the runtime rights of a type.
Values
------
### `RNormal`
### `RNo`
### `RCall(m:[String](../../string "String - The basic String class."))`
### `RMethod`
### `RDynamic`
### `RInline`
<https://api.haxe.org/haxe/rtti/Rights.html>

Rtti
=====
package [haxe.rtti](index)
*Available on all platforms*
Rtti is a helper class which supplements the `@:rtti` metadata.
See also:
* <https://haxe.org/manual/cr-rtti.html>

Static methods
--------------
### `static[getRtti](#getRtti)<T>(c:[Class](../../class "Class - An abstract type that represents a Class.")<T>):[Classdef](classdef "haxe.rtti.Classdef - The runtime class definition information.")`
Returns the `[haxe.rtti.CType.Classdef](ctype#Classdef)` corresponding to class `c`.
If `c` has no runtime type information, e.g. because no `@:rtti` was added, an exception of type `[String](../../string)` is thrown.
If `c` is `null`, the result is unspecified.
### `static[hasRtti](#hasRtti)<T>(c:[Class](../../class "Class - An abstract type that represents a Class.")<T>):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Tells if `c` has runtime type information.
If `c` is `null`, the result is unspecified.
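Putting the two methods together, a typical pattern is to guard `getRtti` with `hasRtti`. A minimal sketch (the `User` class is invented for illustration; it only carries RTTI because of the `@:rtti` annotation):

```
@:rtti
class User {
	public var name:String;
	public function new() {}
}

class Main {
	static function main() {
		if (haxe.rtti.Rtti.hasRtti(User)) {
			var def = haxe.rtti.Rtti.getRtti(User);
			trace(def.path); // "User"
			// def.fields is the list of member ClassField entries
			for (field in def.fields)
				trace(field.name);
		}
	}
}
```

Without the `hasRtti` check, calling `getRtti` on a class compiled without `@:rtti` throws a `String` exception.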
<https://api.haxe.org/haxe/rtti/Rtti.html>

TypeApi
========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
Contains type and equality check functionality for RTTI.
Static methods
--------------
### `static[constructorEq](#constructorEq)(c1:[EnumField](enumfield "haxe.rtti.EnumField - The runtime enum constructor information."), c2:[EnumField](enumfield "haxe.rtti.EnumField - The runtime enum constructor information.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Unlike `c1 == c2`, this function performs a deep equality check on the arguments of the enum constructors, if they exist.
If `c1` or `c2` are `null`, the result is unspecified.
### `static[fieldEq](#fieldEq)(f1:[ClassField](classfield "haxe.rtti.ClassField - The runtime class field information."), f2:[ClassField](classfield "haxe.rtti.ClassField - The runtime class field information.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Unlike `f1 == f2`, this function performs a deep equality check on the given `[ClassField](classfield#ClassField)` instances.
If `f1` or `f2` are `null`, the result is unspecified.
### `static[isVar](#isVar)(t:[CType](ctype "haxe.rtti.CType - The runtime member types.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Returns `[true](../../bool)` if the given `[CType](ctype#CType)` is a variable or `[false](../../bool)` if it is a function.
### `static[rightsEq](#rightsEq)(r1:[Rights](rights "haxe.rtti.Rights - Represents the runtime rights of a type."), r2:[Rights](rights "haxe.rtti.Rights - Represents the runtime rights of a type.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Unlike `r1 == r2`, this function performs a deep equality check on the given `[Rights](rights#Rights)` instances.
If `r1` or `r2` are `null`, the result is unspecified.
### `static[typeEq](#typeEq)(t1:[CType](ctype "haxe.rtti.CType - The runtime member types."), t2:[CType](ctype "haxe.rtti.CType - The runtime member types.")):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
Unlike `t1 == t2`, this function performs a deep equality check on the given `[CType](ctype#CType)` instances.
If `t1` or `t2` are `null`, the result is unspecified.
### `static[typeInfos](#typeInfos)(t:[TypeTree](typetree "haxe.rtti.TypeTree - The tree types of the runtime type.")):[TypeInfos](typeinfos "haxe.rtti.TypeInfos - The general runtime type information.")`
<https://api.haxe.org/haxe/rtti/TypeApi.html>

TypeInfos
==========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The general runtime type information.
Fields
------
### `[platforms](#platforms):[Platforms](platforms "haxe.rtti.Platforms - A list of strings representing the targets where the type is available.")`
A list of strings representing the targets where the type is available.

### `[path](#path):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`

The type path of the type.

### `[params](#params):[TypeParams](typeparams "haxe.rtti.TypeParams - An array of strings representing the names of the type parameters the type has.")`

An array of strings representing the names of the type parameters the type has.

### `[module](#module):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`

The type path of the module containing the type.

### `[meta](#meta):[MetaData](metadata "haxe.rtti.MetaData - The list of runtime metadata.")`

The [metadata](https://haxe.org/manual/lf-metadata.html) the type was annotated with.

### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`

Whether or not the type is [`private`](https://haxe.org/manual/type-system-module-sub-types.html#define-private-type).

### `[file](#file):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

The full slash path of the .hx file containing the type. This might be `null` in case there is no such file, e.g. if the type is defined through a macro.

### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

The documentation of the type. This information is only available if the compiler flag `-D use_rtti_doc` was in place. Otherwise, or if the type has no documentation, the value is `null`.
<https://api.haxe.org/haxe/rtti/TypeInfos.html>

TypeParams
===========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
An array of strings representing the names of the type parameters the type has. As of Haxe 3.2.0, this does not include the constraints.
Alias
-----
*alias for* `[Array](../../array "Array")<[String](../../string "String - The basic String class.")>`
<https://api.haxe.org/haxe/rtti/TypeParams.html>

TypeTree
=========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The tree types of the runtime type.
Values
------
### `TPackage(name:[String](../../string "String - The basic String class."), full:[String](../../string "String - The basic String class."), subs:[Array](../../array "Array")<[TypeTree](typetree "haxe.rtti.TypeTree - The tree types of the runtime type.")>)`
### `TClassdecl(c:[Classdef](classdef "haxe.rtti.Classdef - The runtime class definition information."))`
### `TEnumdecl(e:[Enumdef](enumdef "haxe.rtti.Enumdef - The enum runtime type information."))`
### `TTypedecl(t:[Typedef](typedef "haxe.rtti.Typedef - The typedef runtime information."))`
### `TAbstractdecl(a:[Abstractdef](abstractdef "haxe.rtti.Abstractdef - The abstract type runtime information."))`
<https://api.haxe.org/haxe/rtti/TypeTree.html>

TypeRoot
=========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
Array of `[TypeTree](typetree#TypeTree)`.
Alias
-----
*alias for* `[Array](../../array "Array")<[haxe.rtti.TypeTree](typetree "haxe.rtti.TypeTree - The tree types of the runtime type.")>`
<https://api.haxe.org/haxe/rtti/TypeRoot.html>

Typedef
========
package [haxe.rtti](index)
import [haxe.rtti.CType](ctype)
*Available on all platforms*
The typedef runtime information.
Fields
------
### `[types](#types):[Map](../../map "Map")<[String](../../string "String - The basic String class."), [CType](ctype "haxe.rtti.CType - The runtime member types.")>`
The types of the typedef, by platform.

### `[type](#type):[CType](ctype "haxe.rtti.CType - The runtime member types.")`

The type of the typedef.

### `[platforms](#platforms):[Platforms](platforms "haxe.rtti.Platforms - A list of strings representing the targets where the type is available.")`

A list of strings representing the targets where the type is available.

### `[path](#path):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`

The type path of the type.

### `[params](#params):[TypeParams](typeparams "haxe.rtti.TypeParams - An array of strings representing the names of the type parameters the type has.")`

An array of strings representing the names of the type parameters the type has.

### `[module](#module):[Path](path "haxe.rtti.Path - The (dot-)path of the runtime type.")`

The type path of the module containing the type.

### `[meta](#meta):[MetaData](metadata "haxe.rtti.MetaData - The list of runtime metadata.")`

The [metadata](https://haxe.org/manual/lf-metadata.html) the type was annotated with.

### `[isPrivate](#isPrivate):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`

Whether or not the type is [`private`](https://haxe.org/manual/type-system-module-sub-types.html#define-private-type).

### `[file](#file):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

The full slash path of the .hx file containing the type. This might be `null` in case there is no such file, e.g. if the type is defined through a macro.

### `[doc](#doc):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[String](../../string "String - The basic String class.")>`

The documentation of the type. This information is only available if the compiler flag `-D use_rtti_doc` was in place. Otherwise, or if the type has no documentation, the value is `null`.
<https://api.haxe.org/haxe/rtti/Typedef.html>

XmlParser
==========
package [haxe.rtti](index)
*Available on all platforms*
XmlParser processes the runtime type information (RTTI) which is stored as an XML string in a static field `__rtti`.
See also:
* <https://haxe.org/manual/cr-rtti.html>

Constructor
-----------
### `[new](#new)()`
Variables
---------
### `[root](#root):[TypeRoot](typeroot "haxe.rtti.TypeRoot - Array of TypeTree.")`
Methods
-------
### `dynamic[newField](#newField)(c:[Classdef](classdef "haxe.rtti.Classdef - The runtime class definition information."), f:[ClassField](classfield "haxe.rtti.ClassField - The runtime class field information.")):[Void](../../void "Void - The standard Void type.")`
### `[process](#process)(x:[Xml](../../xml "Xml - Cross-platform Xml API."), platform:[String](../../string "String - The basic String class.")):[Void](../../void "Void - The standard Void type.")`
### `[processElement](#processElement)(x:[Xml](../../xml "Xml - Cross-platform Xml API.")):[TypeTree](typetree "haxe.rtti.TypeTree - The tree types of the runtime type.")`
### `[sort](#sort)(?l:[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[TypeRoot](typeroot "haxe.rtti.TypeRoot - Array of TypeTree.")>):[Void](../../void "Void - The standard Void type.")`
<https://api.haxe.org/haxe/rtti/XmlParser.html>

Access([Xml](../../xml "Xml - Cross-platform Xml API."))
=========================================================
package [haxe.xml](index)
*Available on all platforms*
The `[haxe.xml.Access](access#Access)` API provides fast dot-syntax access to the most common `[Xml](../../xml)` methods.
Variables
---------
### `read only[att](#att):AttribAccess`
Access to a given attribute.
An exception is thrown if the attribute doesn't exist. Use `has` to check the existence of an attribute.
```
var f = new haxe.xml.Access(Xml.parse("<user name='Mark'></user>"));
var user = f.node.user;
if (user.has.name) {
trace(user.att.name); // Mark
}
```
### `read only[elements](#elements):[Iterator](../../iterator "Iterator - An Iterator is a structure that permits iteration over elements of type T.")<[Access](access "haxe.xml.Access - The haxe.")>`
The list of all sub-elements which are the nodes with type `[Xml.Element](../../xml#Element)`.
### `read only[has](#has):HasAttribAccess`
Check the existence of an attribute with the given name.
### `read only[hasNode](#hasNode):HasNodeAccess`
Check the existence of a sub node with the given name.
```
var f = new haxe.xml.Access(Xml.parse("<user><age>31</age></user>"));
var user = f.node.user;
if (user.hasNode.age) {
trace(user.node.age.innerData); // 31
}
```
### `read only[innerData](#innerData):[String](../../string "String - The basic String class.")`
The inner PCDATA or CDATA of the node.
An exception is thrown if there is no data, or if there is not only data but also other nodes.
### `read only[innerHTML](#innerHTML):[String](../../string "String - The basic String class.")`
The XML string built with all the sub nodes, excluding the current one.
### `read only[name](#name):[String](../../string "String - The basic String class.")`
The name of the current element. This is the same as `[Xml.nodeName](../../xml#nodeName)`.
### `read only[node](#node):NodeAccess`
Access to the first sub element with the given name.
An exception is thrown if the element doesn't exist. Use `hasNode` to check the existence of a node.
```
var access = new haxe.xml.Access(Xml.parse("<user><name>John</name></user>"));
var user = access.node.user;
var name = user.node.name;
trace(name.innerData); // John
// Uncaught Error: Document is missing element password
var password = user.node.password;
```
### `read only[nodes](#nodes):NodeListAccess`
Access to the List of elements with the given name.
```
var fast = new haxe.xml.Access(Xml.parse("
<users>
<user name='John'/>
<user name='Andy'/>
<user name='Dan'/>
</users>"
));
var users = fast.node.users;
for (user in users.nodes.user) {
trace(user.att.name);
}
```
### `read only[x](#x):[Xml](../../xml "Xml - Cross-platform Xml API.")`
Methods
-------
### `inline[get_x](#get_x)():[Xml](../../xml "Xml - Cross-platform Xml API.")`
<https://api.haxe.org/haxe/xml/Access.html>

Attrib
=======
package [haxe.xml](index)
import [haxe.xml.Check](check)
*Available on all platforms*
Values
------
### `Att(name:[String](../../string "String - The basic String class."), filter:[Filter](filter "haxe.xml.Filter"), defvalue:[String](../../string "String - The basic String class."))`
<https://api.haxe.org/haxe/xml/Attrib.html>

Check
======
package [haxe.xml](index)
*Available on all platforms*
Static methods
--------------
### `static[checkDocument](#checkDocument)(x:[Xml](../../xml "Xml - Cross-platform Xml API."), r:[Rule](rule "haxe.xml.Rule")):[Void](../../void "Void - The standard Void type.")`
### `static[checkNode](#checkNode)(x:[Xml](../../xml "Xml - Cross-platform Xml API."), r:[Rule](rule "haxe.xml.Rule")):[Void](../../void "Void - The standard Void type.")`
<https://api.haxe.org/haxe/xml/Check.html>

Fast
=====
package [haxe.xml](index)
**Deprecated:**
*Available on all platforms*
Alias
-----
*alias for* `[haxe.xml.Access](access "haxe.xml.Access - The haxe.")`
<https://api.haxe.org/haxe/xml/Fast.html>

Filter
=======
package [haxe.xml](index)
import [haxe.xml.Check](check)
*Available on all platforms*
Values
------
### `FInt`
### `FBool`
### `FEnum(values:[Array](../../array "Array")<[String](../../string "String - The basic String class.")>)`
### `FReg(matcher:[EReg](../../ereg "EReg - The EReg class represents regular expressions."))`
<https://api.haxe.org/haxe/xml/Filter.html>

Parser
=======
package [haxe.xml](index)
*Available on all platforms*
Static methods
--------------
### `static[parse](#parse)(str:[String](../../string "String - The basic String class."), strict:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = false):[Xml](../../xml "Xml - Cross-platform Xml API.")`
Parses the String into an XML Document. Set strict parsing to true in order to enable a strict check of XML attributes and entities.
Throws:

* `haxe.xml.XmlParserException`
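A minimal sketch of strict parsing with error handling (the input string is invented for illustration):

```
class Main {
	static function main() {
		try {
			// strict = true enables strict checking of attributes and entities
			var xml = haxe.xml.Parser.parse("<root><child/></root>", true);
			trace(xml.firstElement().nodeName); // root
		} catch (e:haxe.xml.XmlParserException) {
			// The exception carries the message and the error location
			trace('${e.message} at line ${e.lineNumber}, position ${e.positionAtLine}');
		}
	}
}
```

Note that the top-level `Xml.parse` uses this parser internally with `strict = false`.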
<https://api.haxe.org/haxe/xml/Parser.html>

Printer
========
package [haxe.xml](index)
*Available on all platforms*
This class provides utility methods to convert Xml instances to a String representation.
Static methods
--------------
### `static[print](#print)(xml:[Xml](../../xml "Xml - Cross-platform Xml API."), pretty:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = false):[String](../../string "String - The basic String class.")`
Convert `[Xml](../../xml)` to string representation.
Set `pretty` to `[true](../../bool)` to prettify the result.
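A small sketch of both modes (the sample XML is invented for illustration):

```
class Main {
	static function main() {
		var xml = Xml.parse("<a><b>1</b></a>");
		// Compact output, no added whitespace
		trace(haxe.xml.Printer.print(xml));
		// Pretty output: indented, one element per line
		trace(haxe.xml.Printer.print(xml, true));
	}
}
```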
<https://api.haxe.org/haxe/xml/Printer.html>

Proxy<Const, T>
================
package [haxe.xml](index)
*Available on all platforms*
This proxy can be inherited with an XML file name parameter. It will only allow access to fields which correspond to an "id" attribute value in the XML file:
```
class MyXml extends haxe.xml.Proxy<"my.xml", MyStructure> {
}
var h = new haxe.ds.StringMap<MyStructure>();
// ... fill h with "my.xml" content
var m = new MyXml(h.get);
trace(m.myNode.structField);
// Access to "myNode" is only possible if you have an id="myNode" attribute
// in your XML, and completion works as well.
```
Constructor
-----------
### `[new](#new)(f:[String](../../string "String - The basic String class.") -> T)`
Methods
-------
### `[resolve](#resolve)(k:[String](../../string "String - The basic String class.")):T`
<https://api.haxe.org/haxe/xml/Proxy.html>

Rule
=====
package [haxe.xml](index)
import [haxe.xml.Check](check)
*Available on all platforms*
Values
------
### `RNode(name:[String](../../string "String - The basic String class."), attribs:[Array](../../array "Array")<[Attrib](attrib "haxe.xml.Attrib")>, childs:[Rule](rule "haxe.xml.Rule"))`
### `RData(filter:[Filter](filter "haxe.xml.Filter"))`
### `RMulti(rule:[Rule](rule "haxe.xml.Rule"), atLeastOne:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."))`
### `RList(rules:[Array](../../array "Array")<[Rule](rule "haxe.xml.Rule")>, ordered:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false."))`
### `RChoice(choices:[Array](../../array "Array")<[Rule](rule "haxe.xml.Rule")>)`
### `ROptional(rule:[Rule](rule "haxe.xml.Rule"))`
<https://api.haxe.org/haxe/xml/Rule.html>

XmlParserException
===================
package [haxe.xml](index)
import [haxe.xml.Parser](parser)
*Available on all platforms*
Constructor
-----------
### `[new](#new)(message:[String](../../string "String - The basic String class."), xml:[String](../../string "String - The basic String class."), position:[Int](../../int "Int - The standard Int type."))`
Variables
---------
### `[lineNumber](#lineNumber):[Int](../../int "Int - The standard Int type.")`
the line number at which the XML parsing error occurred
### `[message](#message):[String](../../string "String - The basic String class.")`
the XML parsing error message
### `[position](#position):[Int](../../int "Int - The standard Int type.")`
the character position in the XML string at which the parsing error occurred
### `[positionAtLine](#positionAtLine):[Int](../../int "Int - The standard Int type.")`
the character position in the reported line at which the parsing error occurred
### `[xml](#xml):[String](../../string "String - The basic String class.")`
the invalid XML string
Methods
-------
### `[toString](#toString)():[String](../../string "String - The basic String class.")`
<https://api.haxe.org/haxe/xml/XmlParserException.html>

Compress
=========
package [haxe.zip](index)
*Available on all platforms*
Static methods
--------------
### `static[run](#run)(s:[Bytes](../io/bytes "haxe.io.Bytes"), level:[Int](../../int "Int - The standard Int type.")):[Bytes](../io/bytes "haxe.io.Bytes")`
Constructor
-----------
### `[new](#new)(level:[Int](../../int "Int - The standard Int type."))`
Methods
-------
### `[close](#close)():[Void](../../void "Void - The standard Void type.")`
### `[execute](#execute)(src:[Bytes](../io/bytes "haxe.io.Bytes"), srcPos:[Int](../../int "Int - The standard Int type."), dst:[Bytes](../io/bytes "haxe.io.Bytes"), dstPos:[Int](../../int "Int - The standard Int type.")):{write:[Int](../../int "Int - The standard Int type."), read:[Int](../../int "Int - The standard Int type."), done:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")}`
*Available on cs, php, js, neko, cpp, java, lua, python, hl, flash*
### `[execute](#execute)(src:[Bytes](../io/bytes "haxe.io.Bytes"), srcPos:[Int](../../int "Int - The standard Int type."), dst:[Bytes](../io/bytes "haxe.io.Bytes"), dstPos:[Int](../../int "Int - The standard Int type.")):{write:[Int](../../int "Int - The standard Int type."), read:[Int](../../int "Int - The standard Int type."), done:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")}`
*Available on macro*
### `[setFlushMode](#setFlushMode)(f:[FlushMode](flushmode "haxe.zip.FlushMode")):[Void](../../void "Void - The standard Void type.")`
<https://api.haxe.org/haxe/zip/Compress.html>

Entry
======
package [haxe.zip](index)
*Available on all platforms*
Fields
------
### `[fileTime](#fileTime):[Date](../../date "Date - The Date class provides a basic structure for date and time related information.")`
### `[fileSize](#fileSize):[Int](../../int "Int - The standard Int type.")`
### `[fileName](#fileName):[String](../../string "String - The basic String class.")`
### `optional[extraFields](#extraFields):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[List](../ds/list "haxe.ds.List - A linked-list of elements.")<[ExtraField](extrafield "haxe.zip.ExtraField")>>`
### `[dataSize](#dataSize):[Int](../../int "Int - The standard Int type.")`
### `[data](#data):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bytes](../io/bytes "haxe.io.Bytes")>`
### `[crc32](#crc32):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Int](../../int "Int - The standard Int type.")>`
### `[compressed](#compressed):[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")`
<https://api.haxe.org/haxe/zip/Entry.html>

ExtraField
===========
package [haxe.zip](index)
import [haxe.zip.Entry](entry)
*Available on all platforms*
Values
------
### `FUnknown(tag:[Int](../../int "Int - The standard Int type."), bytes:[Bytes](../io/bytes "haxe.io.Bytes"))`
### `FInfoZipUnicodePath(name:[String](../../string "String - The basic String class."), crc:[Int](../../int "Int - The standard Int type."))`
### `FUtf8`
<https://api.haxe.org/haxe/zip/ExtraField.html>

FlushMode
==========
package [haxe.zip](index)
*Available on all platforms*
Values
------
### `NO`
### `SYNC`
### `FULL`
### `FINISH`
### `BLOCK`
<https://api.haxe.org/haxe/zip/FlushMode.html>

HuffTools
==========
package [haxe.zip](index)
import [haxe.zip.Huffman](huffman)
*Available on all platforms*
Constructor
-----------
### `[new](#new)()`
Methods
-------
### `[make](#make)(lengths:[Array](../../array "Array")<[Int](../../int "Int - The standard Int type.")>, pos:[Int](../../int "Int - The standard Int type."), nlengths:[Int](../../int "Int - The standard Int type."), maxbits:[Int](../../int "Int - The standard Int type.")):[Huffman](huffman "haxe.zip.Huffman")`
<https://api.haxe.org/haxe/zip/HuffTools.html>
Huffman
========
package [haxe.zip](index)
*Available on all platforms*
Values
------
### `Found(i:[Int](../../int "Int - The standard Int type."))`
### `NeedBit(left:[Huffman](huffman "haxe.zip.Huffman"), right:[Huffman](huffman "haxe.zip.Huffman"))`
### `NeedBits(n:[Int](../../int "Int - The standard Int type."), table:[Array](../../array "Array")<[Huffman](huffman "haxe.zip.Huffman")>)`
<https://api.haxe.org/haxe/zip/Huffman.html>
InflateImpl
============
package [haxe.zip](index)
*Available on all platforms*
A pure Haxe implementation of the ZLIB Inflate algorithm which allows reading compressed data without any platform-specific support.
Static methods
--------------
### `static[run](#run)(i:[Input](../io/input "haxe.io.Input - An Input is an abstract reader."), bufsize:[Int](../../int "Int - The standard Int type.") = 65536):[Bytes](../io/bytes "haxe.io.Bytes")`
Constructor
-----------
### `[new](#new)(i:[Input](../io/input "haxe.io.Input - An Input is an abstract reader."), header:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true, crc:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.") = true)`
Methods
-------
### `[readBytes](#readBytes)(b:[Bytes](../io/bytes "haxe.io.Bytes"), pos:[Int](../../int "Int - The standard Int type."), len:[Int](../../int "Int - The standard Int type.")):[Int](../../int "Int - The standard Int type.")`
<https://api.haxe.org/haxe/zip/InflateImpl.html>
Reader
=======
package [haxe.zip](index)
*Available on all platforms*
Static methods
--------------
### `static[readZip](#readZip)(i:[Input](../io/input "haxe.io.Input - An Input is an abstract reader.")):[List](../ds/list "haxe.ds.List - A linked-list of elements.")<[Entry](entry "haxe.zip.Entry")>`
### `static[unzip](#unzip)(f:[Entry](entry "haxe.zip.Entry")):[Null](../../null "Null - Null<T> is a wrapper that can be used to make the basic types Int, Float and Bool nullable on static targets.")<[Bytes](../io/bytes "haxe.io.Bytes")>`
Constructor
-----------
### `[new](#new)(i:[Input](../io/input "haxe.io.Input - An Input is an abstract reader."))`
Methods
-------
### `[read](#read)():[List](../ds/list "haxe.ds.List - A linked-list of elements.")<[Entry](entry "haxe.zip.Entry")>`
### `[readEntryHeader](#readEntryHeader)():[Entry](entry "haxe.zip.Entry")`
<https://api.haxe.org/haxe/zip/Reader.html>
Tools
======
package [haxe.zip](index)
*Available on all platforms*
Static methods
--------------
### `static[compress](#compress)(f:[Entry](entry "haxe.zip.Entry"), level:[Int](../../int "Int - The standard Int type.")):[Void](../../void "Void - The standard Void type.")`
### `static[uncompress](#uncompress)(f:[Entry](entry "haxe.zip.Entry")):[Void](../../void "Void - The standard Void type.")`
<https://api.haxe.org/haxe/zip/Tools.html>
Uncompress
===========
package [haxe.zip](index)
*Available on all platforms*
Static methods
--------------
### `static[run](#run)(src:[Bytes](../io/bytes "haxe.io.Bytes"), ?bufsize:[Int](../../int "Int - The standard Int type.")):[Bytes](../io/bytes "haxe.io.Bytes")`
Constructor
-----------
### `[new](#new)(?windowBits:[Int](../../int "Int - The standard Int type."))`
Methods
-------
### `[close](#close)():[Void](../../void "Void - The standard Void type.")`
### `[execute](#execute)(src:[Bytes](../io/bytes "haxe.io.Bytes"), srcPos:[Int](../../int "Int - The standard Int type."), dst:[Bytes](../io/bytes "haxe.io.Bytes"), dstPos:[Int](../../int "Int - The standard Int type.")):{write:[Int](../../int "Int - The standard Int type."), read:[Int](../../int "Int - The standard Int type."), done:[Bool](../../bool "Bool - The standard Boolean type, which can either be true or false.")}`
### `[setFlushMode](#setFlushMode)(f:[FlushMode](flushmode "haxe.zip.FlushMode")):[Void](../../void "Void - The standard Void type.")`
<https://api.haxe.org/haxe/zip/Uncompress.html>
Writer
=======
package [haxe.zip](index)
*Available on all platforms*
Constructor
-----------
### `[new](#new)(o:[Output](../io/output "haxe.io.Output - An Output is an abstract write."))`
Methods
-------
### `[write](#write)(files:[List](../ds/list "haxe.ds.List - A linked-list of elements.")<[Entry](entry "haxe.zip.Entry")>):[Void](../../void "Void - The standard Void type.")`
### `[writeCDR](#writeCDR)():[Void](../../void "Void - The standard Void type.")`
### `[writeEntryHeader](#writeEntryHeader)(f:[Entry](entry "haxe.zip.Entry")):[Void](../../void "Void - The standard Void type.")`
<https://api.haxe.org/haxe/zip/Writer.html>
github.com/revel/samples | go | Go | README
[¶](#section-readme)
---
### Example Revel Applications
[](http://travis-ci.org/revel/examples)
Each directory contains a standalone [Revel](http://revel.github.io/manual) application, e.g.
Revelry is good, keep it UTF-8. Keep smiling, enjoy, complain, and please make it look good.
```
git clone https://github.com/revel/examples.git $GOPATH/src/github.com/revel/examples
revel run github.com/revel/examples/booking
```
* [Booking](https://github.com/revel/samples/blob/v1.1.0/booking.html)
+ A database-driven hotel-booking application, including user management
* [Chat](https://github.com/revel/samples/blob/v1.1.0/chat.html)
+ A chat room demonstrating active refresh, long-polling (comet), and [websocket](http://revel.github.io/manual/websockets.html) implementations.
* [Validation](https://github.com/revel/samples/blob/v1.1.0/validation.html)
+ A demonstration of input [validation](http://revel.github.io/manual/validation.html).
* [Upload](https://github.com/revel/samples/blob/v1.1.0/upload.html)
+ Demonstrates single and multiple file uploads to a webserver via http.
* [Twitter OAuth](https://github.com/revel/samples/blob/v1.1.0/twitter-oauth.html)
+ Display `mentions` and allows `posting` to a Twitter account using OAuth.
* [Facebook OAuth2](https://github.com/revel/samples/blob/v1.1.0/facebook-oauth2.html)
+ Display Facebook user information using OAuth2.
* If you spot a mistake, let us know
* If you have a good sample application idea, let us know
* If you have an app, please make a pull request
ICC | cran | R | Package ‘ICC’
October 12, 2022
Type Package
Title Facilitating Estimation of the Intraclass Correlation
Coefficient
Version 2.4.0
URL https://github.com/matthewwolak/ICC
BugReports https://github.com/matthewwolak/ICC/issues
License GPL (>= 2)
LazyLoad yes
NeedsCompilation no
Description Assist in the estimation of the Intraclass Correlation Coefficient
(ICC) from variance components of a one-way analysis of variance and also
estimate the number of individuals or groups necessary to obtain an ICC
estimate with a desired confidence interval width.
Encoding UTF-8
RoxygenNote 7.2.0
Author <NAME> [cre, aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-05-20 07:30:18 UTC
R topics documented:
ICC-package
effort
ICCbare
Nest
ICC-package Facilitating Estimation of the Intraclass Correlation Coefficient
Description
Assist in the estimation of the Intraclass Correlation Coefficient (ICC) from variance components
of a one-way analysis of variance and also estimate the number of individuals or groups necessary
to obtain an ICC estimate with a desired confidence interval width.
Author(s)
Maintainer: <NAME> <<EMAIL>>
See Also
Useful links:
• https://github.com/matthewwolak/ICC
• Report bugs at https://github.com/matthewwolak/ICC/issues
effort Optimum k Measures Based Upon a Fixed Total Researcher Effort
Description
Given a fixed researcher effort (e.g., total number of assays able to be run), this function plots the
optimum k measurements per individual to use in order to obtain the smallest confidence interval at
an expected intraclass correlation coefficient (ICC) estimate. The results are depicted graphically,
showing the tradeoff in confidence interval width with changing k.
Usage
effort(
est.type = c("hypothetical", "pilot"),
e = NULL,
ICC = NULL,
x = NULL,
y = NULL,
data = NULL,
alpha = 0.05
)
Arguments
est.type A character string of either "hypothetical" indicating usage of the given val-
ues of effort (e) and intraclass correlation coefficient (ICC) or if "pilot" is
specified then to calculate these from the dataset provided. Just the first letter
may be used.
e A numeric value indicating the total effort (n individuals times k measurements
per individual). May be a vector of effort levels.
ICC A numeric value of the expected intraclass correlation coefficient.
x Column name of data indicating the individual or group ID from a pilot study.
y Column name of data indicating the measurements from a pilot study.
data A data.frame from a pilot experiment.
alpha A numeric indicating the alpha level to use when estimating the confidence in-
terval.
Details
More than one e may be given. In this case, the graphical result portrays multiple lines - each
representing a different e.
When est.type="pilot", the function automatically generates an effort 10 percent larger and
smaller than the calculated effort from the pilot data.
Author(s)
<<EMAIL>>
See Also
Nest
Examples
#Example 1
effort(est.type = "h", e = c(30, 60, 120), ICC = 0.2)
#Example 2
data(ChickWeight)
effort(est.type = "p", x = Chick, y = weight, data = ChickWeight)
ICCbare Estimate the Intraclass Correlation Coefficient (ICC)
Description
Estimates the ICC and confidence intervals using the variance components from a one-way ANOVA.
Usage
ICCbare(x, y, data = NULL)
ICCbareF(x, y, data = NULL)
ICCest(x, y, data = NULL, alpha = 0.05, CI.type = c("THD", "Smith"))
Arguments
x A column name indicating individual or group id in the dataframe data.
y A column name indicating measurements in the dataframe data.
data A data.frame containing x and y.
alpha A numeric specifying the alpha level to use when estimating the confidence
interval. Default is 0.05.
CI.type A character indicating the particular confidence interval to estimate. Can be
specified by just the first letter of the name. See Details section for more.
Details
ICCbare conducts simple estimation of the ICC that is meant to be as simple as possible and fast for
use in Monte Carlo simulations or bootstrapping. If the design is balanced, ICCbare will calculate
variance components ’by hand’, instead of using the aov function. ICCbare can be used on balanced
or unbalanced datasets with NAs.
ICCbareF is similar to ICCbare, however ICCbareF should not be used with unbalanced datasets.
ICCbareF is distinguished from ICCbare, in that ICCbare is more flexible and can handle missing
values and unbalanced datasets.
If the grouping variable, x, is not a factor, then the function will change it into a factor and produce
a warning message.
For ICCest, the confidence interval (CI) can be estimated from one of two methods included here.
CIs of the type "THD" are based upon the exact confidence limit equation in Searle (1971) and can be
used for unbalanced data (see Thomas and Hultquist 1978; Donner 1979). CIs of the type "Smith"
are based upon the approximate formulas for the standard error of the ICC estimate (Smith 1956).
Value
a list:
ICC the intraclass correlation coefficient
LowerCI the lower confidence interval limit, where the confidence level is set by alpha
UpperCI the upper confidence interval limit, where the confidence level is set by alpha
N the total number of individuals or groups used in the analysis
k the number of measurements per individual or group. In an unbalanced design, k is always less
than the mean number of measurements per individual or group and is calculated using the
equation in Lessells and Boag (1987).
varw the within individual or group variance
vara the among individual or group variance
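As a sketch of the variance-component arithmetic behind the vara, varw, and ICC values (not the package's own implementation, which also handles unbalanced designs and NAs), a balanced-design ICC can be computed from the one-way ANOVA mean squares:

```python
import numpy as np

def icc_oneway(groups):
    """ICC from one-way ANOVA variance components (balanced design).

    `groups` is a list of equal-length measurement vectors, one per
    individual/group. Returns vara / (vara + varw), the proportion of
    total variance attributable to among-group variance.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    n, k = len(groups), len(groups[0])          # N groups, k measures each
    grand_mean = np.concatenate(groups).mean()
    group_means = np.array([g.mean() for g in groups])
    # Mean squares from the one-way ANOVA table
    ms_among = k * ((group_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = sum(((g - m) ** 2).sum()
                    for g, m in zip(groups, group_means)) / (n * (k - 1))
    vara = (ms_among - ms_within) / k           # among-group variance component
    varw = ms_within                            # within-group variance component
    return vara / (vara + varw)
```

Repeatable measurements (small within-group scatter relative to among-group spread) give an ICC near 1; identical group means give a negative estimate, as expected for this estimator.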
Author(s)
<<EMAIL>>
References
<NAME> and <NAME>. 1987. The Auk, 104(1):116-121.
<NAME>. 1971. Linear Models. New York: Wiley.
<NAME>. and Hultquist, R.A. 1978. Annals of Statistics, 6:582-587.
<NAME>. 1979. American Journal of Epidemiology, 110:335-342.
<NAME>. 1956. Annals of Human Genetics, 21:363-373.
Examples
data(ChickWeight)
# ICCest
ICCest(Chick, weight, data = ChickWeight, CI.type = "S")
Nest Calculate the N required to estimate the ICC with a desired confidence
interval
Description
Given a predicted ICC and k measures per individual/group, this function will calculate the N
individuals/groups required to obtain a desired confidence interval w (Bonett 2002).
Usage
Nest(
est.type = c("hypothetical", "pilot"),
w,
ICC = NULL,
k = NULL,
x = NULL,
y = NULL,
data = NULL,
alpha = 0.05
)
Arguments
est.type A character string of either "hypothetical" indicating usage of the given val-
ues of k and ICC or if "pilot" is specified then to calculate these from the
dataset provided. Just the first letter may be used.
w A numeric of desired width for the confidence interval about the ICC estimate.
ICC The expected intraclass correlation coefficient.
k The number of measurements per individual or group.
x A column name of data indicating the individual or group ID from a pilot study.
y A column name of data indicating the measurements from a pilot study.
data A data.frame from a pilot experiment.
alpha The alpha level to use when estimating the confidence interval.
Details
More than one ICC or k may be given. In this case, the return value is a dataframe with rows
representing the values of the specified ICCs and the columns yield the different k values.
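For intuition, a minimal sketch of Bonett's (2002) first-order approximation for this N — our reading of the published formula, not the package's exact code — is:

```python
import math
from statistics import NormalDist

def nest_approx(w, icc, k, alpha=0.05):
    """Approximate N individuals/groups needed for a confidence interval
    of width w about an expected ICC, with k measurements per group
    (Bonett 2002, first-order approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)     # two-sided critical value
    n = (8 * z**2 * (1 - icc)**2 * (1 + (k - 1) * icc)**2
         / (k * (k - 1) * w**2)) + 1
    return math.ceil(n)

# Mirrors the hypothetical inputs of Example 1 below:
# w = 0.14, ICC = 0.1, k = 10
print(nest_approx(0.14, 0.1, 10))
```

Note how N shrinks as the acceptable interval width w grows, and how it depends on both the expected ICC and k.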
Value
data.frame indicating the N number of individuals or groups to use to estimate the given ICC with
a desired confidence interval width. Rows represent different levels of ICC while columns indicate
different levels of k measurements per individual/group.
Author(s)
<<EMAIL>>
References
<NAME>. 2002. Statistics in Medicine, 21(9): 1331-1335.
<NAME>, <NAME>, <NAME>. 2011. Methods in Ecology and Evolution, 3(1):129-137.
See Also
effort
Examples
# Example 1
n1<-Nest("h", w = 0.14, ICC = 0.1, k = 10)
n1
# Example 2
data(ChickWeight)
Nest("p", w = 0.14, x = Chick, y = weight, data = ChickWeight)
ex2 <- ICCest(Chick, weight, ChickWeight)
ex2$UpperCI - ex2$LowerCI #confidence interval width of pilot study
ex2
# Example 3
Nest("h", w = 0.14, ICC = seq(0.05, 0.15, 0.05), k = seq(10, 12, 1))
DataSpaceR | cran | R | Package ‘DataSpaceR’
October 12, 2022
Type Package
Title Interface to 'the CAVD DataSpace'
Version 0.7.6
Description Provides a convenient API interface to access immunological data
within 'the CAVD DataSpace' (<https://dataspace.cavd.org>), a data sharing
and discovery tool that facilitates exploration of HIV immunological data
from pre-clinical and clinical HIV vaccine studies.
URL https://docs.ropensci.org/DataSpaceR/,
https://github.com/ropensci/DataSpaceR
BugReports https://github.com/ropensci/DataSpaceR/issues
License GPL-3
Encoding UTF-8
Imports utils, R6, Rlabkey (>= 2.2.0), curl, httr, assertthat, digest,
jsonlite, data.table
Suggests testthat, covr, knitr, pryr, rmarkdown
VignetteBuilder knitr
RoxygenNote 7.1.1
NeedsCompilation no
Author <NAME> [aut],
<NAME> [rev],
<NAME> [aut, cre],
<NAME> [aut],
CAVD DataSpace [cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-06-24 14:20:05 UTC
R topics documented:
DataSpaceR-package
checkNetrc
connectDS
DataSpaceConnection
DataSpaceMab
DataSpaceStudy
getNetrcPath
writeNetrc
DataSpaceR-package DataSpaceR
Description
DataSpaceR provides a convenient API for accessing datasets within the DataSpace database.
Details
Uses the Rlabkey package to connect to DataSpace. Implements convenient methods for accessing
datasets.
Author(s)
<NAME>
See Also
connectDS
checkNetrc Check netrc file
Description
Check that there is a netrc file with a valid entry for the CAVD DataSpace.
Usage
checkNetrc(netrcFile = getNetrcPath(), onStaging = FALSE, verbose = TRUE)
Arguments
netrcFile A character. File path to netrc file to check.
onStaging A logical. Whether to check the staging server instead of the production server.
verbose A logical. Whether to print the extra details for troubleshooting.
Value
The name of the netrc file
See Also
connectDS writeNetrc
Examples
try(checkNetrc())
connectDS Create a connection to DataSpace
Description
Constructor for DataSpaceConnection
Usage
connectDS(login = NULL, password = NULL, verbose = FALSE, onStaging = FALSE)
Arguments
login A character. Optional argument. If there is no netrc file a temporary one can be
written by passing login and password of an active DataSpace account.
password A character. Optional. The password for the selected login.
verbose A logical. Whether to print the extra details for troubleshooting.
onStaging A logical. Whether to connect to the staging server instead of the production
server.
Details
Instantiates a DataSpaceConnection. The constructor will try to take the values of the various
labkey.* parameters from the global environment. If they don't exist, it will use default values.
These are assigned to `options`, which are then used by the DataSpaceConnection class.
Value
an instance of DataSpaceConnection
See Also
DataSpaceConnection
Examples
## Not run:
con <- connectDS()
## End(Not run)
con <- try(connectDS())
if (inherits(con, "try-error")) {
warning("Read README for more information on how to set up a .netrc file.")
}
DataSpaceConnection The DataSpaceConnection class
Description
The DataSpaceConnection class
The DataSpaceConnection class
Constructor
connectDS
Active bindings
config A list. Stores configuration of the connection object such as URL, path and username.
availableStudies A data.table. The table of available studies.
availableGroups A data.table. The table of available groups.
availablePublications A data.table. The table of available publications.
mabGridSummary A data.table. The filtered grid with updated n_ columns and geometric_mean_curve_ic50.
mabGrid A data.table. The filtered mAb grid.
virusMetadata A data.table. Metadata about all viruses in the DataSpace.
virusNameMappingTables A list of data.table objects. This list contains `virusMetadataAll`,
`virusLabId`, and `virus_synonym`, which are described in the vignette `Virus_Name_Mapping_Tables`.
Methods
Public methods:
• DataSpaceConnection$new()
• DataSpaceConnection$print()
• DataSpaceConnection$getStudy()
• DataSpaceConnection$getGroup()
• DataSpaceConnection$filterMabGrid()
• DataSpaceConnection$resetMabGrid()
• DataSpaceConnection$getMab()
• DataSpaceConnection$downloadPublicationData()
• DataSpaceConnection$refresh()
• DataSpaceConnection$clone()
Method new(): Initialize a DataSpaceConnection object. See connectDS.
Usage:
DataSpaceConnection$new(
login = NULL,
password = NULL,
verbose = FALSE,
onStaging = FALSE
)
Arguments:
login A character. Optional argument. If there is no netrc file a temporary one can be written
by passing login and password of an active DataSpace account.
password A character. Optional. The password for the selected login.
verbose A logical. Whether to print the extra details for troubleshooting.
onStaging A logical. Whether to connect to the staging server instead of the production server.
Returns: A new ‘DataSpaceConnection‘ object.
Method print(): Print the DataSpaceConnection object.
Usage:
DataSpaceConnection$print()
Method getStudy(): Create a DataSpaceStudy object.
Usage:
DataSpaceConnection$getStudy(studyName)
Arguments:
studyName A character. Name of the study to retrieve.
Method getGroup(): Create a DataSpaceStudy object.
Usage:
DataSpaceConnection$getGroup(groupId)
Arguments:
groupId An integer. ID of the group to retrieve.
Method filterMabGrid(): Filter rows in the mAb grid by specifying the values to keep in the
columns found in the mabGrid field. It takes the column and the values and filters the underlying
tables.
Usage:
DataSpaceConnection$filterMabGrid(using, value)
Arguments:
using A character. Name of the column to filter.
value A character vector. Values to keep in the mAb grid.
Method resetMabGrid(): Reset the mAb grid to the unfiltered state.
Usage:
DataSpaceConnection$resetMabGrid()
Method getMab(): Create a DataSpaceMab object.
Usage:
DataSpaceConnection$getMab()
Method downloadPublicationData(): Download publication data for a chosen publication.
Usage:
DataSpaceConnection$downloadPublicationData(
publicationId,
outputDir = getwd(),
unzip = TRUE,
verbose = TRUE
)
Arguments:
publicationId A character/integer. ID for the publication to download data for.
outputDir A character. Path to directory to download publication data.
unzip A logical. If TRUE, unzip publication data to outputDir.
verbose A logical. Default TRUE.
Method refresh(): Refresh the connection object to update available studies and groups.
Usage:
DataSpaceConnection$refresh()
Method clone(): The objects of this class are cloneable with this method.
Usage:
DataSpaceConnection$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
See Also
connectDS DataSpaceR-package
Examples
## Not run:
# Create a connection (Initiate a DataSpaceConnection object)
con <- connectDS()
con
# Connect to cvd408
# https://dataspace.cavd.org/cds/CAVD/app.view#learn/learn/Study/cvd408?q=408
cvd408 <- con$getStudy("cvd408")
# Connect to all studies
cvd <- con$getStudy("")
# Connect to the NYVAC durability comparison group
# https://dataspace.cavd.org/cds/CAVD/app.view#group/groupsummary/220
nyvac <- con$getGroup(220)
# Refresh the connection object to update available studies and groups
con$refresh()
## End(Not run)
DataSpaceMab The DataSpaceMab class
Description
The DataSpaceMab class
The DataSpaceMab class
Constructor
DataSpaceConnection$getMab()
Active bindings
config A list. Stores configuration of the connection object such as URL, path and username.
studyAndMabs A data.table. The table of available mAbs by study.
mabs A data.table. The table of available mAbs and their attributes.
nabMab A data.table. The table of mAbs and their neutralizing measurements against viruses.
studies A data.table. The table of available studies.
assays A data.table. The table of assay status by study.
variableDefinitions A data.table. The table of variable definitions.
Methods
Public methods:
• DataSpaceMab$new()
• DataSpaceMab$print()
• DataSpaceMab$refresh()
• DataSpaceMab$getLanlMetadata()
• DataSpaceMab$clone()
Method new(): Initialize DataSpaceMab object. See DataSpaceConnection.
Usage:
DataSpaceMab$new(mabMixture, filters, config)
Arguments:
mabMixture A character vector.
filters A list.
config A list.
Method print(): Print the DataSpaceMab object summary.
Usage:
DataSpaceMab$print()
Method refresh(): Refresh the DataSpaceMab object to update datasets.
Usage:
DataSpaceMab$refresh()
Method getLanlMetadata(): Applies LANL metadata to mabs table.
Usage:
DataSpaceMab$getLanlMetadata()
Method clone(): The objects of this class are cloneable with this method.
Usage:
DataSpaceMab$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
See Also
connectDS DataSpaceConnection
Examples
## Not run:
# Create a connection (Initiate a DataSpaceConnection object)
con <- connectDS()
# Browse the mAb Grid
con$mabGridSummary
# Filter the grid by viruses
con$filterMabGrid(using = "virus", value = c("242-14", "Q23.17", "6535.3", "BaL.26", "DJ263.8"))
# Filter the grid by donor species (llama)
con$filterMabGrid(using = "donor_species", value = "llama")
# Check the updated grid
con$mabGridSummary
# Retrieve available viruses in the filtered grid
con$mabGrid[, unique(virus)]
# Retrieve available clades for 1H9 mAb mixture in the filtered grid
con$mabGrid[mab_mixture == "1H9", unique(clade)]
# Create a DataSpaceMab object that contains the filtered mAb data
mab <- con$getMab()
mab
# Inspect the `nabMab` field
mab$nabMab
## End(Not run)
DataSpaceStudy The DataSpaceStudy class
Description
The DataSpaceStudy class
The DataSpaceStudy class
Constructor
DataSpaceConnection$getStudy() DataSpaceConnection$getGroup()
Active bindings
study A character. The study name.
config A list. Stores configuration of the connection object such as URL, path and username.
availableDatasets A data.table. The table of datasets available in the DataSpaceStudy object.
cache A list. Stores the data to avoid downloading the same tables multiple times.
dataDir A character. Default directory for storing nonstandard datasets. Set with setDataDir(dataDir).
treatmentArm A data.table. The table of treatment arm information for the connected study. Not
available for all study connections.
group A character. The group name.
studyInfo A list. Stores the information about the study.
Methods
Public methods:
• DataSpaceStudy$new()
• DataSpaceStudy$print()
• DataSpaceStudy$getDataset()
• DataSpaceStudy$clearCache()
• DataSpaceStudy$getDatasetDescription()
• DataSpaceStudy$setDataDir()
• DataSpaceStudy$refresh()
• DataSpaceStudy$clone()
Method new(): Initialize DataSpaceStudy class. See DataSpaceConnection.
Usage:
DataSpaceStudy$new(study = NULL, config = NULL, group = NULL, studyInfo = NULL)
Arguments:
study A character. Name of the study to retrieve.
config A list. Stores configuration of the connection object such as URL, path and username.
group An integer. ID of the group to retrieve.
studyInfo A list. Stores the information about the study.
Method print(): Print DataSpaceStudy class.
Usage:
DataSpaceStudy$print()
Method getDataset(): Get a dataset from the connection.
Usage:
DataSpaceStudy$getDataset(
datasetName,
mergeExtra = FALSE,
colFilter = NULL,
reload = FALSE,
outputDir = NULL,
...
)
Arguments:
datasetName A character. Name of the dataset to retrieve. Accepts the value in either the
"name" or "label" field from availableDatasets.
mergeExtra A logical. If set to TRUE, merge extra information. Ignored for non-integrated
datasets.
colFilter A matrix. A filter as returned by Rlabkey’s makeFilter.
reload A logical. If set to TRUE, download the dataset, whether a cached version exists or not.
outputDir A character. Optional, specifies directory to download nonstandard datasets. If
NULL, data will be downloaded to dataDir, set with setDataDir(dataDir). If dataDir is
not set, and outputDir is NULL, a tmp directory will be used.
... Extra arguments to be passed to labkey.selectRows
Method clearCache(): Clear cache. Remove downloaded datasets.
Usage:
DataSpaceStudy$clearCache()
Method getDatasetDescription(): Get variable information.
Usage:
DataSpaceStudy$getDatasetDescription(datasetName, outputDir = NULL)
Arguments:
datasetName A character. Name of the dataset to retrieve. Accepts the value in either the
"name" or "label" field from availableDatasets.
outputDir A character. Directory path.
Method setDataDir(): Set default directory to download non-integrated datasets. If no dataDir
is set, a tmp directory will be used.
Usage:
DataSpaceStudy$setDataDir(dataDir)
Arguments:
dataDir A character. Directory path.
Method refresh(): Refresh the study object to update available datasets and treatment info.
Usage:
DataSpaceStudy$refresh()
Method clone(): The objects of this class are cloneable with this method.
Usage:
DataSpaceStudy$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
See Also
connectDS DataSpaceConnection
Examples
## Not run:
# Create a connection (Initiate a DataSpaceConnection object)
con <- connectDS()
# Connect to cvd408 (Initiate a DataSpaceStudy object)
# https://dataspace.cavd.org/cds/CAVD/app.view#learn/learn/Study/cvd408?q=408
cvd408 <- con$getStudy("cvd408")
cvd408
# Retrieve Neutralizing antibody dataset (NAb) for cvd408 from DataSpace
NAb <- cvd408$getDataset("NAb")
# Get variable information of the NAb dataset
cvd408$getDatasetDescription("NAb")
# Take a look at cvd408's treatment arm information
cvd408$treatmentArm
# Clear cache of a study object
cvd408$clearCache()
# Connect to the NYVAC durability comparison group
# https://dataspace.cavd.org/cds/CAVD/app.view#group/groupsummary/220
nyvac <- con$getGroup(220)
# Connect to all studies
cvd <- con$getStudy("")
# Refresh the study object to update available datasets and treatment info
cvd$refresh()
## End(Not run)
getNetrcPath Get a default netrc file path
Description
Get a default netrc file path
Usage
getNetrcPath()
Value
A character vector containing the default netrc file path
Examples
getNetrcPath()
writeNetrc Write a netrc file
Description
Write a netrc file that is valid for accessing DataSpace.
Usage
writeNetrc(
login,
password,
netrcFile = NULL,
onStaging = FALSE,
overwrite = FALSE
)
Arguments
login A character. Email address used for logging in on DataSpace.
password A character. Password associated with the login.
netrcFile A character. Credentials will be written into that file. If left NULL, netrc will
be written into a temporary file.
onStaging A logical. Whether to connect to the staging server instead of the production
server.
overwrite A logical. Whether to overwrite the existing netrc file.
Details
The database is accessed with the user's credentials. A netrc file storing login and password
information is required. See here for instructions on how to register and set DataSpace credentials. By
default curl will look for the file in your home directory.
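For reference, a netrc entry uses the standard three-token form; the machine name below is an assumption based on the DataSpace URL, and the login and password are placeholders:

```
machine dataspace.cavd.org
login yourEmail@example.com
password yourSecretPassword
```

writeNetrc() produces an entry of this shape for you, so editing the file by hand is rarely needed.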
Value
A character vector containing the netrc file path
See Also
connectDS checkNetrc
Examples
# First, create an account in the DataSpace App and read the terms of use
# Next, create a netrc file using writeNetrc()
writeNetrc(
login = "<EMAIL>",
password = "yourSecretPassword"
)
# Specify `netrcFile = getNetrcPath()` to write netrc in the default path |
elasticsearch-transport | ruby | Ruby | Elasticsearch::Transport
===
**This library is part of the [`elasticsearch-ruby`](https://github.com/elasticsearch/elasticsearch-ruby/) package;
please refer to it, unless you want to use this library standalone.**
---
The `elasticsearch-transport` library provides a low-level Ruby client for connecting to an [Elasticsearch](http://elasticsearch.com) cluster.
It handles connecting to multiple nodes in the cluster, rotating across connections,
logging and tracing requests and responses, maintaining failed connections,
discovering nodes in the cluster, and provides an abstraction for data serialization and transport.
It does not handle calling the Elasticsearch API;
see the [`elasticsearch-api`](https://github.com/elasticsearch/elasticsearch-ruby/tree/master/elasticsearch-api) library.
The library is compatible with Ruby 1.9 or higher and with all versions of Elasticsearch since 0.90.
Features overview:
* Pluggable logging and tracing
* Pluggable connection selection strategies (round-robin, random, custom)
* Pluggable transport implementation, customizable and extendable
* Pluggable serializer implementation
* Request retries and dead connections handling
* Node reloading (based on cluster state) on errors or on demand
For optimal performance, use an HTTP library which supports persistent ("keep-alive") connections,
such as [patron](https://github.com/toland/patron) or [Typhoeus](https://github.com/typhoeus/typhoeus).
Just require the library (`require 'patron'`) in your code, and it will be automatically used.
Currently these libraries will be automatically detected and used:
* [Patron](https://github.com/toland/patron)
* [Typhoeus](https://github.com/typhoeus/typhoeus)
* [HTTPClient](https://rubygems.org/gems/httpclient)
* [Net::HTTP::Persistent](https://rubygems.org/gems/net-http-persistent)
**Note on [Typhoeus](https://github.com/typhoeus/typhoeus)**: You need to use v1.4.0 or up since older versions are not compatible with Faraday 1.0.
For detailed information, see example configurations [below](#transport-implementations).
Installation
---
Install the package from [Rubygems](https://rubygems.org):
```
gem install elasticsearch-transport
```
To use an unreleased version, either add it to your `Gemfile` for [Bundler](http://gembundler.com):
```
gem 'elasticsearch-transport', git: 'git://github.com/elasticsearch/elasticsearch-ruby.git'
```
or install it from a source code checkout:
```
git clone https://github.com/elasticsearch/elasticsearch-ruby.git
cd elasticsearch-ruby/elasticsearch-transport
bundle install
rake install
```
Example Usage
---
In the simplest form, connect to Elasticsearch running on <http://localhost:9200>
without any configuration:
```
require 'elasticsearch/transport'
client = Elasticsearch::Client.new
response = client.perform_request 'GET', '_cluster/health'
# => #<Elasticsearch::Transport::Transport::Response:0x007fc5d506ce38 @status=200, @body={ ... } >
```
Full documentation is available at <http://rubydoc.info/gems/elasticsearch-transport>.
Configuration
---
* [Setting Hosts](#setting-hosts)
* [Default port](#default-port)
* [Connect using an Elastic Cloud ID](#connect-using-an-elastic-cloud-id)
* [Authentication](#authentication)
* [Logging](#logging)
* [Custom HTTP Headers](#custom-http-headers)
* [Identifying running tasks with X-Opaque-Id](#identifying-running-tasks-with-x-opaque-id)
* [Setting Timeouts](#setting-timeouts)
* [Randomizing Hosts](#randomizing-hosts)
* [Retrying on Failures](#retrying-on-failures)
* [Reloading Hosts](#reloading-hosts)
* [Connection Selector](#connection-selector)
* [Transport Implementations](#transport-implementations)
* [Serializer implementations](#serializer-implementations)
* [Exception Handling](#exception-handling)
* [Development and Community](#development-and-community)
The client supports many configuration options for setting up and managing connections,
configuring logging, customizing the transport library, etc.
### Setting Hosts
To connect to a specific Elasticsearch host:
```
Elasticsearch::Client.new host: 'search.myserver.com'
```
To connect to a host with specific port:
```
Elasticsearch::Client.new host: 'myhost:8080'
```
To connect to multiple hosts:
```
Elasticsearch::Client.new hosts: ['myhost1', 'myhost2']
```
Instead of Strings, you can pass host information as an array of Hashes:
```
Elasticsearch::Client.new hosts: [ { host: 'myhost1', port: 8080 }, { host: 'myhost2', port: 8080 } ]
```
**NOTE:** When specifying multiple hosts, you probably want to enable the `retry_on_failure` option to perform a failed request on another node (see the *Retrying on Failures* chapter).
Common URL parts -- scheme, HTTP authentication credentials, URL prefixes, etc -- are handled automatically:
```
Elasticsearch::Client.new url: 'https://username:password@api.server.org:4430/search'
```
You can pass multiple URLs separated by a comma:
```
Elasticsearch::Client.new urls: 'http://localhost:9200,http://localhost:9201'
```
Another way to configure the URL(s) is to export the `ELASTICSEARCH_URL` variable.
The client will automatically round-robin across the hosts
(unless you select or implement a different [connection selector](#connection-selector)).
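The default round-robin behaviour amounts to rotating a queue of hosts; a toy sketch of the idea (not the client's actual selector class):

```
# Toy round-robin rotation over a host list -- each request takes the
# next host and puts it back at the end of the queue.
hosts = ['http://localhost:9200', 'http://localhost:9201']
queue = hosts.dup

picks = 4.times.map do
  host = queue.shift
  queue.push(host)
  host
end

puts picks.inspect
# => ["http://localhost:9200", "http://localhost:9201", "http://localhost:9200", "http://localhost:9201"]
```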
### Default port
The default port is `9200`. Please specify a port for your host(s) if they differ from this default.
Please see below for an exception to this when connecting using an Elastic Cloud ID.
### Connect using an Elastic Cloud ID
If you are using [Elastic Cloud](https://www.elastic.co/cloud), you can provide your cloud id to the client.
You must supply your username and password separately, and optionally a port. If no port is supplied,
port 443 will be used.
Note: Do not enable sniffing when using Elastic Cloud. The nodes are behind a load balancer so Elastic Cloud will take care of everything for you.
```
Elasticsearch::Client.new(cloud_id: 'name:bG9jYWxob3N0JGFiY2QkZWZnaA==', user: 'elastic', password: 'changeme')
```
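The cloud id is simply a label and a Base64-encoded `host$es_uuid$kibana_uuid` triple; the sketch below shows how the Elasticsearch endpoint can be derived from it (the helper name is illustrative, not the library's API):

```
require 'base64'

# Illustrative helper: split "label:base64(host$es_id$kibana_id)" and
# build the Elasticsearch endpoint URL (port 443 unless overridden).
def cloud_url(cloud_id, port: 443)
  _label, encoded = cloud_id.split(':', 2)
  host, es_id, _kibana_id = Base64.decode64(encoded).split('$')
  "https://#{es_id}.#{host}:#{port}"
end

puts cloud_url('name:bG9jYWxob3N0JGFiY2QkZWZnaA==')
# => https://abcd.localhost:443
```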
### Authentication
You can pass the authentication credentials, scheme and port in the host configuration hash:
```
Elasticsearch::Client.new hosts: [
{ host: 'my-protected-host',
port: '443',
user: 'USERNAME',
password: 'PASSWORD',
scheme: 'https'
} ]
```
... or simply use the common URL format:
```
Elasticsearch::Client.new url: 'https://username:password@example.com:9200'
```
To pass a custom certificate for SSL peer verification to Faraday-based clients,
use the `transport_options` option:
```
Elasticsearch::Client.new url: 'https://username:password@example.com:9200',
transport_options: { ssl: { ca_file: '/path/to/cacert.pem' } }
```
You can also use [**API Key authentication**](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html):
```
Elasticsearch::Client.new(
host: host,
transport_options: transport_options,
api_key: credentials
)
```
Where credentials is either the base64 encoding of `id` and `api_key` joined by a colon or a hash with the `id` and `api_key`:
```
Elasticsearch::Client.new(
host: host,
transport_options: transport_options,
api_key: {id: 'my_id', api_key: 'my_api_key'}
)
```
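When you pass a hash, the client joins `id` and `api_key` with a colon and Base64-encodes the result for the `Authorization: ApiKey …` header. The equivalent encoding by hand:

```
require 'base64'

# Base64("id:api_key"), strict encoding (no trailing newline), as used in
# the "Authorization: ApiKey <credentials>" header.
credentials = Base64.strict_encode64('my_id:my_api_key')
puts "ApiKey #{credentials}"
# => ApiKey bXlfaWQ6bXlfYXBpX2tleQ==
```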
### Logging
To log requests and responses to standard output with the default logger (an instance of Ruby's Logger class), set the `log` argument to true:
```
Elasticsearch::Client.new(log: true)
```
You can also use [ecs-logging](https://github.com/elastic/ecs-logging-ruby). `ecs-logging` is a set of libraries that allows you to transform your application logs to structured logs that comply with the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current/ecs-reference.html):
```
logger = EcsLogging::Logger.new($stdout)
Elasticsearch::Client.new(logger: logger)
```
To trace requests and responses in the *Curl* format, set the `trace` argument:
```
Elasticsearch::Client.new(trace: true)
```
You can customize the default logger or tracer:
```
client.transport.logger.formatter = proc { |s, d, p, m| "#{s}: #{m}\n" }
client.transport.logger.level = Logger::INFO
```
Or, you can use a custom `::Logger` instance:
```
Elasticsearch::Client.new(logger: Logger.new(STDERR))
```
You can pass the client any conforming logger implementation:
```
require 'logging' # https://github.com/TwP/logging/
log = Logging.logger['elasticsearch']
log.add_appenders Logging.appenders.stdout
log.level = :info
client = Elasticsearch::Client.new(logger: log)
```
### Custom HTTP Headers
You can set a custom HTTP header on the client's initializer:
```
client = Elasticsearch::Client.new(
transport_options: {
headers:
{user_agent: "My App"}
}
)
```
You can also pass in `headers` as a parameter to any of the API Endpoints to set custom headers for the request:
```
client.search(index: 'myindex', q: 'title:test', headers: {user_agent: "My App"})
```
### Identifying running tasks with X-Opaque-Id
The X-Opaque-Id header allows you to track certain calls, or associate certain tasks with the client that started them ([more on the Elasticsearch docs](https://www.elastic.co/guide/en/elasticsearch/reference/master/tasks.html#_identifying_running_tasks)). To use this feature, you need to set an id for `opaque_id` on the client on each request. Example:
```
client = Elasticsearch::Client.new
client.search(index: 'myindex', q: 'title:test', opaque_id: '123456')
```
The search request will include the following HTTP Header:
```
X-Opaque-Id: 123456
```
You can also set a prefix for X-Opaque-Id when initializing the client. This will be prepended to the id you set before each request if you're using X-Opaque-Id. Example:
```
client = Elasticsearch::Client.new(opaque_id_prefix: 'eu-west1')
client.search(index: 'myindex', q: 'title:test', opaque_id: '123456')
```
The request will include the following HTTP Header:
```
X-Opaque-Id: eu-west1_123456
```
### Setting Timeouts
For many operations in Elasticsearch, the default timeouts of HTTP libraries are too low.
To increase the timeout, you can use the `request_timeout` parameter:
```
Elasticsearch::Client.new request_timeout: 5*60
```
You can also use the `transport_options` argument documented below.
### Randomizing Hosts
If you pass multiple hosts to the client, it rotates across them in a round-robin fashion, by default.
When the same client runs in multiple processes (e.g. in a Ruby web server such as Thin),
they might all keep connecting to the same nodes "at once". To prevent this, you can randomize the hosts collection on initialization and reloading:
```
Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], randomize_hosts: true
```
### Retrying on Failures
When the client is initialized with multiple hosts, it makes sense to retry a failed request on a different host:
```
Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], retry_on_failure: true
```
By default, the client will retry the request 3 times. You can specify how many times to retry before it raises an exception by passing a number to `retry_on_failure`:
```
Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], retry_on_failure: 5
```
To retry only on specific response statuses, combine the `retry_on_status` option with `retry_on_failure`:
```
Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], retry_on_status: [502, 503], retry_on_failure: 10
```
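At its core, retry-on-failure means re-running the request until it succeeds or the retry budget is exhausted; a minimal sketch of that pattern (the helper name is illustrative, not the client's internals):

```
# Illustrative retry helper: yield, and on error retry up to max_retries
# times before re-raising the exception.
def with_retries(max_retries)
  attempts = 0
  begin
    yield
  rescue StandardError
    attempts += 1
    retry if attempts <= max_retries
    raise
  end
end

calls = 0
with_retries(3) do
  calls += 1
  raise 'connection failed' if calls < 3   # succeed on the third attempt
end
puts calls
# => 3
```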
### Reloading Hosts
Elasticsearch by default dynamically discovers new nodes in the cluster. You can leverage this in the client, and periodically check for new nodes to spread the load.
To retrieve and use the information from the
[*Nodes Info API*](http://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-info.html)
on every 10,000th request:
```
Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], reload_connections: true
```
You can pass a specific number of requests after which the reloading should be performed:
```
Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], reload_connections: 1_000
```
To reload connections on failures, use:
```
Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], reload_on_failure: true
```
By default, reloading times out if not finished within 1 second. To change this setting:
```
Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], sniffer_timeout: 3
```
**NOTE:** When using reloading hosts ("sniffing") together with authentication, just pass the scheme,
user and password with the host info -- or, for more clarity, in the `http` options:
```
Elasticsearch::Client.new host: 'localhost:9200',
http: { scheme: 'https', user: 'U', password: 'P' },
reload_connections: true,
reload_on_failure: true
```
### Connection Selector
By default, the client will rotate the connections in a round-robin fashion, using the
`Elasticsearch::Transport::Transport::Connections::Selector::RoundRobin` strategy.
You can implement your own strategy to customize the behaviour. For example,
let's have a "rack aware" strategy, which will prefer the nodes with a specific
[attribute](https://github.com/elasticsearch/elasticsearch/blob/1.0/config/elasticsearch.yml#L81-L85).
Only when these are unavailable will the strategy fall back to the other nodes:
```
class RackIdSelector
  include Elasticsearch::Transport::Transport::Connections::Selector::Base

  def select(options={})
    connections.select do |c|
      # Try selecting the nodes with a `rack_id:x1` attribute first
      c.host[:attributes] && c.host[:attributes][:rack_id] == 'x1'
    end.sample || connections.to_a.sample
  end
end

Elasticsearch::Client.new hosts: ['x1.search.org', 'x2.search.org'], selector_class: RackIdSelector
```
### Transport Implementations
By default, the client will use the [*Faraday*](https://rubygems.org/gems/faraday) HTTP library as a transport implementation.
It will auto-detect and use an *adapter* for *Faraday* based on gems loaded in your code,
preferring HTTP clients with support for persistent connections.
To use the [*Patron*](https://github.com/toland/patron) HTTP library, for example, just require it:
```
require 'patron'
```
Then, create a new client, and the *Patron* gem will be used as the "driver":
```
client = Elasticsearch::Client.new
client.transport.connections.first.connection.builder.adapter
# => Faraday::Adapter::Patron
10.times do
  client.nodes.stats(metric: 'http')['nodes'].values.each do |n|
    puts "#{n['name']} : #{n['http']['total_opened']}"
  end
end
# => Stiletoo : 24
# => Stiletoo : 24
# => Stiletoo : 24
# => ...
```
To use a specific adapter for *Faraday*, pass it as the `adapter` argument:
```
client = Elasticsearch::Client.new adapter: :net_http_persistent
client.transport.connections.first.connection.builder.handlers
# => [Faraday::Adapter::NetHttpPersistent]
```
To pass options to the
[`Faraday::Connection`](https://github.com/lostisland/faraday/blob/master/lib/faraday/connection.rb)
constructor, use the `transport_options` key:
```
client = Elasticsearch::Client.new transport_options: {
request: { open_timeout: 1 },
headers: { user_agent: 'MyApp' },
params: { :format => 'yaml' },
ssl: { verify: false }
}
```
To configure the *Faraday* instance directly, use a block:
```
require 'patron'
client = Elasticsearch::Client.new(host: 'localhost', port: '9200') do |f|
  f.response :logger
  f.adapter :patron
end
```
You can use any standard Faraday middleware and plugins in the configuration block. You can also initialize the transport class yourself, and pass it to the client constructor as the `transport` argument:
```
require 'patron'
transport_configuration = lambda do |f|
  f.response :logger
  f.adapter :patron
end

transport = Elasticsearch::Transport::Transport::HTTP::Faraday.new \
hosts: [ { host: 'localhost', port: '9200' } ],
&transport_configuration
# Pass the transport to the client
#
client = Elasticsearch::Client.new transport: transport
```
Instead of passing the transport to the constructor, you can inject it at run time:
```
# Set up the transport
#
faraday_configuration = lambda do |f|
  f.instance_variable_set :@ssl, { verify: false }
  f.adapter :excon
end

faraday_client = Elasticsearch::Transport::Transport::HTTP::Faraday.new \
hosts: [ { host: 'my-protected-host',
port: '443',
user: 'USERNAME',
password: 'PASSWORD',
scheme: 'https'
}],
&faraday_configuration
# Create a default client
#
client = Elasticsearch::Client.new
# Inject the transport to the client
#
client.transport = faraday_client
```
You can also use a bundled [*Curb*](https://rubygems.org/gems/curb) based transport implementation:
```
require 'curb'
require 'elasticsearch/transport/transport/http/curb'
client = Elasticsearch::Client.new transport_class: Elasticsearch::Transport::Transport::HTTP::Curb
client.transport.connections.first.connection
# => #<Curl::Easy http://localhost:9200/>
```
It's possible to customize the *Curb* instance by passing a block to the constructor as well
(in this case, as an inline block):
```
transport = Elasticsearch::Transport::Transport::HTTP::Curb.new \
hosts: [ { host: 'localhost', port: '9200' } ],
& lambda { |c| c.verbose = true }
client = Elasticsearch::Client.new transport: transport
```
You can write your own transport implementation easily, by including the
`Elasticsearch::Transport::Transport::Base` module, implementing the required contract,
and passing it to the client as the `transport_class` parameter -- or injecting it directly.
### Serializer Implementations
By default, the [MultiJSON](http://rubygems.org/gems/multi_json) library is used as the serializer implementation, and it will pick up the "right" adapter based on gems available.
The serialization component is pluggable, though, so you can write your own by including the
`Elasticsearch::Transport::Transport::Serializer::Base` module, implementing the required contract,
and passing it to the client as the `serializer_class` or `serializer` parameter.
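A custom serializer only needs to implement the `dump`/`load` contract; the sketch below uses the standard `json` library and is shown standalone, without the `Serializer::Base` mixin or the transport argument a real implementation would take:

```
require 'json'

# Minimal serializer sketch: dump Ruby objects to JSON strings and back.
# A real implementation would also include Serializer::Base and accept
# the transport in its constructor.
class JsonSerializer
  def dump(object, options = {})
    JSON.generate(object)
  end

  def load(string, options = {})
    JSON.parse(string)
  end
end

serializer = JsonSerializer.new
puts serializer.dump('query' => { 'match_all' => {} })
# => {"query":{"match_all":{}}}
```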
### Exception Handling
The library defines a [number of exception classes](https://github.com/elasticsearch/elasticsearch-ruby/blob/master/elasticsearch-transport/lib/elasticsearch/transport/transport/errors.rb)
for various client and server errors, as well as unsuccessful HTTP responses,
making it possible to `rescue` specific exceptions with desired granularity.
The highest-level exception is `Elasticsearch::Transport::Transport::Error`,
raised for any generic client *or* server errors.
`Elasticsearch::Transport::Transport::ServerError` is raised for server errors only.
As an example of response-specific errors, a `404` response status raises an `Elasticsearch::Transport::Transport::Errors::NotFound` exception.
Finally, `Elasticsearch::Transport::Transport::SnifferTimeoutError` is raised when connection reloading ("sniffing") times out.
Development and Community
---
For local development, clone the repository and run `bundle install`. See `rake -T` for a list of available Rake tasks for running tests, generating documentation, starting a testing cluster, etc.
Bug fixes and features must be covered by unit tests. Integration tests are written in Ruby 1.9 syntax.
GitHub pull requests and issues are used to communicate, send bug reports and code contributions.
The Architecture
---
* `Elasticsearch::Transport::Client` is composed of `Elasticsearch::Transport::Transport`
* `Elasticsearch::Transport::Transport` is composed of `Elasticsearch::Transport::Transport::Connections`,
and an instance of logger, tracer, serializer and sniffer.
* Logger and tracer can be any object conforming to the Ruby logging interface,
i.e. an instance of [`Logger`](http://www.ruby-doc.org/stdlib-1.9.3/libdoc/logger/rdoc/Logger.html),
[*log4r*](https://rubygems.org/gems/log4r), [*logging*](https://github.com/TwP/logging/), etc.
* The `Elasticsearch::Transport::Transport::Serializer::Base` implementations handle converting data for Elasticsearch
(e.g. to JSON). You can implement your own serializer.
* `Elasticsearch::Transport::Transport::Sniffer` allows discovering nodes in the cluster and using them as connections.
* `Elasticsearch::Transport::Transport::Connections::Collection` is composed of
`Elasticsearch::Transport::Transport::Connections::Connection` instances and a selector instance.
* `Elasticsearch::Transport::Transport::Connections::Connection` contains the connection attributes such as hostname and port,
as well as the concrete persistent "session" connected to a specific node.
* The `Elasticsearch::Transport::Transport::Connections::Selector::Base` implementations allow choosing connections from the pool, e.g. in a round-robin or random fashion. You can implement your own selector strategy.
Development
---
To work on the code, clone and bootstrap the main repository first --
please see instructions in the main [README](../README.md#development).
To run tests, launch a testing cluster and use the Rake tasks:
```
time rake test:unit
time rake test:integration
```
Use `COVERAGE=true` before running a test task to check coverage with Simplecov.
License
---
This software is licensed under the [Apache 2 license](./LICENSE).
Package ‘UStatBookABSC’
October 12, 2022
Title A Companion Package to the Book ``U-Statistics, M-Estimation and
Resampling''
Version 1.0.0
Author <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description A set of functions leading to multivariate response L1 regression.
This includes functions on computing Euclidean inner products and norms,
weighted least squares estimates on multivariate responses, function to compute
fitted values and residuals. This package is a companion to the book ``U-Statistics,
M-estimation and Resampling'', by <NAME> and <NAME>, to appear
in 2017 as part of the ``Texts and Readings in Mathematics'' (TRIM) series of
Hindustan Book Agency and Springer-Verlag.
Depends R (>= 3.2.3)
Suggests MASS
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 5.0.1.9000
NeedsCompilation no
Repository CRAN
Date/Publication 2016-12-27 17:50:42
R topics documented:
CCU12_Precip
FitAndResiduals
IdentityMatrix
InnerProduct
L1Regression
Norm
WLS
CCU12_Precip Precipitation for June-September 2012 recorded in Kolkata
Description
Precipitation for June-September 2012 recorded in Kolkata
Usage
data(CCU12_Precip)
Format
A data frame with columns
Date The date in Year-Month-Day format
Precip Precipitation in millimeters
TMax Maximum temperature, in Celsius
TMin Minimum temperature, in Celsius
Examples
Precip <-CCU12_Precip$Precip
TMax <-CCU12_Precip$TMax
plot(TMax, Precip)
FitAndResiduals Computes a linear regression fit and residuals on possibly multivariate
responses
Description
Computes a linear regression fit and residuals on possibly multivariate responses
Usage
FitAndResiduals(Y, X, BetaHat)
Arguments
Y a numeric matrix, to act as response
X a numeric matrix, to act as covariates
BetaHat a numeric matrix, to act as slope
Value
a list consisting of two vectors, the fitted values and residuals
Examples
## Not run:
DataY = cbind(CCU12_Precip$Precip, CCU12_Precip$TMax);
DataX = cbind(rep(1, length(CCU12_Precip$Precip)), CCU12_Precip$TMin)
BetaHat.New = WLS(DataY, DataX)
Results.New = FitAndResiduals(DataY, DataX, BetaHat.New);
## End(Not run)
IdentityMatrix Obtains the identity matrix of dimension n
Description
Obtains the identity matrix of dimension n
Usage
IdentityMatrix(n)
Arguments
n an integer
Value
an identity matrix
Examples
I.3 = IdentityMatrix(3)
print(I.3)
InnerProduct Computes the Euclidean inner product
Description
Computes the Euclidean inner product
Usage
InnerProduct(a, b, na.rm)
Arguments
a a numeric vector
b another numeric vector
na.rm logical
Value
a real number
Examples
x <- c(1, 2, 3)
y <- c(3, 0, 1)
InnerProduct(x, y)
L1Regression Computes an L1 multivariate regression, the equivalent of median regression
when the response is possibly multivariate
Description
Computes an L1 multivariate regression. This is the equivalent of median regression when the
response is possibly multivariate.
Usage
L1Regression(Data.Y, Data.X, Weights,
InitialValue = "WLS", MaxIteration, epsilon, lambda)
Arguments
Data.Y a numeric matrix, to act as response
Data.X a numeric matrix, to act as covariates
Weights a numeric matrix, to act as weights
InitialValue a character, to denote how the initial estimate will be computed; currently the
only available option is WLS
MaxIteration an integer, for the maximum number of iterations allowed
epsilon a positive real number, as tolerance value for convergence
lambda a real number between 0 and 1, to control the amount of update allowed in each
iteration
Value
a list consisting of the iteration value at the last step, the difference in norms between the last two
iterations, and the estimate of slope
Examples
## Not run:
DataY = cbind(CCU12_Precip$Precip, CCU12_Precip$TMax);
DataX = cbind(rep(1, length(CCU12_Precip$Precip)), CCU12_Precip$TMin)
A2 = L1Regression(DataY, DataX)
## End(Not run)
Norm Computes the Euclidean norm
Description
Computes the Euclidean norm
Usage
Norm(a, na.rm)
Arguments
a a numeric vector
na.rm logical
Value
a real number
Examples
x <- c(1, 2)
Norm(x)
WLS Computes a weighted least squares linear regression on possibly multivariate
responses
Description
Computes a weighted least squares linear regression on possibly multivariate responses
Usage
WLS(Y, X, W)
Arguments
Y a numeric matrix, to act as response
X a numeric matrix, to act as covariates
W a numeric matrix, to act as weights
Value
a vector of regression coefficients
Examples
## Not run:
DataY = cbind(CCU12_Precip$Precip, CCU12_Precip$TMax);
DataX = cbind(rep(1, length(CCU12_Precip$Precip)), CCU12_Precip$TMin)
BetaHat.New = WLS(DataY, DataX)
## End(Not run)
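For reference, the weighted least squares estimate computed by this function has the standard closed form (assuming the supplied weight matrix W is positive definite and X has full column rank):

```
\hat{\beta} = (X^{T} W X)^{-1} X^{T} W Y
```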
Package ‘INLABMA’
October 12, 2022
Version 0.1-11
Date 2018-07-26
Encoding latin1
Title Bayesian Model Averaging with INLA
Author
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Depends R(>= 2.15.0), parallel, sp
Imports Matrix, spdep
Suggests INLA
Description Fit Spatial Econometrics models using Bayesian model averaging
on models fitted with INLA. The INLA package can be obtained from
<http://www.r-inla.org>.
License GPL (>= 2)
Additional_repositories https://inla.r-inla-download.org/R/stable/
RoxygenNote 6.0.1
NeedsCompilation no
Repository CRAN
Date/Publication 2018-07-26 22:10:06 UTC
R topics documented:
BMArho
fitmarg
fitmargBMA
INLABMA
INLAMH
leroux.inla
logprrho
mysplinefun
recompute.impacts
rescalemarg
sem.inla
trIrhoWinv
BMArho Compute BMA of fitted.values from a list of INLA objects
Description
This function performs a weighted average of the component fitted.values from a list of INLA objects.
Usage
BMArho(models, rho, logrhoprior = rep(1, length(rho)))
Arguments
models List of INLA models fitted for different values of rho
rho Vector of values of rho used to compute the list in models.
logrhoprior Log-prior density for each value of rho.
Details
The different fitted.values are weighted using the values of the marginal likelihood of the fitted models and the prior of parameter rho. rho is a parameter that is fixed when computing models and that has a log-prior density defined in logrhoprior.
Value
Vector of averaged fitted values.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>, <NAME>, Håvard Rue (2014). Approximate Bayesian inference for spatial econometrics models. Spatial Statistics, Volume 9, 146-165.
<NAME>, <NAME>, Håvard Rue (2015). Spatial Data Analysis with R-INLA with Some Extensions. Journal of Statistical Software, 63(20), 1-31. URL http://www.jstatsoft.org/v63/i20/.
See Also
INLABMA
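The weighting that BMArho applies can be sketched in plain R. Everything below is a hedged illustration with made-up numbers: `mliks` stands in for the log marginal likelihoods extracted from each fitted INLA model, and the weight normalisation subtracts the maximum for numerical stability.

```r
# Hypothetical sketch of BMA weighting over three models fitted at
# three values of rho; the inputs are invented for illustration.
mliks <- c(-105.2, -104.1, -104.8)   # stand-in log marginal likelihoods
logrhoprior <- rep(0, 3)             # flat log-prior on rho
logw <- mliks + logrhoprior
w <- exp(logw - max(logw))           # subtract max to avoid underflow
w <- w / sum(w)                      # weights now sum to one
# Averaged fitted values: each column holds one model's fitted.values.
fitted <- matrix(c(1, 2, 3, 1.1, 2.1, 3.1, 0.9, 1.9, 2.9), ncol = 3)
bma.fitted <- as.vector(fitted %*% w)
```

The same pattern — marginal likelihood plus log-prior, exponentiated and normalised — underlies the other BMA helpers in the package.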
fitmarg Fit posterior marginal distributions to points
Description
Compute (and re-scale, if necessary) the marginal from a set of points x and values of log-likelihood
logy and log-prior density logp.
Usage
fitmarg(x, logy, logp = 0, usenormal = FALSE)
Arguments
x Values of the random variable.
logy Log-likelihood.
logp Log-prior density.
usenormal Whether use a Normal distribution for the fitted marginal.
Details
Fits a marginal at a set of points x from their log-likelihood and log-prior. The fitted marginal is re-scaled to integrate to one if necessary. If usenormal=TRUE then the fitted marginal is assumed to be Normal, and is computed using the posterior mean and standard deviation of x.
Value
A function with the fitted marginal is returned.
Author(s)
<NAME> <<EMAIL>>
See Also
fitmargBMA, fitmargBMA2, mysplinefun
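The fitting-and-rescaling step described above can be sketched in a few lines. This is a hedged stand-in (the log-likelihood is faked with a Normal density and the prior is flat), not the package's implementation:

```r
# Hypothetical sketch of fitmarg's core idea: combine log-likelihood and
# log-prior at a grid of points, exponentiate safely, normalise so the
# tabulated marginal integrates to one, then return a spline function.
x <- seq(-4, 4, by = 0.01)
logy <- dnorm(x, mean = 1, log = TRUE)   # stand-in log-likelihood
logp <- 0                                # flat log-prior
y <- exp(logy + logp - max(logy + logp)) # avoid underflow
y <- y / (sum(y) * 0.01)                 # rescale: grid spacing is 0.01
marg <- splinefun(x, y)                  # interpolating function, as fitmarg returns
```

Unlike this sketch, the function returned by the package (via mysplinefun) is forced to 0 outside the range of x.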
fitmargBMA Compute marginals using Bayesian Model Averaging
Description
fitmargBMA takes a list of marginal distributions and weights (presumably, based on some marginal
likelihoods) and computes a final distribution by weighting.
fitmargBMA2 takes a list of INLA models and computes Bayesian Model Averaging on some of
their components.
fitmatrixBMA performs averaging on a list of matrices.
fitlistBMA performs averaging of elements in lists.
Usage
fitmargBMA(margs, ws, len = 100)
fitmargBMA2(models, ws, item)
fitmatrixBMA(models, ws, item)
fitlistBMA(models, ws, item)
Arguments
margs List of 2-column matrices with the values of the (marginal) distributions.
models List of INLA models to be averaged.
ws Vector of weights. They do not need to sum up to one.
len Length of the x-vector to compute the weighted distribution.
item Name of the elements of an INLA object to be used in the Model Averaging.
Details
For fitmargBMA, the distributions provided are averaged according to the weights provided. A new probability distribution is obtained.
fitmargBMA2 uses a list of INLA models to compute Model Averaging on some of their components
(for example, the fitted values).
fitmatrixBMA performs averaging on a list of matrices.
fitlistBMA performs element-wise averaging over a list of lists of matrices.
Value
fitmargBMA returns a 2-column matrix with the weighted marginal distribution.
fitmargBMA2 returns a list of weighted components.
fitmatrixBMA returns a matrix.
fitlistBMA returns a list.
Author(s)
<NAME> <<EMAIL>>
INLABMA Perform complete Bayesian Model Averaging on some Spatial Econometrics models
Description
This function performs Bayesian Model Averaging on a list of different Spatial Econometrics models. These models have been computed under different values of the spatial autocorrelation parameter rho.
Usage
INLABMA(models, rho, logrhoprior = rep(1, length(rho)), impacts = FALSE,
usenormal = FALSE)
Arguments
models List of INLA models, computed for different values of rho.
rho A vector with the values of rho used to compute models.
logrhoprior Vector with the values of the log-prior density of rho.
impacts Logical. Whether impacts should be computed.
usenormal Logical. Whether the posterior marginal of rho is assumed to be Gaussian.
Details
This function performs BMA on most of the components of an INLA model using the marginal likelihoods of the models and the provided log-prior density of rho.
Value
A list with the averaged components. Another component called rho is added, with its posterior
marginal and some other summary information.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>, <NAME>, Håvard Rue (2014). Approximate Bayesian inference for
spatial econometrics models. Spatial Statistics, Volume 9, 146-165.
<NAME>, <NAME>, Håvard Rue (2015). Spatial Data Analysis with R-INLA
with Some Extensions. Journal of Statistical Software, 63(20), 1-31. URL http://www.jstatsoft.org/v63/i20/.
See Also
sem.inla, slm.inla, sdm.inla
INLAMH Perform INLA with MCMC.
Description
This function implements the Metropolis-Hastings algorithm using repeated calls to R-INLA to fit the conditional model on the current state of the MCMC simulations.
Usage
INLAMH(d, fit.inla, b.init, rq, dq, prior, n.sim = 200, n.burnin = 100,
n.thin = 1, n.errors = 20, verbose = FALSE)
Arguments
d Data.frame with the data used to fit the model with R-INLA.
fit.inla A function used to fit the model with R-INLA. It should take at least two arguments: a data.frame (first) and an object with the actual value of the sampled parameters. This function must return a list of two components: model.sim (an ’inla’ object with the fitted model) and ’mlik’ (the marginal likelihood as returned by INLA in model.sim$mlik).
b.init Initial values of the model parameters for the Metropolis-Hastings algorithm.
rq Sampling from the proposal distribution. It must take one argument: the current
state of the Markov chain.
dq Density of the proposal distribution. It takes two arguments: current state and
proposed new state.
prior Prior distribution of the model parameters.
n.sim Total number of simulations to be done.
n.burnin Number of burn-in simulation (thinning is ignored here).
n.thin Thinning to be applied to the simulations after burn-in.
n.errors This is the number of errors allowed when calling inla().
verbose Whether to show some running information or not (default FALSE).
Details
This function implements the Metropolis-Hastings algorithm using INLA (i.e., INLA within MCMC)
at every step. In practice, only a few of the model parameters are sampled in the MCMC steps and
the posterior marginal of the remainder of parameters is obtained by Bayesian model averaging of
the conditional marginals returned by R-INLA at each step of the Metropolis-Hastings algorithm.
Value
A list with three components:
acc.sim A vector of logical values (of length ’n.sim’) showing whether a given proposal has been accepted or not. This is useful to compute the acceptance rate.
model.sim A list with the models fitted, as returned by fit.inla().
b.sim List of all sampled values of the model parameters. It is a list because the sampled values can be vectors.
Author(s)
<NAME>.
References
<NAME> and <NAME> (2017). Markov Chain Monte Carlo with the Integrated
Nested Laplace Approximation. https://arxiv.org/abs/1210.4908.
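The Metropolis-Hastings loop that INLAMH runs can be made concrete with a self-contained toy. Everything here is a hedged stand-in: `fit.inla` is faked with a closed-form Gaussian log-likelihood so the skeleton runs without INLA, and `rq`, `dq` and `prior` follow the interfaces documented above.

```r
# Toy INLA-within-MCMC skeleton (NOT the package's code): sample a single
# mean parameter b; 'mlik' is replaced by an exact Gaussian log-likelihood.
set.seed(1)
d <- rnorm(50, mean = 1.5)
fit.inla <- function(d, b) list(model.sim = NULL,
                                mlik = sum(dnorm(d, mean = b, log = TRUE)))
rq <- function(b) rnorm(1, mean = b, sd = 0.5)         # random-walk proposal
dq <- function(b.old, b.new) dnorm(b.new, b.old, 0.5)  # proposal density
prior <- function(b) dnorm(b, 0, 10)
b.cur <- 0
fit.cur <- fit.inla(d, b.cur)
b.sim <- numeric(200)
for (i in 1:200) {
  b.new <- rq(b.cur)
  fit.new <- fit.inla(d, b.new)
  log.alpha <- (fit.new$mlik + log(prior(b.new)) + log(dq(b.new, b.cur))) -
               (fit.cur$mlik + log(prior(b.cur)) + log(dq(b.cur, b.new)))
  if (log(runif(1)) < log.alpha) { b.cur <- b.new; fit.cur <- fit.new }
  b.sim[i] <- b.cur
}
```

In the real function, each accepted state carries a full conditional INLA fit, and the conditional marginals are later averaged over `b.sim` to recover the unconditional posteriors.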
leroux.inla Fit Leroux et al’s spatial model.
Description
This function fits the model by Leroux et al. for a given value of the parameter lambda, i.e., the mixture parameter that appears in the variance.
Usage
leroux.inla(formula, d, W, lambda, improve = TRUE, fhyper = NULL, ...)
Arguments
formula Formula of the fixed effects.
d A data.frame with the data to be used.
W Adjacency matrix.
lambda Parameter used in the mixture of the two precision matrices.
improve Logical. Whether to improve the fitted models to obtain better estimates of the
marginal likelihoods.
fhyper Extra arguments passed to the definition of the hyperparameters.
... Extra arguments passed to function inla.
Details
This function fits the model proposed by Leroux et al. (1999) for a given value of parameter lambda. This parameter controls the mixture between a diagonal precision matrix (lambda=1) and an intrinsic CAR precision matrix (lambda=0).
The marginal log-likelihood is corrected to add half the log-determinant of the precision matrix.
Value
An INLA object.
Author(s)
<NAME> <<EMAIL>>
References
Leroux B, Lei X, Breslow N (1999). Estimation of Disease Rates in Small Areas: A New Mixed
Model for Spatial Dependence. In M Halloran, D Berry (eds.), Statistical Models in Epidemiology,
the Environment and Clinical Trials, pp. 135-178. Springer-Verlag, New York.
<NAME>, <NAME>, <NAME> (2014). Approximate Bayesian inference for
spatial econometrics models. Spatial Statistics, Volume 9, 146-165.
<NAME>, <NAME>, <NAME> (2015). Spatial Data Analysis with R-INLA
with Some Extensions. Journal of Statistical Software, 63(20), 1-31. URL http://www.jstatsoft.org/v63/i20/.
See Also
sem.inla,slm.inla,sdm.inla
logprrho Log-prior density for the spatial autocorrelation parameter rho
Description
Compute log-prior density for rho
Usage
logprrho(rho)
Arguments
rho The value to compute the log-density.
Details
This function computes the log-density of the prior for rho according to logit(rho) ~ N(0, prec=.1). This is one of the default priors in R-INLA for spatial autocorrelation parameters.
Value
Numerical.
Author(s)
<NAME> <<EMAIL>>
Examples
rrho<-seq(.01, .99, length.out=100)
plot(rrho, exp(logprrho(rrho)))
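The prior above can be re-derived by hand: take the Normal density on the logit scale and add the log-Jacobian of the logit transform so the density lives on the rho scale. The function below is a hedged re-implementation sketch, not the package's own code:

```r
# Hypothetical re-derivation of the prior described in the Details:
# logit(rho) ~ N(0, precision = 0.1), i.e. sd = sqrt(1/0.1), plus
# log |d logit(rho)/d rho| = -log(rho) - log(1 - rho).
logprrho2 <- function(rho) {
  dnorm(log(rho / (1 - rho)), mean = 0, sd = sqrt(1 / 0.1), log = TRUE) -
    log(rho) - log(1 - rho)
}
```

By symmetry of both the Normal density and the Jacobian, the resulting density is symmetric about rho = 0.5.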
mysplinefun Compute spline function
Description
This function is similar to splinefun but it returns 0 outside the range of x.
Usage
mysplinefun(x, y = NULL, method = c("fmm", "periodic", "natural", "monoH.FC")[1],
ties = mean)
Arguments
x x-values to use in the interpolation.
y y-values to use in the interpolation (optional).
method Method used to compute the spline. See splinefun for details.
ties Handling of tied ’x’ values. See splinefun for details.
Details
This function calls splinefun and returns a function with the fitted spline. The main difference is that this new function returns 0 outside the range of x.
Value
Returns a function with x and deriv arguments. See splinefun for details.
Author(s)
<NAME> <<EMAIL>>
See Also
splinefun
recompute.impacts Recompute the impact summaries from the marginals
Description
This function recomputes the impacts summaries using the (approximated) marginals rather than by weighting the different summaries.
Usage
recompute.impacts(obj, impacts = c("total", "direct", "indirect"))
Arguments
obj Object with a resulting model obtained by Bayesian Model Averaging with INLABMA.
impacts Types of impacts to recompute.
Details
This function uses the impacts marginals to compute some summary statistics. By default, the summary of the impacts is obtained by weighting the different summaries used in Bayesian Model Averaging with function INLABMA.
Value
Original object with the updated summary statistics of the impacts.
Author(s)
<NAME> <<EMAIL>>
References
Bivand et al. (2013)
See Also
INLABMA
rescalemarg Re-scale marginal distribution to compute the distribution of w*x
Description
This function takes a marginal distribution (represented by a 2-column matrix) and computes the marginal distribution of w*x.
Usage
rescalemarg(xx, w)
Arguments
xx 2-column matrix with x and y-values.
w Weight to re-scale the y-values.
Details
This function simply re-scales the x-values (and their associated densities) to obtain the distribution of w*x.
Value
A 2-column matrix with the new values of w*x and their associated probability densities. This is also an object of class inla.marginal.
Author(s)
<NAME> <<EMAIL>>
References
INLA
See Also
inla.tmarginal
Examples
if(requireNamespace("INLA", quietly = TRUE)) {
require(INLA)
x<-seq(-3,3, by=.01)
xx<-cbind(x, dnorm(x))
xx2<-rescalemarg(xx, 3)
plot(xx, type="l", xlim=c(-9,9))
lines(xx2, col="red")
}
sem.inla Fit spatial econometrics models with INLA
Description
These functions fit some spatial econometrics models for a given value of rho (the spatial autocorrelation parameter). sem.inla fits a spatial error model, slm.inla fits a spatial lag model and sdm.inla fits a spatial Durbin model.
Usage
sem.inla(formula, d, W, rho, improve = TRUE, impacts = FALSE, fhyper = NULL,
probit = FALSE, ...)
slm.inla(formula, d, W, rho, mmatrix = NULL, improve = TRUE, impacts = FALSE,
fhyper = NULL, probit = FALSE, ...)
sdm.inla(formula, d, W, rho, mmatrix = NULL, intercept = TRUE, impacts = FALSE,
improve = TRUE, fhyper = NULL, probit = FALSE, ...)
sac.inla(formula, d, W.rho, W.lambda, rho, lambda, mmatrix = NULL,
improve = TRUE, impacts = FALSE, fhyper = NULL, probit = FALSE, ...)
Arguments
formula Formula with the response variable, the fixed effects and, possibly, other non-linear effects.
d Data.frame with the data.
W Adjacency matrix.
rho Value of the spatial autocorrelation parameter. For the SAC model, spatial autocorrelation term on the response.
W.rho For the SAC model, adjacency matrix associated to the autocorrelation on the response.
W.lambda For the SAC model, adjacency matrix associated to the autocorrelation on the error term.
lambda For the SAC model, spatial autocorrelation of the error term.
mmatrix Design matrix of fixed effects.
intercept Logical. Whether an intercept has been included in the model.
improve Logical. Whether to improve model fitting (this may require more computing time).
impacts Logical. Whether impacts are computed.
fhyper Options to be passed to the definition of the hyper-parameters in the spatial effects.
probit Logical. Whether a probit model is used. Note this is only used when computing the impacts and that argument family must be set accordingly.
... Other arguments passed to function inla.
Details
These functions fit a spatial econometrics model with a fixed value of the spatial autocorrelation parameter rho.
In addition, the marginal log-likelihood is corrected to account for the variance-covariance matrix of the error term or random effects.
Value
An inla object.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>, <NAME>, Håvard Rue (2014). Approximate Bayesian inference for spatial econometrics models. Spatial Statistics, Volume 9, 146-165.
<NAME>, <NAME>, Håvard Rue (2015). Spatial Data Analysis with R-INLA with Some Extensions. Journal of Statistical Software, 63(20), 1-31. URL http://www.jstatsoft.org/v63/i20/.
<NAME> and <NAME> (2016). Spatial Models with the Integrated
Nested Laplace Approximation within Markov Chain Monte Carlo. Submitted.
See Also
leroux.inla
Examples
## Not run:
if(requireNamespace("INLA", quietly = TRUE)) {
require(INLA)
require(spdep)
data(columbus)
lw <- nb2listw(col.gal.nb, style="W")
#Maximum Likelihood (ML) estimation
colsemml <- errorsarlm(CRIME ~ INC + HOVAL, data=columbus, lw, method="eigen",
quiet=FALSE)
colslmml <- lagsarlm(CRIME ~ INC + HOVAL, data=columbus, lw, method="eigen",
type="lag", quiet=FALSE)
colsdmml <- lagsarlm(CRIME ~ INC + HOVAL, data=columbus, lw, method="eigen",
type="mixed", quiet=FALSE)
#Define grid on rho
rrho<-seq(-1, .95, length.out=40)
#Adjacency matrix
W <- as(as_dgRMatrix_listw(nb2listw(col.gal.nb)), "CsparseMatrix")
#Index for spatial random effects
columbus$idx<-1:nrow(columbus)
#Formula
form<- CRIME ~ INC + HOVAL
zero.variance = list(prec=list(initial = 25, fixed=TRUE))
seminla<-mclapply(rrho, function(rho){
sem.inla(form, d=columbus, W=W, rho=rho,
family = "gaussian", impacts=FALSE,
control.family = list(hyper = zero.variance),
control.predictor=list(compute=TRUE),
control.compute=list(dic=TRUE, cpo=TRUE),
control.inla=list(print.joint.hyper=TRUE),
#tolerance=1e-20, h=1e-6),
verbose=FALSE
)
})
slminla<-mclapply(rrho, function(rho){
slm.inla(form, d=columbus, W=W, rho=rho,
family = "gaussian", impacts=FALSE,
control.family = list(hyper = zero.variance),
control.predictor=list(compute=TRUE),
control.compute=list(dic=TRUE, cpo=TRUE),
control.inla=list(print.joint.hyper=TRUE),
#tolerance=1e-20, h=1e-6),
verbose=FALSE
)
})
sdminla<-mclapply(rrho, function(rho){
sdm.inla(form, d=columbus, W=W, rho=rho,
family = "gaussian", impacts=FALSE,
control.family = list(hyper = zero.variance),
control.predictor=list(compute=TRUE),
control.compute=list(dic=TRUE, cpo=TRUE),
control.inla=list(print.joint.hyper=TRUE),
#tolerance=1e-20, h=1e-6),
verbose=FALSE
)
})
#BMA using a uniform prior (in the log-scale) and using a Gaussian
#approximation to the marginal
sembma<-INLABMA(seminla, rrho, 0, usenormal=TRUE)
slmbma<-INLABMA(slminla, rrho, 0, usenormal=TRUE)
sdmbma<-INLABMA(sdminla, rrho, 0, usenormal=TRUE)
#Display results
plot(sembma$rho$marginal, type="l", ylim=c(0,5))
lines(slmbma$rho$marginal, lty=2)
lines(sdmbma$rho$marginal, lty=3)
#Add ML estimates
abline(v=colsemml$lambda, col="red")
abline(v=colslmml$rho, col="red", lty=2)
abline(v=colsdmml$rho, col="red", lty=3)
#Legend
legend(-1,5, c("SEM", "SLM", "SDM"), lty=1:3)
}
## End(Not run)
trIrhoWinv Compute trace of (I-rho*W)^-1 matrix
Description
This function computes (or estimates) the trace of matrix (I-rho*W)^-1, which is often needed when
computing impacts in some spatial econometrics models.
Usage
trIrhoWinv(W, rho, offset = 0, order = 20, direct = TRUE, Df = Matrix::Diagonal(nrow(W)))
Arguments
W Adjacency matrix. Usually, it is row-standardised.
rho Value of spatial autocorrelation parameter rho.
offset Number of times (I-rho*W)^-1 is multiplied by W (for sdm model).
order Order of Taylor expansion used in the approximation of the trace.
direct Use direct method, i.e., matrix multiplication, etc.
Df Diagonal matrix used to compute the impacts in the Probit model; only used if direct=TRUE.
Details
This function computes the trace of (I-rho*W)^-1, which is later used to compute the impacts.
This is an internal function.
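The Taylor-expansion route mentioned in the Arguments can be sketched directly: since (I - rho*W)^-1 = sum over k of rho^k W^k (for |rho| times the spectral radius of W below 1), the trace is the truncated sum of rho^k tr(W^k). This is a hedged illustration of the idea, not the package's internal code:

```r
# Hypothetical sketch of the Taylor-series trace approximation:
# tr((I - rho*W)^-1) = sum_{k >= 0} rho^k * tr(W^k), truncated at 'order'.
tr.approx <- function(W, rho, order = 20) {
  n <- nrow(W)
  tot <- n                      # k = 0 term: tr(I) = n
  Wk <- W
  for (k in 1:order) {
    tot <- tot + rho^k * sum(diag(Wk))
    Wk <- Wk %*% W              # advance to W^(k+1)
  }
  tot
}
```

For a small W the approximation can be checked against the exact trace, sum(diag(solve(diag(nrow(W)) - rho * W))), which is what the direct=TRUE path effectively computes.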
Value
Numerical value.
Author(s)
<NAME> <<EMAIL>>
References
LeSage and Pace (2008); Bivand et al. (2013).
See Also
sem.inla, slm.inla, sdm.inla

Crate rusoto_ram
===
Use AWS Resource Access Manager to share AWS resources between AWS accounts. To share a resource, you create a resource share, associate the resource with the resource share, and specify the principals that can access the resources associated with the resource share. The following principals are supported: AWS accounts, organizational units (OU) from AWS Organizations, and organizations from AWS Organizations.
For more information, see the AWS Resource Access Manager User Guide.
If you’re using the service, you’re probably looking for RamClient and Ram.
Structs
---
AcceptResourceShareInvitationRequest / AcceptResourceShareInvitationResponse
AssociateResourceSharePermissionRequest / AssociateResourceSharePermissionResponse
AssociateResourceShareRequest / AssociateResourceShareResponse
CreateResourceShareRequest / CreateResourceShareResponse
DeleteResourceShareRequest / DeleteResourceShareResponse
DisassociateResourceSharePermissionRequest / DisassociateResourceSharePermissionResponse
DisassociateResourceShareRequest / DisassociateResourceShareResponse
EnableSharingWithAwsOrganizationRequest / EnableSharingWithAwsOrganizationResponse
GetPermissionRequest / GetPermissionResponse
GetResourcePoliciesRequest / GetResourcePoliciesResponse
GetResourceShareAssociationsRequest / GetResourceShareAssociationsResponse
GetResourceShareInvitationsRequest / GetResourceShareInvitationsResponse
GetResourceSharesRequest / GetResourceSharesResponse
ListPendingInvitationResourcesRequest / ListPendingInvitationResourcesResponse
ListPermissionsRequest / ListPermissionsResponse
ListPrincipalsRequest / ListPrincipalsResponse
ListResourceSharePermissionsRequest / ListResourceSharePermissionsResponse
ListResourceTypesRequest / ListResourceTypesResponse
ListResourcesRequest / ListResourcesResponse
Principal: Describes a principal for use with AWS Resource Access Manager.
PromoteResourceShareCreatedFromPolicyRequest / PromoteResourceShareCreatedFromPolicyResponse
RamClient: A client for the RAM API.
RejectResourceShareInvitationRequest / RejectResourceShareInvitationResponse
Resource: Describes a resource associated with a resource share.
ResourceShare: Describes a resource share.
ResourceShareAssociation: Describes an association with a resource share.
ResourceShareInvitation: Describes an invitation to join a resource share.
ResourceSharePermissionDetail: Information about an AWS RAM permission.
ResourceSharePermissionSummary: Information about a permission that is associated with a resource share.
ServiceNameAndResourceType: Information about the shareable resource types and the AWS services to which they belong.
Tag: Information about a tag.
TagFilter: Used to filter information based on tags.
TagResourceRequest / TagResourceResponse
UntagResourceRequest / UntagResourceResponse
UpdateResourceShareRequest / UpdateResourceShareResponse
Enums
---
AcceptResourceShareInvitationError: Errors returned by AcceptResourceShareInvitation
AssociateResourceShareError: Errors returned by AssociateResourceShare
AssociateResourceSharePermissionError: Errors returned by AssociateResourceSharePermission
CreateResourceShareError: Errors returned by CreateResourceShare
DeleteResourceShareError: Errors returned by DeleteResourceShare
DisassociateResourceShareError: Errors returned by DisassociateResourceShare
DisassociateResourceSharePermissionError: Errors returned by DisassociateResourceSharePermission
EnableSharingWithAwsOrganizationError: Errors returned by EnableSharingWithAwsOrganization
GetPermissionError: Errors returned by GetPermission
GetResourcePoliciesError: Errors returned by GetResourcePolicies
GetResourceShareAssociationsError: Errors returned by GetResourceShareAssociations
GetResourceShareInvitationsError: Errors returned by GetResourceShareInvitations
GetResourceSharesError: Errors returned by GetResourceShares
ListPendingInvitationResourcesError: Errors returned by ListPendingInvitationResources
ListPermissionsError: Errors returned by ListPermissions
ListPrincipalsError: Errors returned by ListPrincipals
ListResourceSharePermissionsError: Errors returned by ListResourceSharePermissions
ListResourceTypesError: Errors returned by ListResourceTypes
ListResourcesError: Errors returned by ListResources
PromoteResourceShareCreatedFromPolicyError: Errors returned by PromoteResourceShareCreatedFromPolicy
RejectResourceShareInvitationError: Errors returned by RejectResourceShareInvitation
TagResourceError: Errors returned by TagResource
UntagResourceError: Errors returned by UntagResource
UpdateResourceShareError: Errors returned by UpdateResourceShare
Traits
---
Ram: Trait representing the capabilities of the RAM API. RAM clients implement this trait.
Struct rusoto_ram::RamClient
===
```
pub struct RamClient { /* private fields */ }
```
A client for the RAM API.
Implementations
---
source### impl RamClient
source#### pub fn new(region: Region) -> RamClient
Creates a client backed by the default tokio event loop.
The client will use the default credentials provider and tls client.
source#### pub fn new_with<P, D>( request_dispatcher: D, credentials_provider: P, region: Region) -> RamClient where P: ProvideAwsCredentials + Send + Sync + 'static, D: DispatchSignedRequest + Send + Sync + 'static,
source#### pub fn new_with_client(client: Client, region: Region) -> RamClient
Trait Implementations
---
source### impl Clone for RamClient
source#### fn clone(&self) -> RamClient
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Ram for RamClient
source#### fn accept_resource_share_invitation<'life0, 'async_trait>( &'life0 self, input: AcceptResourceShareInvitationRequest) -> Pin<Box<dyn Future<Output = Result<AcceptResourceShareInvitationResponse, RusotoError<AcceptResourceShareInvitationError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Accepts an invitation to a resource share from another AWS account.
source#### fn associate_resource_share<'life0, 'async_trait>( &'life0 self, input: AssociateResourceShareRequest) -> Pin<Box<dyn Future<Output = Result<AssociateResourceShareResponse, RusotoError<AssociateResourceShareError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Associates the specified resource share with the specified principals and resources.
source#### fn associate_resource_share_permission<'life0, 'async_trait>( &'life0 self, input: AssociateResourceSharePermissionRequest) -> Pin<Box<dyn Future<Output = Result<AssociateResourceSharePermissionResponse, RusotoError<AssociateResourceSharePermissionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Associates a permission with a resource share.
source#### fn create_resource_share<'life0, 'async_trait>( &'life0 self, input: CreateResourceShareRequest) -> Pin<Box<dyn Future<Output = Result<CreateResourceShareResponse, RusotoError<CreateResourceShareError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a resource share.
source#### fn delete_resource_share<'life0, 'async_trait>( &'life0 self, input: DeleteResourceShareRequest) -> Pin<Box<dyn Future<Output = Result<DeleteResourceShareResponse, RusotoError<DeleteResourceShareError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes the specified resource share.
source#### fn disassociate_resource_share<'life0, 'async_trait>( &'life0 self, input: DisassociateResourceShareRequest) -> Pin<Box<dyn Future<Output = Result<DisassociateResourceShareResponse, RusotoError<DisassociateResourceShareError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Disassociates the specified principals or resources from the specified resource share.
source#### fn disassociate_resource_share_permission<'life0, 'async_trait>( &'life0 self, input: DisassociateResourceSharePermissionRequest) -> Pin<Box<dyn Future<Output = Result<DisassociateResourceSharePermissionResponse, RusotoError<DisassociateResourceSharePermissionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Disassociates an AWS RAM permission from a resource share.
source#### fn enable_sharing_with_aws_organization<'life0, 'async_trait>( &'life0 self) -> Pin<Box<dyn Future<Output = Result<EnableSharingWithAwsOrganizationResponse, RusotoError<EnableSharingWithAwsOrganizationError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Enables resource sharing within your AWS Organization.
The caller must be the master account for the AWS Organization.
source#### fn get_permission<'life0, 'async_trait>( &'life0 self, input: GetPermissionRequest) -> Pin<Box<dyn Future<Output = Result<GetPermissionResponse, RusotoError<GetPermissionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Gets the contents of an AWS RAM permission in JSON format.
source#### fn get_resource_policies<'life0, 'async_trait>( &'life0 self, input: GetResourcePoliciesRequest) -> Pin<Box<dyn Future<Output = Result<GetResourcePoliciesResponse, RusotoError<GetResourcePoliciesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Gets the policies for the specified resources that you own and have shared.
source#### fn get_resource_share_associations<'life0, 'async_trait>( &'life0 self, input: GetResourceShareAssociationsRequest) -> Pin<Box<dyn Future<Output = Result<GetResourceShareAssociationsResponse, RusotoError<GetResourceShareAssociationsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Gets the resources or principals for the resource shares that you own.
source#### fn get_resource_share_invitations<'life0, 'async_trait>( &'life0 self, input: GetResourceShareInvitationsRequest) -> Pin<Box<dyn Future<Output = Result<GetResourceShareInvitationsResponse, RusotoError<GetResourceShareInvitationsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Gets the invitations that you have received for resource shares.
source#### fn get_resource_shares<'life0, 'async_trait>( &'life0 self, input: GetResourceSharesRequest) -> Pin<Box<dyn Future<Output = Result<GetResourceSharesResponse, RusotoError<GetResourceSharesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Gets the resource shares that you own or the resource shares that are shared with you.
source#### fn list_pending_invitation_resources<'life0, 'async_trait>( &'life0 self, input: ListPendingInvitationResourcesRequest) -> Pin<Box<dyn Future<Output = Result<ListPendingInvitationResourcesResponse, RusotoError<ListPendingInvitationResourcesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the resources in a resource share that is shared with you but for which the invitation is still pending.
source#### fn list_permissions<'life0, 'async_trait>( &'life0 self, input: ListPermissionsRequest) -> Pin<Box<dyn Future<Output = Result<ListPermissionsResponse, RusotoError<ListPermissionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the AWS RAM permissions.
source#### fn list_principals<'life0, 'async_trait>( &'life0 self, input: ListPrincipalsRequest) -> Pin<Box<dyn Future<Output = Result<ListPrincipalsResponse, RusotoError<ListPrincipalsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the principals that you have shared resources with or that have shared resources with you.
source#### fn list_resource_share_permissions<'life0, 'async_trait>( &'life0 self, input: ListResourceSharePermissionsRequest) -> Pin<Box<dyn Future<Output = Result<ListResourceSharePermissionsResponse, RusotoError<ListResourceSharePermissionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the AWS RAM permissions that are associated with a resource share.
source#### fn list_resource_types<'life0, 'async_trait>( &'life0 self, input: ListResourceTypesRequest) -> Pin<Box<dyn Future<Output = Result<ListResourceTypesResponse, RusotoError<ListResourceTypesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the shareable resource types supported by AWS RAM.
source#### fn list_resources<'life0, 'async_trait>( &'life0 self, input: ListResourcesRequest) -> Pin<Box<dyn Future<Output = Result<ListResourcesResponse, RusotoError<ListResourcesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the resources that you added to a resource share, or the resources that are shared with you.
source#### fn promote_resource_share_created_from_policy<'life0, 'async_trait>( &'life0 self, input: PromoteResourceShareCreatedFromPolicyRequest) -> Pin<Box<dyn Future<Output = Result<PromoteResourceShareCreatedFromPolicyResponse, RusotoError<PromoteResourceShareCreatedFromPolicyError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Resource shares that were created by attaching a policy to a resource are visible only to the resource share owner, and the resource share cannot be modified in AWS RAM.
Use this API action to promote the resource share. When you promote the resource share, it becomes:
* Visible to all principals that it is shared with.
* Modifiable in AWS RAM.
source#### fn reject_resource_share_invitation<'life0, 'async_trait>( &'life0 self, input: RejectResourceShareInvitationRequest) -> Pin<Box<dyn Future<Output = Result<RejectResourceShareInvitationResponse, RusotoError<RejectResourceShareInvitationError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Rejects an invitation to a resource share from another AWS account.
source#### fn tag_resource<'life0, 'async_trait>( &'life0 self, input: TagResourceRequest) -> Pin<Box<dyn Future<Output = Result<TagResourceResponse, RusotoError<TagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Adds the specified tags to the specified resource share that you own.
source#### fn untag_resource<'life0, 'async_trait>( &'life0 self, input: UntagResourceRequest) -> Pin<Box<dyn Future<Output = Result<UntagResourceResponse, RusotoError<UntagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Removes the specified tags from the specified resource share that you own.
source#### fn update_resource_share<'life0, 'async_trait>( &'life0 self, input: UpdateResourceShareRequest) -> Pin<Box<dyn Future<Output = Result<UpdateResourceShareResponse, RusotoError<UpdateResourceShareError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates the specified resource share that you own.
Auto Trait Implementations
---
### impl !RefUnwindSafe for RamClient
### impl Send for RamClient
### impl Sync for RamClient
### impl Unpin for RamClient
### impl !UnwindSafe for RamClient
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Trait rusoto_ram::Ram
===
```
pub trait Ram {
fn accept_resource_share_invitation<'life0, 'async_trait>(
&'life0 self,
input: AcceptResourceShareInvitationRequest
) -> Pin<Box<dyn Future<Output = Result<AcceptResourceShareInvitationResponse, RusotoError<AcceptResourceShareInvitationError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn associate_resource_share<'life0, 'async_trait>(
&'life0 self,
input: AssociateResourceShareRequest
) -> Pin<Box<dyn Future<Output = Result<AssociateResourceShareResponse, RusotoError<AssociateResourceShareError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn associate_resource_share_permission<'life0, 'async_trait>(
&'life0 self,
input: AssociateResourceSharePermissionRequest
) -> Pin<Box<dyn Future<Output = Result<AssociateResourceSharePermissionResponse, RusotoError<AssociateResourceSharePermissionError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_resource_share<'life0, 'async_trait>(
&'life0 self,
input: CreateResourceShareRequest
) -> Pin<Box<dyn Future<Output = Result<CreateResourceShareResponse, RusotoError<CreateResourceShareError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_resource_share<'life0, 'async_trait>(
&'life0 self,
input: DeleteResourceShareRequest
) -> Pin<Box<dyn Future<Output = Result<DeleteResourceShareResponse, RusotoError<DeleteResourceShareError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn disassociate_resource_share<'life0, 'async_trait>(
&'life0 self,
input: DisassociateResourceShareRequest
) -> Pin<Box<dyn Future<Output = Result<DisassociateResourceShareResponse, RusotoError<DisassociateResourceShareError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn disassociate_resource_share_permission<'life0, 'async_trait>(
&'life0 self,
input: DisassociateResourceSharePermissionRequest
) -> Pin<Box<dyn Future<Output = Result<DisassociateResourceSharePermissionResponse, RusotoError<DisassociateResourceSharePermissionError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn enable_sharing_with_aws_organization<'life0, 'async_trait>(
&'life0 self
) -> Pin<Box<dyn Future<Output = Result<EnableSharingWithAwsOrganizationResponse, RusotoError<EnableSharingWithAwsOrganizationError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn get_permission<'life0, 'async_trait>(
&'life0 self,
input: GetPermissionRequest
) -> Pin<Box<dyn Future<Output = Result<GetPermissionResponse, RusotoError<GetPermissionError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn get_resource_policies<'life0, 'async_trait>(
&'life0 self,
input: GetResourcePoliciesRequest
) -> Pin<Box<dyn Future<Output = Result<GetResourcePoliciesResponse, RusotoError<GetResourcePoliciesError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn get_resource_share_associations<'life0, 'async_trait>(
&'life0 self,
input: GetResourceShareAssociationsRequest
) -> Pin<Box<dyn Future<Output = Result<GetResourceShareAssociationsResponse, RusotoError<GetResourceShareAssociationsError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn get_resource_share_invitations<'life0, 'async_trait>(
&'life0 self,
input: GetResourceShareInvitationsRequest
) -> Pin<Box<dyn Future<Output = Result<GetResourceShareInvitationsResponse, RusotoError<GetResourceShareInvitationsError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn get_resource_shares<'life0, 'async_trait>(
&'life0 self,
input: GetResourceSharesRequest
) -> Pin<Box<dyn Future<Output = Result<GetResourceSharesResponse, RusotoError<GetResourceSharesError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_pending_invitation_resources<'life0, 'async_trait>(
&'life0 self,
input: ListPendingInvitationResourcesRequest
) -> Pin<Box<dyn Future<Output = Result<ListPendingInvitationResourcesResponse, RusotoError<ListPendingInvitationResourcesError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_permissions<'life0, 'async_trait>(
&'life0 self,
input: ListPermissionsRequest
) -> Pin<Box<dyn Future<Output = Result<ListPermissionsResponse, RusotoError<ListPermissionsError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_principals<'life0, 'async_trait>(
&'life0 self,
input: ListPrincipalsRequest
) -> Pin<Box<dyn Future<Output = Result<ListPrincipalsResponse, RusotoError<ListPrincipalsError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_resource_share_permissions<'life0, 'async_trait>(
&'life0 self,
input: ListResourceSharePermissionsRequest
) -> Pin<Box<dyn Future<Output = Result<ListResourceSharePermissionsResponse, RusotoError<ListResourceSharePermissionsError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_resource_types<'life0, 'async_trait>(
&'life0 self,
input: ListResourceTypesRequest
) -> Pin<Box<dyn Future<Output = Result<ListResourceTypesResponse, RusotoError<ListResourceTypesError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_resources<'life0, 'async_trait>(
&'life0 self,
input: ListResourcesRequest
) -> Pin<Box<dyn Future<Output = Result<ListResourcesResponse, RusotoError<ListResourcesError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn promote_resource_share_created_from_policy<'life0, 'async_trait>(
&'life0 self,
input: PromoteResourceShareCreatedFromPolicyRequest
) -> Pin<Box<dyn Future<Output = Result<PromoteResourceShareCreatedFromPolicyResponse, RusotoError<PromoteResourceShareCreatedFromPolicyError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn reject_resource_share_invitation<'life0, 'async_trait>(
&'life0 self,
input: RejectResourceShareInvitationRequest
) -> Pin<Box<dyn Future<Output = Result<RejectResourceShareInvitationResponse, RusotoError<RejectResourceShareInvitationError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn tag_resource<'life0, 'async_trait>(
&'life0 self,
input: TagResourceRequest
) -> Pin<Box<dyn Future<Output = Result<TagResourceResponse, RusotoError<TagResourceError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn untag_resource<'life0, 'async_trait>(
&'life0 self,
input: UntagResourceRequest
) -> Pin<Box<dyn Future<Output = Result<UntagResourceResponse, RusotoError<UntagResourceError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
fn update_resource_share<'life0, 'async_trait>(
&'life0 self,
input: UpdateResourceShareRequest
) -> Pin<Box<dyn Future<Output = Result<UpdateResourceShareResponse, RusotoError<UpdateResourceShareError>>> + Send + 'async_trait>> where
'life0: 'async_trait,
Self: 'async_trait;
}
```
Trait representing the capabilities of the RAM API. RAM clients implement this trait.
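A minimal usage sketch (not part of the generated docs). It assumes the request/response field names generated from the RAM API shapes (`resource_owner`, `resource_shares`, `name`) and requires valid AWS credentials in the environment:

```rust
use rusoto_core::Region;
use rusoto_ram::{GetResourceSharesRequest, Ram, RamClient};

#[tokio::main]
async fn main() {
    let client = RamClient::new(Region::UsEast1);

    // List the resource shares owned by this account ("SELF");
    // the remaining request fields keep their Default values.
    let request = GetResourceSharesRequest {
        resource_owner: "SELF".to_owned(),
        ..Default::default()
    };

    match client.get_resource_shares(request).await {
        Ok(response) => {
            for share in response.resource_shares.unwrap_or_default() {
                println!("{:?}", share.name);
            }
        }
        Err(err) => eprintln!("GetResourceShares failed: {}", err),
    }
}
```

Every method returns a boxed `Future`, so calls must be `.await`ed inside an async runtime such as Tokio.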
Required Methods
---
source#### fn accept_resource_share_invitation<'life0, 'async_trait>( &'life0 self, input: AcceptResourceShareInvitationRequest) -> Pin<Box<dyn Future<Output = Result<AcceptResourceShareInvitationResponse, RusotoError<AcceptResourceShareInvitationError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Accepts an invitation to a resource share from another AWS account.
source#### fn associate_resource_share<'life0, 'async_trait>( &'life0 self, input: AssociateResourceShareRequest) -> Pin<Box<dyn Future<Output = Result<AssociateResourceShareResponse, RusotoError<AssociateResourceShareError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Associates the specified resource share with the specified principals and resources.
source#### fn associate_resource_share_permission<'life0, 'async_trait>( &'life0 self, input: AssociateResourceSharePermissionRequest) -> Pin<Box<dyn Future<Output = Result<AssociateResourceSharePermissionResponse, RusotoError<AssociateResourceSharePermissionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Associates a permission with a resource share.
source#### fn create_resource_share<'life0, 'async_trait>( &'life0 self, input: CreateResourceShareRequest) -> Pin<Box<dyn Future<Output = Result<CreateResourceShareResponse, RusotoError<CreateResourceShareError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a resource share.
source#### fn delete_resource_share<'life0, 'async_trait>( &'life0 self, input: DeleteResourceShareRequest) -> Pin<Box<dyn Future<Output = Result<DeleteResourceShareResponse, RusotoError<DeleteResourceShareError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes the specified resource share.
source#### fn disassociate_resource_share<'life0, 'async_trait>( &'life0 self, input: DisassociateResourceShareRequest) -> Pin<Box<dyn Future<Output = Result<DisassociateResourceShareResponse, RusotoError<DisassociateResourceShareError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Disassociates the specified principals or resources from the specified resource share.
source#### fn disassociate_resource_share_permission<'life0, 'async_trait>( &'life0 self, input: DisassociateResourceSharePermissionRequest) -> Pin<Box<dyn Future<Output = Result<DisassociateResourceSharePermissionResponse, RusotoError<DisassociateResourceSharePermissionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Disassociates an AWS RAM permission from a resource share.
source#### fn enable_sharing_with_aws_organization<'life0, 'async_trait>( &'life0 self) -> Pin<Box<dyn Future<Output = Result<EnableSharingWithAwsOrganizationResponse, RusotoError<EnableSharingWithAwsOrganizationError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Enables resource sharing within your AWS Organization.
The caller must be the master account for the AWS Organization.
source#### fn get_permission<'life0, 'async_trait>( &'life0 self, input: GetPermissionRequest) -> Pin<Box<dyn Future<Output = Result<GetPermissionResponse, RusotoError<GetPermissionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Gets the contents of an AWS RAM permission in JSON format.
source#### fn get_resource_policies<'life0, 'async_trait>( &'life0 self, input: GetResourcePoliciesRequest) -> Pin<Box<dyn Future<Output = Result<GetResourcePoliciesResponse, RusotoError<GetResourcePoliciesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Gets the policies for the specified resources that you own and have shared.
source#### fn get_resource_share_associations<'life0, 'async_trait>( &'life0 self, input: GetResourceShareAssociationsRequest) -> Pin<Box<dyn Future<Output = Result<GetResourceShareAssociationsResponse, RusotoError<GetResourceShareAssociationsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Gets the resources or principals for the resource shares that you own.
source#### fn get_resource_share_invitations<'life0, 'async_trait>( &'life0 self, input: GetResourceShareInvitationsRequest) -> Pin<Box<dyn Future<Output = Result<GetResourceShareInvitationsResponse, RusotoError<GetResourceShareInvitationsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Gets the invitations that you have received for resource shares.
source#### fn get_resource_shares<'life0, 'async_trait>( &'life0 self, input: GetResourceSharesRequest) -> Pin<Box<dyn Future<Output = Result<GetResourceSharesResponse, RusotoError<GetResourceSharesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Gets the resource shares that you own or the resource shares that are shared with you.
source#### fn list_pending_invitation_resources<'life0, 'async_trait>( &'life0 self, input: ListPendingInvitationResourcesRequest) -> Pin<Box<dyn Future<Output = Result<ListPendingInvitationResourcesResponse, RusotoError<ListPendingInvitationResourcesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the resources in a resource share that is shared with you but for which the invitation is still pending.
source#### fn list_permissions<'life0, 'async_trait>( &'life0 self, input: ListPermissionsRequest) -> Pin<Box<dyn Future<Output = Result<ListPermissionsResponse, RusotoError<ListPermissionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the AWS RAM permissions.
source#### fn list_principals<'life0, 'async_trait>( &'life0 self, input: ListPrincipalsRequest) -> Pin<Box<dyn Future<Output = Result<ListPrincipalsResponse, RusotoError<ListPrincipalsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the principals that you have shared resources with or that have shared resources with you.
source#### fn list_resource_share_permissions<'life0, 'async_trait>( &'life0 self, input: ListResourceSharePermissionsRequest) -> Pin<Box<dyn Future<Output = Result<ListResourceSharePermissionsResponse, RusotoError<ListResourceSharePermissionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the AWS RAM permissions that are associated with a resource share.
source#### fn list_resource_types<'life0, 'async_trait>( &'life0 self, input: ListResourceTypesRequest) -> Pin<Box<dyn Future<Output = Result<ListResourceTypesResponse, RusotoError<ListResourceTypesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the shareable resource types supported by AWS RAM.
source#### fn list_resources<'life0, 'async_trait>( &'life0 self, input: ListResourcesRequest) -> Pin<Box<dyn Future<Output = Result<ListResourcesResponse, RusotoError<ListResourcesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists the resources that you added to a resource share, or the resources that are shared with you.
source#### fn promote_resource_share_created_from_policy<'life0, 'async_trait>( &'life0 self, input: PromoteResourceShareCreatedFromPolicyRequest) -> Pin<Box<dyn Future<Output = Result<PromoteResourceShareCreatedFromPolicyResponse, RusotoError<PromoteResourceShareCreatedFromPolicyError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Resource shares that were created by attaching a policy to a resource are visible only to the resource share owner, and the resource share cannot be modified in AWS RAM.
Use this API action to promote the resource share. When you promote the resource share, it becomes:
* Visible to all principals that it is shared with.
* Modifiable in AWS RAM.
source#### fn reject_resource_share_invitation<'life0, 'async_trait>( &'life0 self, input: RejectResourceShareInvitationRequest) -> Pin<Box<dyn Future<Output = Result<RejectResourceShareInvitationResponse, RusotoError<RejectResourceShareInvitationError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Rejects an invitation to a resource share from another AWS account.
source#### fn tag_resource<'life0, 'async_trait>( &'life0 self, input: TagResourceRequest) -> Pin<Box<dyn Future<Output = Result<TagResourceResponse, RusotoError<TagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Adds the specified tags to the specified resource share that you own.
source#### fn untag_resource<'life0, 'async_trait>( &'life0 self, input: UntagResourceRequest) -> Pin<Box<dyn Future<Output = Result<UntagResourceResponse, RusotoError<UntagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Removes the specified tags from the specified resource share that you own.
source#### fn update_resource_share<'life0, 'async_trait>( &'life0 self, input: UpdateResourceShareRequest) -> Pin<Box<dyn Future<Output = Result<UpdateResourceShareResponse, RusotoError<UpdateResourceShareError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates the specified resource share that you own.
Implementors
---
source### impl Ram for RamClient
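Because `RamClient` is the only stock implementor, code written against `R: Ram` can swap in a mock for unit tests. A sketch (the `permissions` response field is assumed from the generated API shapes):

```rust
use rusoto_ram::{ListPermissionsRequest, Ram};

// Generic over any `Ram` implementor, so tests can substitute a stub client.
async fn count_permissions<R: Ram>(ram: &R) -> Result<usize, Box<dyn std::error::Error>> {
    let response = ram
        .list_permissions(ListPermissionsRequest::default())
        .await?;
    // `permissions` is optional; treat a missing list as empty.
    Ok(response.permissions.map_or(0, |p| p.len()))
}
```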
Struct rusoto_ram::AcceptResourceShareInvitationRequest
===
```
pub struct AcceptResourceShareInvitationRequest {
pub client_token: Option<String>,
pub resource_share_invitation_arn: String,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`resource_share_invitation_arn: String`The Amazon Resource Name (ARN) of the invitation.
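A construction sketch for this request (the ARN below is hypothetical, for illustration only):

```rust
use rusoto_ram::AcceptResourceShareInvitationRequest;

// Supplying a stable `client_token` makes retries of this call idempotent.
let request = AcceptResourceShareInvitationRequest {
    resource_share_invitation_arn:
        "arn:aws:ram:us-east-1:111122223333:resource-share-invitation/EXAMPLE".to_owned(),
    client_token: Some("my-idempotency-token-01".to_owned()),
};
```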
Trait Implementations
---
source### impl Clone for AcceptResourceShareInvitationRequest
source#### fn clone(&self) -> AcceptResourceShareInvitationRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for AcceptResourceShareInvitationRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for AcceptResourceShareInvitationRequest
source#### fn default() -> AcceptResourceShareInvitationRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<AcceptResourceShareInvitationRequest> for AcceptResourceShareInvitationRequest
source#### fn eq(&self, other: &AcceptResourceShareInvitationRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AcceptResourceShareInvitationRequest) -> bool
This method tests for `!=`.
source### impl Serialize for AcceptResourceShareInvitationRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for AcceptResourceShareInvitationRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for AcceptResourceShareInvitationRequest
### impl Send for AcceptResourceShareInvitationRequest
### impl Sync for AcceptResourceShareInvitationRequest
### impl Unpin for AcceptResourceShareInvitationRequest
### impl UnwindSafe for AcceptResourceShareInvitationRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_ram::AcceptResourceShareInvitationResponse
===
```
pub struct AcceptResourceShareInvitationResponse {
pub client_token: Option<String>,
pub resource_share_invitation: Option<ResourceShareInvitation>,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`resource_share_invitation: Option<ResourceShareInvitation>`Information about the invitation.
Trait Implementations
---
source### impl Clone for AcceptResourceShareInvitationResponse
source#### fn clone(&self) -> AcceptResourceShareInvitationResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for AcceptResourceShareInvitationResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for AcceptResourceShareInvitationResponse
source#### fn default() -> AcceptResourceShareInvitationResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for AcceptResourceShareInvitationResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<AcceptResourceShareInvitationResponse> for AcceptResourceShareInvitationResponse
source#### fn eq(&self, other: &AcceptResourceShareInvitationResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AcceptResourceShareInvitationResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AcceptResourceShareInvitationResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for AcceptResourceShareInvitationResponse
### impl Send for AcceptResourceShareInvitationResponse
### impl Sync for AcceptResourceShareInvitationResponse
### impl Unpin for AcceptResourceShareInvitationResponse
### impl UnwindSafe for AcceptResourceShareInvitationResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mutT)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::AssociateResourceSharePermissionRequest
===
```
pub struct AssociateResourceSharePermissionRequest {
pub client_token: Option<String>,
pub permission_arn: String,
pub permission_version: Option<i64>,
pub replace: Option<bool>,
pub resource_share_arn: String,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`permission_arn: String`The Amazon Resource Name (ARN) of the AWS RAM permissions to associate with the resource share.
`permission_version: Option<i64>`The version of the AWS RAM permissions to associate with the resource share.
`replace: Option<bool>`Indicates whether the permission should replace the permissions that are currently associated with the resource share. Use `true` to replace the current permissions. Use `false` to add the permission to the current permissions.
`resource_share_arn: String`The Amazon Resource Name (ARN) of the resource share.
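To make the `replace` semantics concrete, here is a minimal sketch of building this request. It uses a local mirror of the struct above (so the snippet compiles without the `rusoto_ram` crate), and the ARNs are placeholders, not real AWS resources:

```rust
// Local stand-in mirroring rusoto_ram::AssociateResourceSharePermissionRequest,
// so this sketch is self-contained.
#[derive(Default, Debug, Clone, PartialEq)]
struct AssociateResourceSharePermissionRequest {
    client_token: Option<String>,
    permission_arn: String,
    permission_version: Option<i64>,
    replace: Option<bool>,
    resource_share_arn: String,
}

// Build a request that swaps out the share's current permissions for the
// named one; leaving `replace` as None/Some(false) would add it instead.
fn replace_permission(share_arn: &str, permission_arn: &str) -> AssociateResourceSharePermissionRequest {
    AssociateResourceSharePermissionRequest {
        resource_share_arn: share_arn.to_string(),
        permission_arn: permission_arn.to_string(),
        replace: Some(true),
        // Unset optional fields fall back to None via the derived Default.
        ..Default::default()
    }
}

fn main() {
    let req = replace_permission(
        "arn:aws:ram:us-east-1:111122223333:resource-share/example",
        "arn:aws:ram::aws:permission/example-permission",
    );
    assert_eq!(req.replace, Some(true));
    assert_eq!(req.permission_version, None);
    println!("ok");
}
```

With the real crate, the construction pattern is the same: fill the required `String` fields and splat `..Default::default()` for the rest, since `Default` is implemented (see the trait list below).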
Trait Implementations
---
source### impl Clone for AssociateResourceSharePermissionRequest
source#### fn clone(&self) -> AssociateResourceSharePermissionRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for AssociateResourceSharePermissionRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for AssociateResourceSharePermissionRequest
source#### fn default() -> AssociateResourceSharePermissionRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<AssociateResourceSharePermissionRequest> for AssociateResourceSharePermissionRequest
source#### fn eq(&self, other: &AssociateResourceSharePermissionRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AssociateResourceSharePermissionRequest) -> bool
This method tests for `!=`.
source### impl Serialize for AssociateResourceSharePermissionRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for AssociateResourceSharePermissionRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for AssociateResourceSharePermissionRequest
### impl Send for AssociateResourceSharePermissionRequest
### impl Sync for AssociateResourceSharePermissionRequest
### impl Unpin for AssociateResourceSharePermissionRequest
### impl UnwindSafe for AssociateResourceSharePermissionRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_ram::AssociateResourceSharePermissionResponse
===
```
pub struct AssociateResourceSharePermissionResponse {
pub client_token: Option<String>,
pub return_value: Option<bool>,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`return_value: Option<bool>`Indicates whether the request succeeded.
Trait Implementations
---
source### impl Clone for AssociateResourceSharePermissionResponse
source#### fn clone(&self) -> AssociateResourceSharePermissionResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for AssociateResourceSharePermissionResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for AssociateResourceSharePermissionResponse
source#### fn default() -> AssociateResourceSharePermissionResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for AssociateResourceSharePermissionResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<AssociateResourceSharePermissionResponse> for AssociateResourceSharePermissionResponse
source#### fn eq(&self, other: &AssociateResourceSharePermissionResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AssociateResourceSharePermissionResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AssociateResourceSharePermissionResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for AssociateResourceSharePermissionResponse
### impl Send for AssociateResourceSharePermissionResponse
### impl Sync for AssociateResourceSharePermissionResponse
### impl Unpin for AssociateResourceSharePermissionResponse
### impl UnwindSafe for AssociateResourceSharePermissionResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::AssociateResourceShareRequest
===
```
pub struct AssociateResourceShareRequest {
pub client_token: Option<String>,
pub principals: Option<Vec<String>>,
pub resource_arns: Option<Vec<String>>,
pub resource_share_arn: String,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`principals: Option<Vec<String>>`The principals to associate with the resource share. The possible values are IDs of AWS accounts, and the ARNs of organizational units (OUs) or organizations from AWS Organizations.
`resource_arns: Option<Vec<String>>`The Amazon Resource Names (ARNs) of the resources.
`resource_share_arn: String`The Amazon Resource Name (ARN) of the resource share.
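A hedged sketch of building this request for one principal and one resource may help: the struct below is a local mirror of the definition above (so the snippet compiles without `rusoto_ram`), and the ARNs and account ID are placeholders:

```rust
// Local stand-in mirroring rusoto_ram::AssociateResourceShareRequest.
#[derive(Default, Debug, Clone, PartialEq)]
struct AssociateResourceShareRequest {
    client_token: Option<String>,
    principals: Option<Vec<String>>,
    resource_arns: Option<Vec<String>>,
    resource_share_arn: String,
}

// Associate a principal (an account ID here; OU and organization ARNs are
// also accepted) and a resource with an existing share.
fn associate(share_arn: &str, principal: &str, resource_arn: &str) -> AssociateResourceShareRequest {
    AssociateResourceShareRequest {
        resource_share_arn: share_arn.to_string(),
        principals: Some(vec![principal.to_string()]),
        resource_arns: Some(vec![resource_arn.to_string()]),
        ..Default::default()
    }
}

fn main() {
    let req = associate(
        "arn:aws:ram:us-east-1:111122223333:resource-share/example",
        "123456789012",
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0example",
    );
    assert_eq!(req.principals.as_ref().map(|p| p.len()), Some(1));
    println!("ok");
}
```

Both `principals` and `resource_arns` are optional, so a call can associate only principals, only resources, or both in one request.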
Trait Implementations
---
source### impl Clone for AssociateResourceShareRequest
source#### fn clone(&self) -> AssociateResourceShareRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for AssociateResourceShareRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for AssociateResourceShareRequest
source#### fn default() -> AssociateResourceShareRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<AssociateResourceShareRequest> for AssociateResourceShareRequest
source#### fn eq(&self, other: &AssociateResourceShareRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AssociateResourceShareRequest) -> bool
This method tests for `!=`.
source### impl Serialize for AssociateResourceShareRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for AssociateResourceShareRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for AssociateResourceShareRequest
### impl Send for AssociateResourceShareRequest
### impl Sync for AssociateResourceShareRequest
### impl Unpin for AssociateResourceShareRequest
### impl UnwindSafe for AssociateResourceShareRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_ram::AssociateResourceShareResponse
===
```
pub struct AssociateResourceShareResponse {
pub client_token: Option<String>,
pub resource_share_associations: Option<Vec<ResourceShareAssociation>>,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`resource_share_associations: Option<Vec<ResourceShareAssociation>>`Information about the associations.
Trait Implementations
---
source### impl Clone for AssociateResourceShareResponse
source#### fn clone(&self) -> AssociateResourceShareResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for AssociateResourceShareResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for AssociateResourceShareResponse
source#### fn default() -> AssociateResourceShareResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for AssociateResourceShareResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<AssociateResourceShareResponse> for AssociateResourceShareResponse
source#### fn eq(&self, other: &AssociateResourceShareResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AssociateResourceShareResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AssociateResourceShareResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for AssociateResourceShareResponse
### impl Send for AssociateResourceShareResponse
### impl Sync for AssociateResourceShareResponse
### impl Unpin for AssociateResourceShareResponse
### impl UnwindSafe for AssociateResourceShareResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::CreateResourceShareRequest
===
```
pub struct CreateResourceShareRequest {
pub allow_external_principals: Option<bool>,
pub client_token: Option<String>,
pub name: String,
pub permission_arns: Option<Vec<String>>,
pub principals: Option<Vec<String>>,
pub resource_arns: Option<Vec<String>>,
pub tags: Option<Vec<Tag>>,
}
```
Fields
---
`allow_external_principals: Option<bool>`Indicates whether principals outside your AWS organization can be associated with a resource share.
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`name: String`The name of the resource share.
`permission_arns: Option<Vec<String>>`The ARNs of the permissions to associate with the resource share. If you do not specify an ARN for the permission, AWS RAM automatically attaches the default version of the permission for each resource type.
`principals: Option<Vec<String>>`The principals to associate with the resource share. The possible values are IDs of AWS accounts, and the ARNs of organizational units (OUs) or organizations from AWS Organizations.
`resource_arns: Option<Vec<String>>`The Amazon Resource Names (ARNs) of the resources to associate with the resource share.
`tags: Option<Vec<Tag>>`One or more tags.
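Only `name` is required here; every other field defaults to `None`. As a self-contained sketch (the structs below, including a minimal `Tag`, are local mirrors of the rusoto types rather than imports, and the token value is a placeholder), a share can be created like this:

```rust
// Local stand-ins mirroring rusoto_ram's Tag and CreateResourceShareRequest.
#[derive(Default, Debug, Clone, PartialEq)]
struct Tag {
    key: Option<String>,
    value: Option<String>,
}

#[derive(Default, Debug, Clone, PartialEq)]
struct CreateResourceShareRequest {
    allow_external_principals: Option<bool>,
    client_token: Option<String>,
    name: String,
    permission_arns: Option<Vec<String>>,
    principals: Option<Vec<String>>,
    resource_arns: Option<Vec<String>>,
    tags: Option<Vec<Tag>>,
}

// With permission_arns left as None, AWS RAM attaches the default
// permission version for each resource type, per the field docs above.
fn create_share(name: &str, client_token: &str) -> CreateResourceShareRequest {
    CreateResourceShareRequest {
        name: name.to_string(),
        // Reusing the same token on a retry keeps the call idempotent.
        client_token: Some(client_token.to_string()),
        allow_external_principals: Some(false),
        ..Default::default()
    }
}

fn main() {
    let req = create_share("team-share", "retry-token-001");
    assert_eq!(req.name, "team-share");
    assert!(req.tags.is_none());
    println!("ok");
}
```

Setting `allow_external_principals: Some(false)` restricts sharing to principals inside your own AWS organization.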
Trait Implementations
---
source### impl Clone for CreateResourceShareRequest
source#### fn clone(&self) -> CreateResourceShareRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CreateResourceShareRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CreateResourceShareRequest
source#### fn default() -> CreateResourceShareRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<CreateResourceShareRequest> for CreateResourceShareRequest
source#### fn eq(&self, other: &CreateResourceShareRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateResourceShareRequest) -> bool
This method tests for `!=`.
source### impl Serialize for CreateResourceShareRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for CreateResourceShareRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateResourceShareRequest
### impl Send for CreateResourceShareRequest
### impl Sync for CreateResourceShareRequest
### impl Unpin for CreateResourceShareRequest
### impl UnwindSafe for CreateResourceShareRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_ram::CreateResourceShareResponse
===
```
pub struct CreateResourceShareResponse {
pub client_token: Option<String>,
pub resource_share: Option<ResourceShare>,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`resource_share: Option<ResourceShare>`Information about the resource share.
Trait Implementations
---
source### impl Clone for CreateResourceShareResponse
source#### fn clone(&self) -> CreateResourceShareResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CreateResourceShareResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CreateResourceShareResponse
source#### fn default() -> CreateResourceShareResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for CreateResourceShareResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<CreateResourceShareResponse> for CreateResourceShareResponse
source#### fn eq(&self, other: &CreateResourceShareResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateResourceShareResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateResourceShareResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateResourceShareResponse
### impl Send for CreateResourceShareResponse
### impl Sync for CreateResourceShareResponse
### impl Unpin for CreateResourceShareResponse
### impl UnwindSafe for CreateResourceShareResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::DeleteResourceShareRequest
===
```
pub struct DeleteResourceShareRequest {
pub client_token: Option<String>,
pub resource_share_arn: String,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`resource_share_arn: String`The Amazon Resource Name (ARN) of the resource share.
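Deletion needs only the share ARN; the optional `client_token` exists so that a retried delete is recognized as the same request. A minimal self-contained sketch (local mirror of the struct, placeholder ARN and token):

```rust
// Local stand-in mirroring rusoto_ram::DeleteResourceShareRequest.
#[derive(Default, Debug, Clone, PartialEq)]
struct DeleteResourceShareRequest {
    client_token: Option<String>,
    resource_share_arn: String,
}

// Build a delete request, optionally tagged with an idempotency token.
fn delete_share(share_arn: &str, token: Option<&str>) -> DeleteResourceShareRequest {
    DeleteResourceShareRequest {
        resource_share_arn: share_arn.to_string(),
        client_token: token.map(str::to_string),
    }
}

fn main() {
    let arn = "arn:aws:ram:us-east-1:111122223333:resource-share/example";
    let first = delete_share(arn, Some("tok-1"));
    let retry = delete_share(arn, Some("tok-1"));
    // Identical token + ARN lets the service deduplicate the retry.
    assert_eq!(first, retry);
    println!("ok");
}
```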
Trait Implementations
---
source### impl Clone for DeleteResourceShareRequest
source#### fn clone(&self) -> DeleteResourceShareRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteResourceShareRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteResourceShareRequest
source#### fn default() -> DeleteResourceShareRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteResourceShareRequest> for DeleteResourceShareRequest
source#### fn eq(&self, other: &DeleteResourceShareRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteResourceShareRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DeleteResourceShareRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DeleteResourceShareRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteResourceShareRequest
### impl Send for DeleteResourceShareRequest
### impl Sync for DeleteResourceShareRequest
### impl Unpin for DeleteResourceShareRequest
### impl UnwindSafe for DeleteResourceShareRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::DeleteResourceShareResponse
===
```
pub struct DeleteResourceShareResponse {
pub client_token: Option<String>,
pub return_value: Option<bool>,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`return_value: Option<bool>`Indicates whether the request succeeded.
Trait Implementations
---
source### impl Clone for DeleteResourceShareResponse
source#### fn clone(&self) -> DeleteResourceShareResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteResourceShareResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteResourceShareResponse
source#### fn default() -> DeleteResourceShareResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for DeleteResourceShareResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<DeleteResourceShareResponse> for DeleteResourceShareResponse
source#### fn eq(&self, other: &DeleteResourceShareResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteResourceShareResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteResourceShareResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteResourceShareResponse
### impl Send for DeleteResourceShareResponse
### impl Sync for DeleteResourceShareResponse
### impl Unpin for DeleteResourceShareResponse
### impl UnwindSafe for DeleteResourceShareResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::DisassociateResourceSharePermissionRequest
===
```
pub struct DisassociateResourceSharePermissionRequest {
pub client_token: Option<String>,
pub permission_arn: String,
pub resource_share_arn: String,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`permission_arn: String`The ARN of the permission to disassociate from the resource share.
`resource_share_arn: String`The Amazon Resource Name (ARN) of the resource share.
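Both ARN fields are non-optional, so they must always be supplied; only `client_token` can be left to `Default`. A minimal sketch using a local stand-in that mirrors the struct's shape (the ARN values are placeholders, not real resources):

```rust
// Stand-in mirroring rusoto_ram::DisassociateResourceSharePermissionRequest,
// local so the sketch compiles without the crate.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct DisassociateResourceSharePermissionRequest {
    pub client_token: Option<String>,
    pub permission_arn: String,
    pub resource_share_arn: String,
}

fn main() {
    // The two ARNs are plain `String`s and must be provided explicitly;
    // `..Default::default()` covers the optional idempotency token.
    let req = DisassociateResourceSharePermissionRequest {
        permission_arn: "arn:aws:ram::aws:permission/ExamplePermission".to_string(),
        resource_share_arn: "arn:aws:ram:us-east-1:123456789012:resource-share/example"
            .to_string(),
        ..Default::default()
    };
    assert!(req.client_token.is_none());
    println!("{:?}", req);
}
```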
Trait Implementations
---
source### impl Clone for DisassociateResourceSharePermissionRequest
source#### fn clone(&self) -> DisassociateResourceSharePermissionRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DisassociateResourceSharePermissionRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DisassociateResourceSharePermissionRequest
source#### fn default() -> DisassociateResourceSharePermissionRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DisassociateResourceSharePermissionRequest> for DisassociateResourceSharePermissionRequest
source#### fn eq(&self, other: &DisassociateResourceSharePermissionRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DisassociateResourceSharePermissionRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DisassociateResourceSharePermissionRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DisassociateResourceSharePermissionRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DisassociateResourceSharePermissionRequest
### impl Send for DisassociateResourceSharePermissionRequest
### impl Sync for DisassociateResourceSharePermissionRequest
### impl Unpin for DisassociateResourceSharePermissionRequest
### impl UnwindSafe for DisassociateResourceSharePermissionRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::DisassociateResourceSharePermissionResponse
===
```
pub struct DisassociateResourceSharePermissionResponse {
pub client_token: Option<String>,
pub return_value: Option<bool>,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`return_value: Option<bool>`Indicates whether the request succeeded.
Trait Implementations
---
source### impl Clone for DisassociateResourceSharePermissionResponse
source#### fn clone(&self) -> DisassociateResourceSharePermissionResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DisassociateResourceSharePermissionResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DisassociateResourceSharePermissionResponse
source#### fn default() -> DisassociateResourceSharePermissionResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for DisassociateResourceSharePermissionResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<DisassociateResourceSharePermissionResponse> for DisassociateResourceSharePermissionResponse
source#### fn eq(&self, other: &DisassociateResourceSharePermissionResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DisassociateResourceSharePermissionResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DisassociateResourceSharePermissionResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for DisassociateResourceSharePermissionResponse
### impl Send for DisassociateResourceSharePermissionResponse
### impl Sync for DisassociateResourceSharePermissionResponse
### impl Unpin for DisassociateResourceSharePermissionResponse
### impl UnwindSafe for DisassociateResourceSharePermissionResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::DisassociateResourceShareRequest
===
```
pub struct DisassociateResourceShareRequest {
pub client_token: Option<String>,
pub principals: Option<Vec<String>>,
pub resource_arns: Option<Vec<String>>,
pub resource_share_arn: String,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`principals: Option<Vec<String>>`The principals.
`resource_arns: Option<Vec<String>>`The Amazon Resource Names (ARNs) of the resources.
`resource_share_arn: String`The Amazon Resource Name (ARN) of the resource share.
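Since `principals` and `resource_arns` are both `Option<Vec<String>>`, a request normally populates whichever dimension is being disassociated and leaves the other as `None`. A minimal sketch with a local stand-in mirroring the struct's shape (the ARN and account ID are placeholders):

```rust
// Stand-in mirroring rusoto_ram::DisassociateResourceShareRequest,
// local so the sketch compiles without the crate.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct DisassociateResourceShareRequest {
    pub client_token: Option<String>,
    pub principals: Option<Vec<String>>,
    pub resource_arns: Option<Vec<String>>,
    pub resource_share_arn: String,
}

fn main() {
    // Remove one principal from the share; `resource_arns` stays `None`
    // because no resources are being disassociated in this call.
    let req = DisassociateResourceShareRequest {
        resource_share_arn: "arn:aws:ram:us-east-1:123456789012:resource-share/example"
            .to_string(),
        principals: Some(vec!["210987654321".to_string()]),
        ..Default::default()
    };
    assert_eq!(req.principals.as_ref().map(|p| p.len()), Some(1));
    assert!(req.resource_arns.is_none());
    println!("{:?}", req);
}
```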
Trait Implementations
---
source### impl Clone for DisassociateResourceShareRequest
source#### fn clone(&self) -> DisassociateResourceShareRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DisassociateResourceShareRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DisassociateResourceShareRequest
source#### fn default() -> DisassociateResourceShareRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DisassociateResourceShareRequest> for DisassociateResourceShareRequest
source#### fn eq(&self, other: &DisassociateResourceShareRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DisassociateResourceShareRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DisassociateResourceShareRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DisassociateResourceShareRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DisassociateResourceShareRequest
### impl Send for DisassociateResourceShareRequest
### impl Sync for DisassociateResourceShareRequest
### impl Unpin for DisassociateResourceShareRequest
### impl UnwindSafe for DisassociateResourceShareRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::DisassociateResourceShareResponse
===
```
pub struct DisassociateResourceShareResponse {
pub client_token: Option<String>,
pub resource_share_associations: Option<Vec<ResourceShareAssociation>>,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`resource_share_associations: Option<Vec<ResourceShareAssociation>>`Information about the associations.
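Because the associations list is wrapped in `Option`, callers should treat a missing list the same as an empty one rather than unwrapping. A minimal sketch with local stand-ins (the `ResourceShareAssociation` stand-in is an empty placeholder; the real type carries association details):

```rust
// Placeholder for rusoto_ram::ResourceShareAssociation; the real struct
// has fields describing each association.
#[derive(Clone, Debug, Default)]
pub struct ResourceShareAssociation;

// Stand-in mirroring rusoto_ram::DisassociateResourceShareResponse.
#[derive(Clone, Debug, Default)]
pub struct DisassociateResourceShareResponse {
    pub client_token: Option<String>,
    pub resource_share_associations: Option<Vec<ResourceShareAssociation>>,
}

fn main() {
    let resp = DisassociateResourceShareResponse::default();
    // `as_deref` turns Option<Vec<_>> into Option<&[_]>, so a missing
    // list and an empty list both count as zero associations.
    let n = resp.resource_share_associations.as_deref().map_or(0, |a| a.len());
    assert_eq!(n, 0);
    println!("{} associations affected", n);
}
```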
Trait Implementations
---
source### impl Clone for DisassociateResourceShareResponse
source#### fn clone(&self) -> DisassociateResourceShareResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DisassociateResourceShareResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DisassociateResourceShareResponse
source#### fn default() -> DisassociateResourceShareResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for DisassociateResourceShareResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<DisassociateResourceShareResponse> for DisassociateResourceShareResponse
source#### fn eq(&self, other: &DisassociateResourceShareResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DisassociateResourceShareResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DisassociateResourceShareResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for DisassociateResourceShareResponse
### impl Send for DisassociateResourceShareResponse
### impl Sync for DisassociateResourceShareResponse
### impl Unpin for DisassociateResourceShareResponse
### impl UnwindSafe for DisassociateResourceShareResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::EnableSharingWithAwsOrganizationRequest
===
```
pub struct EnableSharingWithAwsOrganizationRequest {}
```
Trait Implementations
---
source### impl Clone for EnableSharingWithAwsOrganizationRequest
source#### fn clone(&self) -> EnableSharingWithAwsOrganizationRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for EnableSharingWithAwsOrganizationRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for EnableSharingWithAwsOrganizationRequest
source#### fn default() -> EnableSharingWithAwsOrganizationRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<EnableSharingWithAwsOrganizationRequest> for EnableSharingWithAwsOrganizationRequest
source#### fn eq(&self, other: &EnableSharingWithAwsOrganizationRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
source### impl Serialize for EnableSharingWithAwsOrganizationRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for EnableSharingWithAwsOrganizationRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for EnableSharingWithAwsOrganizationRequest
### impl Send for EnableSharingWithAwsOrganizationRequest
### impl Sync for EnableSharingWithAwsOrganizationRequest
### impl Unpin for EnableSharingWithAwsOrganizationRequest
### impl UnwindSafe for EnableSharingWithAwsOrganizationRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::EnableSharingWithAwsOrganizationResponse
===
```
pub struct EnableSharingWithAwsOrganizationResponse {
pub return_value: Option<bool>,
}
```
Fields
---
`return_value: Option<bool>`Indicates whether the request succeeded.
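Since `return_value` is an `Option<bool>`, a caller should default an absent flag to failure instead of unwrapping. A minimal sketch of that check, using a local stand-in mirroring the struct's shape (the same pattern applies to the other responses in this crate that expose `return_value`):

```rust
// Stand-in mirroring rusoto_ram::EnableSharingWithAwsOrganizationResponse,
// local so the sketch compiles without the crate.
#[derive(Clone, Debug, Default)]
pub struct EnableSharingWithAwsOrganizationResponse {
    pub return_value: Option<bool>,
}

fn main() {
    let resp = EnableSharingWithAwsOrganizationResponse { return_value: Some(true) };
    // `unwrap_or(false)` treats a missing flag as "did not succeed"
    // rather than panicking on `unwrap`.
    let succeeded = resp.return_value.unwrap_or(false);
    assert!(succeeded);

    let missing = EnableSharingWithAwsOrganizationResponse::default();
    assert!(!missing.return_value.unwrap_or(false));
    println!("succeeded: {}", succeeded);
}
```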
Trait Implementations
---
source### impl Clone for EnableSharingWithAwsOrganizationResponse
source#### fn clone(&self) -> EnableSharingWithAwsOrganizationResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for EnableSharingWithAwsOrganizationResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for EnableSharingWithAwsOrganizationResponse
source#### fn default() -> EnableSharingWithAwsOrganizationResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for EnableSharingWithAwsOrganizationResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<EnableSharingWithAwsOrganizationResponse> for EnableSharingWithAwsOrganizationResponse
source#### fn eq(&self, other: &EnableSharingWithAwsOrganizationResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &EnableSharingWithAwsOrganizationResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for EnableSharingWithAwsOrganizationResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for EnableSharingWithAwsOrganizationResponse
### impl Send for EnableSharingWithAwsOrganizationResponse
### impl Sync for EnableSharingWithAwsOrganizationResponse
### impl Unpin for EnableSharingWithAwsOrganizationResponse
### impl UnwindSafe for EnableSharingWithAwsOrganizationResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::GetPermissionRequest
===
```
pub struct GetPermissionRequest {
pub permission_arn: String,
pub permission_version: Option<i64>,
}
```
Fields
---
`permission_arn: String`The ARN of the permission.
`permission_version: Option<i64>`The identifier for the version of the permission.
Trait Implementations
---
source### impl Clone for GetPermissionRequest
source#### fn clone(&self) -> GetPermissionRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetPermissionRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetPermissionRequest
source#### fn default() -> GetPermissionRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<GetPermissionRequest> for GetPermissionRequest
source#### fn eq(&self, other: &GetPermissionRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetPermissionRequest) -> bool
This method tests for `!=`.
source### impl Serialize for GetPermissionRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for GetPermissionRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for GetPermissionRequest
### impl Send for GetPermissionRequest
### impl Sync for GetPermissionRequest
### impl Unpin for GetPermissionRequest
### impl UnwindSafe for GetPermissionRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_ram::GetPermissionResponse
===
```
pub struct GetPermissionResponse {
pub permission: Option<ResourceSharePermissionDetail>,
}
```
Fields
---
`permission: Option<ResourceSharePermissionDetail>`Information about the permission.
Trait Implementations
---
source### impl Clone for GetPermissionResponse
source#### fn clone(&self) -> GetPermissionResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetPermissionResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetPermissionResponse
source#### fn default() -> GetPermissionResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for GetPermissionResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<GetPermissionResponse> for GetPermissionResponse
source#### fn eq(&self, other: &GetPermissionResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetPermissionResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetPermissionResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for GetPermissionResponse
### impl Send for GetPermissionResponse
### impl Sync for GetPermissionResponse
### impl Unpin for GetPermissionResponse
### impl UnwindSafe for GetPermissionResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::GetResourcePoliciesRequest
===
```
pub struct GetResourcePoliciesRequest {
pub max_results: Option<i64>,
pub next_token: Option<String>,
pub principal: Option<String>,
pub resource_arns: Vec<String>,
}
```
Fields
---
`max_results: Option<i64>`The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned `nextToken` value.
`next_token: Option<String>`The token for the next page of results.
`principal: Option<String>`The principal.
`resource_arns: Vec<String>`The Amazon Resource Names (ARN) of the resources.
Trait Implementations
---
source### impl Clone for GetResourcePoliciesRequest
source#### fn clone(&self) -> GetResourcePoliciesRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetResourcePoliciesRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetResourcePoliciesRequest
source#### fn default() -> GetResourcePoliciesRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<GetResourcePoliciesRequest> for GetResourcePoliciesRequest
source#### fn eq(&self, other: &GetResourcePoliciesRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourcePoliciesRequest) -> bool
This method tests for `!=`.
source### impl Serialize for GetResourcePoliciesRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for GetResourcePoliciesRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourcePoliciesRequest
### impl Send for GetResourcePoliciesRequest
### impl Sync for GetResourcePoliciesRequest
### impl Unpin for GetResourcePoliciesRequest
### impl UnwindSafe for GetResourcePoliciesRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_ram::GetResourcePoliciesResponse
===
```
pub struct GetResourcePoliciesResponse {
pub next_token: Option<String>,
pub policies: Option<Vec<String>>,
}
```
Fields
---
`next_token: Option<String>`The token to use to retrieve the next page of results. This value is `null` when there are no more results to return.
`policies: Option<Vec<String>>`A key policy document, in JSON format.
Trait Implementations
---
source### impl Clone for GetResourcePoliciesResponse
source#### fn clone(&self) -> GetResourcePoliciesResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetResourcePoliciesResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetResourcePoliciesResponse
source#### fn default() -> GetResourcePoliciesResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for GetResourcePoliciesResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<GetResourcePoliciesResponse> for GetResourcePoliciesResponse
source#### fn eq(&self, other: &GetResourcePoliciesResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourcePoliciesResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetResourcePoliciesResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourcePoliciesResponse
### impl Send for GetResourcePoliciesResponse
### impl Sync for GetResourcePoliciesResponse
### impl Unpin for GetResourcePoliciesResponse
### impl UnwindSafe for GetResourcePoliciesResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::GetResourceShareAssociationsRequest
===
```
pub struct GetResourceShareAssociationsRequest {
pub association_status: Option<String>,
pub association_type: String,
pub max_results: Option<i64>,
pub next_token: Option<String>,
pub principal: Option<String>,
pub resource_arn: Option<String>,
pub resource_share_arns: Option<Vec<String>>,
}
```
Fields
---
`association_status: Option<String>`The association status.
`association_type: String`The association type. Specify `PRINCIPAL` to list the principals that are associated with the specified resource share. Specify `RESOURCE` to list the resources that are associated with the specified resource share.
`max_results: Option<i64>`The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned `nextToken` value.
`next_token: Option<String>`The token for the next page of results.
`principal: Option<String>`The principal. You cannot specify this parameter if the association type is `RESOURCE`.
`resource_arn: Option<String>`The Amazon Resource Name (ARN) of the resource. You cannot specify this parameter if the association type is `PRINCIPAL`.
`resource_share_arns: Option<Vec<String>>`The Amazon Resource Names (ARN) of the resource shares.
Trait Implementations
---
source### impl Clone for GetResourceShareAssociationsRequest
source#### fn clone(&self) -> GetResourceShareAssociationsRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetResourceShareAssociationsRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetResourceShareAssociationsRequest
source#### fn default() -> GetResourceShareAssociationsRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<GetResourceShareAssociationsRequest> for GetResourceShareAssociationsRequest
source#### fn eq(&self, other: &GetResourceShareAssociationsRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourceShareAssociationsRequest) -> bool
This method tests for `!=`.
source### impl Serialize for GetResourceShareAssociationsRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for GetResourceShareAssociationsRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourceShareAssociationsRequest
### impl Send for GetResourceShareAssociationsRequest
### impl Sync for GetResourceShareAssociationsRequest
### impl Unpin for GetResourceShareAssociationsRequest
### impl UnwindSafe for GetResourceShareAssociationsRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_ram::GetResourceShareAssociationsResponse
===
```
pub struct GetResourceShareAssociationsResponse {
pub next_token: Option<String>,
pub resource_share_associations: Option<Vec<ResourceShareAssociation>>,
}
```
Fields
---
`next_token: Option<String>`The token to use to retrieve the next page of results. This value is `null` when there are no more results to return.
`resource_share_associations: Option<Vec<ResourceShareAssociation>>`Information about the associations.
Trait Implementations
---
source### impl Clone for GetResourceShareAssociationsResponse
source#### fn clone(&self) -> GetResourceShareAssociationsResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetResourceShareAssociationsResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetResourceShareAssociationsResponse
source#### fn default() -> GetResourceShareAssociationsResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for GetResourceShareAssociationsResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<GetResourceShareAssociationsResponse> for GetResourceShareAssociationsResponse
source#### fn eq(&self, other: &GetResourceShareAssociationsResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourceShareAssociationsResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetResourceShareAssociationsResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourceShareAssociationsResponse
### impl Send for GetResourceShareAssociationsResponse
### impl Sync for GetResourceShareAssociationsResponse
### impl Unpin for GetResourceShareAssociationsResponse
### impl UnwindSafe for GetResourceShareAssociationsResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::GetResourceShareInvitationsRequest
===
```
pub struct GetResourceShareInvitationsRequest {
pub max_results: Option<i64>,
pub next_token: Option<String>,
pub resource_share_arns: Option<Vec<String>>,
pub resource_share_invitation_arns: Option<Vec<String>>,
}
```
Fields
---
`max_results: Option<i64>`The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned `nextToken` value.
`next_token: Option<String>`The token for the next page of results.
`resource_share_arns: Option<Vec<String>>`The Amazon Resource Names (ARNs) of the resource shares.
`resource_share_invitation_arns: Option<Vec<String>>`The Amazon Resource Names (ARNs) of the invitations.
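All four fields are optional, so a request is usually built with a struct literal plus `..Default::default()`. A minimal sketch follows; the struct is mirrored locally so the snippet compiles standalone, whereas real code would `use rusoto_ram::GetResourceShareInvitationsRequest;` instead, and the `first_page` helper is purely illustrative:

```rust
// Local mirror of the struct documented above, so this sketch is self-contained.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct GetResourceShareInvitationsRequest {
    pub max_results: Option<i64>,
    pub next_token: Option<String>,
    pub resource_share_arns: Option<Vec<String>>,
    pub resource_share_invitation_arns: Option<Vec<String>>,
}

// Build a first-page request asking for at most `page_size` invitations.
pub fn first_page(page_size: i64) -> GetResourceShareInvitationsRequest {
    GetResourceShareInvitationsRequest {
        max_results: Some(page_size),
        ..Default::default()
    }
}

fn main() {
    let request = first_page(20);
    assert_eq!(request.max_results, Some(20));
    assert!(request.next_token.is_none()); // no token yet on the first call
    println!("{:?}", request);
}
```

Subsequent pages would set `next_token` to the value returned in the previous response.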
Trait Implementations
---
source### impl Clone for GetResourceShareInvitationsRequest
source#### fn clone(&self) -> GetResourceShareInvitationsRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetResourceShareInvitationsRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetResourceShareInvitationsRequest
source#### fn default() -> GetResourceShareInvitationsRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<GetResourceShareInvitationsRequest> for GetResourceShareInvitationsRequest
source#### fn eq(&self, other: &GetResourceShareInvitationsRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourceShareInvitationsRequest) -> bool
This method tests for `!=`.
source### impl Serialize for GetResourceShareInvitationsRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for GetResourceShareInvitationsRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourceShareInvitationsRequest
### impl Send for GetResourceShareInvitationsRequest
### impl Sync for GetResourceShareInvitationsRequest
### impl Unpin for GetResourceShareInvitationsRequest
### impl UnwindSafe for GetResourceShareInvitationsRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::GetResourceShareInvitationsResponse
===
```
pub struct GetResourceShareInvitationsResponse {
pub next_token: Option<String>,
pub resource_share_invitations: Option<Vec<ResourceShareInvitation>>,
}
```
Fields
---
`next_token: Option<String>`The token to use to retrieve the next page of results. This value is `null` when there are no more results to return.
`resource_share_invitations: Option<Vec<ResourceShareInvitation>>`Information about the invitations.
Trait Implementations
---
source### impl Clone for GetResourceShareInvitationsResponse
source#### fn clone(&self) -> GetResourceShareInvitationsResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetResourceShareInvitationsResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetResourceShareInvitationsResponse
source#### fn default() -> GetResourceShareInvitationsResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for GetResourceShareInvitationsResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<GetResourceShareInvitationsResponse> for GetResourceShareInvitationsResponse
source#### fn eq(&self, other: &GetResourceShareInvitationsResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourceShareInvitationsResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetResourceShareInvitationsResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourceShareInvitationsResponse
### impl Send for GetResourceShareInvitationsResponse
### impl Sync for GetResourceShareInvitationsResponse
### impl Unpin for GetResourceShareInvitationsResponse
### impl UnwindSafe for GetResourceShareInvitationsResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::GetResourceSharesRequest
===
```
pub struct GetResourceSharesRequest {
pub max_results: Option<i64>,
pub name: Option<String>,
pub next_token: Option<String>,
pub permission_arn: Option<String>,
pub resource_owner: String,
pub resource_share_arns: Option<Vec<String>>,
pub resource_share_status: Option<String>,
pub tag_filters: Option<Vec<TagFilter>>,
}
```
Fields
---
`max_results: Option<i64>`The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned `nextToken` value.
`name: Option<String>`The name of the resource share.
`next_token: Option<String>`The token for the next page of results.
`permission_arn: Option<String>`The Amazon Resource Name (ARN) of the AWS RAM permission that is associated with the resource share.
`resource_owner: String`The type of owner.
`resource_share_arns: Option<Vec<String>>`The ARNs of the resource shares.
`resource_share_status: Option<String>`The status of the resource share.
`tag_filters: Option<Vec<TagFilter>>`One or more tag filters.
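`resource_owner` is the only required field; everything else can be left to `Default::default()`. A hedged sketch follows, with the types mirrored locally so it compiles standalone: `TagFilter` is reduced to a stub (the real type carries tag keys and values), the `"SELF"` owner string and `"ACTIVE"` status are assumptions about the service's accepted values, and `owned_shares_with_status` is an illustrative helper, not part of the crate:

```rust
// Local mirrors of the documented types, so this sketch is self-contained.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct TagFilter; // stub: the real type has tag-key/tag-value fields

#[derive(Clone, Debug, Default, PartialEq)]
pub struct GetResourceSharesRequest {
    pub max_results: Option<i64>,
    pub name: Option<String>,
    pub next_token: Option<String>,
    pub permission_arn: Option<String>,
    pub resource_owner: String,
    pub resource_share_arns: Option<Vec<String>>,
    pub resource_share_status: Option<String>,
    pub tag_filters: Option<Vec<TagFilter>>,
}

// List resource shares owned by the caller, filtered by status.
pub fn owned_shares_with_status(status: &str) -> GetResourceSharesRequest {
    GetResourceSharesRequest {
        resource_owner: "SELF".to_string(), // assumed accepted owner value
        resource_share_status: Some(status.to_string()),
        ..Default::default()
    }
}

fn main() {
    let request = owned_shares_with_status("ACTIVE");
    assert_eq!(request.resource_owner, "SELF");
    assert_eq!(request.resource_share_status.as_deref(), Some("ACTIVE"));
}
```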
Trait Implementations
---
source### impl Clone for GetResourceSharesRequest
source#### fn clone(&self) -> GetResourceSharesRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetResourceSharesRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetResourceSharesRequest
source#### fn default() -> GetResourceSharesRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<GetResourceSharesRequest> for GetResourceSharesRequest
source#### fn eq(&self, other: &GetResourceSharesRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourceSharesRequest) -> bool
This method tests for `!=`.
source### impl Serialize for GetResourceSharesRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for GetResourceSharesRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourceSharesRequest
### impl Send for GetResourceSharesRequest
### impl Sync for GetResourceSharesRequest
### impl Unpin for GetResourceSharesRequest
### impl UnwindSafe for GetResourceSharesRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::GetResourceSharesResponse
===
```
pub struct GetResourceSharesResponse {
pub next_token: Option<String>,
pub resource_shares: Option<Vec<ResourceShare>>,
}
```
Fields
---
`next_token: Option<String>`The token to use to retrieve the next page of results. This value is `null` when there are no more results to return.
`resource_shares: Option<Vec<ResourceShare>>`Information about the resource shares.
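The `next_token` contract above drives the usual drain-all-pages loop: keep issuing calls, feeding each response's token into the next request, until the token comes back `None`. A self-contained sketch, with the response mirrored locally, `ResourceShare` reduced to a stub, and the network call replaced by a canned two-page source (`fetch_page` and `collect_all_shares` are illustrative names, not crate APIs):

```rust
#[derive(Clone, Debug, Default, PartialEq)]
pub struct ResourceShare; // stub for the documented ResourceShare type

#[derive(Clone, Debug, Default, PartialEq)]
pub struct GetResourceSharesResponse {
    pub next_token: Option<String>,
    pub resource_shares: Option<Vec<ResourceShare>>,
}

// Stand-in for the real service call: serves two pages, then stops.
fn fetch_page(token: Option<&str>) -> GetResourceSharesResponse {
    match token {
        None => GetResourceSharesResponse {
            next_token: Some("page-2".to_string()),
            resource_shares: Some(vec![ResourceShare, ResourceShare]),
        },
        Some(_) => GetResourceSharesResponse {
            next_token: None, // None token: no more results to return
            resource_shares: Some(vec![ResourceShare]),
        },
    }
}

pub fn collect_all_shares() -> Vec<ResourceShare> {
    let mut shares = Vec::new();
    let mut token: Option<String> = None;
    loop {
        let response = fetch_page(token.as_deref());
        shares.extend(response.resource_shares.unwrap_or_default());
        match response.next_token {
            Some(t) => token = Some(t), // more pages: carry the token forward
            None => break,              // contract: None ends pagination
        }
    }
    shares
}

fn main() {
    assert_eq!(collect_all_shares().len(), 3);
}
```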
Trait Implementations
---
source### impl Clone for GetResourceSharesResponse
source#### fn clone(&self) -> GetResourceSharesResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetResourceSharesResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetResourceSharesResponse
source#### fn default() -> GetResourceSharesResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for GetResourceSharesResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<GetResourceSharesResponse> for GetResourceSharesResponse
source#### fn eq(&self, other: &GetResourceSharesResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourceSharesResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetResourceSharesResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourceSharesResponse
### impl Send for GetResourceSharesResponse
### impl Sync for GetResourceSharesResponse
### impl Unpin for GetResourceSharesResponse
### impl UnwindSafe for GetResourceSharesResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ListPendingInvitationResourcesRequest
===
```
pub struct ListPendingInvitationResourcesRequest {
pub max_results: Option<i64>,
pub next_token: Option<String>,
pub resource_share_invitation_arn: String,
}
```
Fields
---
`max_results: Option<i64>`The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned `nextToken` value.
`next_token: Option<String>`The token for the next page of results.
`resource_share_invitation_arn: String`The Amazon Resource Name (ARN) of the invitation.
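Here `resource_share_invitation_arn` is required while the pagination fields stay optional. A minimal sketch (struct mirrored locally for a self-contained example; the ARN shown is a made-up placeholder and `resources_for_invitation` is an illustrative helper):

```rust
#[derive(Clone, Debug, Default, PartialEq)]
pub struct ListPendingInvitationResourcesRequest {
    pub max_results: Option<i64>,
    pub next_token: Option<String>,
    pub resource_share_invitation_arn: String,
}

// Build a request for the resources attached to one pending invitation.
pub fn resources_for_invitation(invitation_arn: &str) -> ListPendingInvitationResourcesRequest {
    ListPendingInvitationResourcesRequest {
        resource_share_invitation_arn: invitation_arn.to_string(),
        ..Default::default() // pagination fields start unset
    }
}

fn main() {
    // Placeholder ARN, for illustration only.
    let arn = "arn:aws:ram:us-east-1:123456789012:resource-share-invitation/example";
    let request = resources_for_invitation(arn);
    assert_eq!(request.resource_share_invitation_arn, arn);
    assert!(request.max_results.is_none());
}
```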
Trait Implementations
---
source### impl Clone for ListPendingInvitationResourcesRequest
source#### fn clone(&self) -> ListPendingInvitationResourcesRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListPendingInvitationResourcesRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListPendingInvitationResourcesRequest
source#### fn default() -> ListPendingInvitationResourcesRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<ListPendingInvitationResourcesRequest> for ListPendingInvitationResourcesRequest
source#### fn eq(&self, other: &ListPendingInvitationResourcesRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListPendingInvitationResourcesRequest) -> bool
This method tests for `!=`.
source### impl Serialize for ListPendingInvitationResourcesRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListPendingInvitationResourcesRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListPendingInvitationResourcesRequest
### impl Send for ListPendingInvitationResourcesRequest
### impl Sync for ListPendingInvitationResourcesRequest
### impl Unpin for ListPendingInvitationResourcesRequest
### impl UnwindSafe for ListPendingInvitationResourcesRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::ListPendingInvitationResourcesResponse
===
```
pub struct ListPendingInvitationResourcesResponse {
pub next_token: Option<String>,
pub resources: Option<Vec<Resource>>,
}
```
Fields
---
`next_token: Option<String>`The token to use to retrieve the next page of results. This value is `null` when there are no more results to return.
`resources: Option<Vec<Resource>>`Information about the resources included in the resource share.
Trait Implementations
---
source### impl Clone for ListPendingInvitationResourcesResponse
source#### fn clone(&self) -> ListPendingInvitationResourcesResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListPendingInvitationResourcesResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListPendingInvitationResourcesResponse
source#### fn default() -> ListPendingInvitationResourcesResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListPendingInvitationResourcesResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListPendingInvitationResourcesResponse> for ListPendingInvitationResourcesResponse
source#### fn eq(&self, other: &ListPendingInvitationResourcesResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListPendingInvitationResourcesResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListPendingInvitationResourcesResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListPendingInvitationResourcesResponse
### impl Send for ListPendingInvitationResourcesResponse
### impl Sync for ListPendingInvitationResourcesResponse
### impl Unpin for ListPendingInvitationResourcesResponse
### impl UnwindSafe for ListPendingInvitationResourcesResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ListPermissionsRequest
===
```
pub struct ListPermissionsRequest {
pub max_results: Option<i64>,
pub next_token: Option<String>,
pub resource_type: Option<String>,
}
```
Fields
---
`max_results: Option<i64>`The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned `nextToken` value.
`next_token: Option<String>`The token for the next page of results.
`resource_type: Option<String>`Specifies the resource type for which to list permissions. For example, to list only permissions that apply to EC2 subnets, specify `ec2:Subnet`.
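The `resource_type` filter takes the `service:Type` form shown above. A minimal sketch using the `ec2:Subnet` example from the field documentation (struct mirrored locally so the snippet compiles standalone; `subnet_permissions` is an illustrative helper):

```rust
#[derive(Clone, Debug, Default, PartialEq)]
pub struct ListPermissionsRequest {
    pub max_results: Option<i64>,
    pub next_token: Option<String>,
    pub resource_type: Option<String>,
}

// Request only the permissions that apply to EC2 subnets.
pub fn subnet_permissions() -> ListPermissionsRequest {
    ListPermissionsRequest {
        resource_type: Some("ec2:Subnet".to_string()),
        ..Default::default()
    }
}

fn main() {
    let request = subnet_permissions();
    assert_eq!(request.resource_type.as_deref(), Some("ec2:Subnet"));
    assert!(request.next_token.is_none());
}
```

Leaving `resource_type` as `None` would list permissions for all resource types instead.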
Trait Implementations
---
source### impl Clone for ListPermissionsRequest
source#### fn clone(&self) -> ListPermissionsRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListPermissionsRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListPermissionsRequest
source#### fn default() -> ListPermissionsRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<ListPermissionsRequest> for ListPermissionsRequest
source#### fn eq(&self, other: &ListPermissionsRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListPermissionsRequest) -> bool
This method tests for `!=`.
source### impl Serialize for ListPermissionsRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListPermissionsRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListPermissionsRequest
### impl Send for ListPermissionsRequest
### impl Sync for ListPermissionsRequest
### impl Unpin for ListPermissionsRequest
### impl UnwindSafe for ListPermissionsRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::ListPermissionsResponse
===
```
pub struct ListPermissionsResponse {
pub next_token: Option<String>,
pub permissions: Option<Vec<ResourceSharePermissionSummary>>,
}
```
Fields
---
`next_token: Option<String>`
The token to use to retrieve the next page of results. This value is `null` when there are no more results to return.
`permissions: Option<Vec<ResourceSharePermissionSummary>>`
Information about the permissions.
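The `next_token` contract above is the same for every `List*` call in this crate: pass the returned token back until it comes back as `None`. The loop can be sketched with a stdlib-only mock — `MockPager` is a hypothetical stand-in for the real `RamClient`, which would issue network calls instead:

```rust
// Stand-in for a paginated List* API: each call returns one page of items
// plus an optional token identifying the next page.
struct MockPager {
    pages: Vec<Vec<&'static str>>,
}

impl MockPager {
    // Mirrors the shape of a List*Response: (items, next_token).
    fn list(&self, next_token: Option<usize>) -> (Vec<&'static str>, Option<usize>) {
        let idx = next_token.unwrap_or(0);
        let page = self.pages[idx].clone();
        let next = if idx + 1 < self.pages.len() { Some(idx + 1) } else { None };
        (page, next)
    }
}

// Drain every page by feeding the returned token back into the next call.
fn collect_all(pager: &MockPager) -> Vec<&'static str> {
    let mut out = Vec::new();
    let mut token = None;
    loop {
        let (page, next) = pager.list(token);
        out.extend(page);
        match next {
            Some(t) => token = Some(t),
            None => break, // next_token is None: no more results
        }
    }
    out
}

fn main() {
    let pager = MockPager {
        pages: vec![vec!["perm-a", "perm-b"], vec!["perm-c"]],
    };
    assert_eq!(collect_all(&pager), vec!["perm-a", "perm-b", "perm-c"]);
}
```

With the real client, the token would be the `next_token: Option<String>` field shown above, copied from each response into the next request.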
Trait Implementations
---
### impl Clone for ListPermissionsResponse
#### fn clone(&self) -> ListPermissionsResponse
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for ListPermissionsResponse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for ListPermissionsResponse
#### fn default() -> ListPermissionsResponse
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for ListPermissionsResponse
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<ListPermissionsResponse> for ListPermissionsResponse
#### fn eq(&self, other: &ListPermissionsResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &ListPermissionsResponse) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListPermissionsResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListPermissionsResponse
### impl Send for ListPermissionsResponse
### impl Sync for ListPermissionsResponse
### impl Unpin for ListPermissionsResponse
### impl UnwindSafe for ListPermissionsResponse
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ListPrincipalsRequest
===
```
pub struct ListPrincipalsRequest {
pub max_results: Option<i64>,
pub next_token: Option<String>,
pub principals: Option<Vec<String>>,
pub resource_arn: Option<String>,
pub resource_owner: String,
pub resource_share_arns: Option<Vec<String>>,
pub resource_type: Option<String>,
}
```
Fields
---
`max_results: Option<i64>`
The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned `nextToken` value.
`next_token: Option<String>`
The token for the next page of results.
`principals: Option<Vec<String>>`
The principals.
`resource_arn: Option<String>`
The Amazon Resource Name (ARN) of the resource.
`resource_owner: String`
The type of owner.
`resource_share_arns: Option<Vec<String>>`
The Amazon Resource Names (ARNs) of the resource shares.
`resource_type: Option<String>`
The resource type.
Valid values: `acm-pca:CertificateAuthority` | `appmesh:Mesh` | `codebuild:Project` | `codebuild:ReportGroup` | `ec2:CapacityReservation` | `ec2:DedicatedHost` | `ec2:LocalGatewayRouteTable` | `ec2:PrefixList` | `ec2:Subnet` | `ec2:TrafficMirrorTarget` | `ec2:TransitGateway` | `imagebuilder:Component` | `imagebuilder:Image` | `imagebuilder:ImageRecipe` | `imagebuilder:ContainerRecipe` | `glue:Catalog` | `glue:Database` | `glue:Table` | `license-manager:LicenseConfiguration` | `network-firewall:FirewallPolicy` | `network-firewall:StatefulRuleGroup` | `network-firewall:StatelessRuleGroup` | `outposts:Outpost` | `resource-groups:Group` | `rds:Cluster` | `route53resolver:ResolverQueryLogConfig` | `route53resolver:ResolverRule`
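Because the request types derive `Default`, the idiomatic way to build one is struct-update syntax: set only the required field (`resource_owner`) plus any filters, and fill the rest with `..Default::default()`. A minimal sketch, with `ListPrincipalsRequest` mirrored locally so it compiles without the `rusoto_ram` crate:

```rust
// Local mirror of rusoto_ram::ListPrincipalsRequest, used here only so the
// sketch is self-contained; the real type has the same fields and derives.
#[derive(Default, Debug, PartialEq)]
struct ListPrincipalsRequest {
    max_results: Option<i64>,
    next_token: Option<String>,
    principals: Option<Vec<String>>,
    resource_arn: Option<String>,
    resource_owner: String,
    resource_share_arns: Option<Vec<String>>,
    resource_type: Option<String>,
}

fn main() {
    // resource_owner is the only non-Option field; everything else
    // defaults to None via struct-update syntax.
    let req = ListPrincipalsRequest {
        resource_owner: "SELF".to_string(),
        resource_type: Some("ec2:Subnet".to_string()),
        max_results: Some(50),
        ..Default::default()
    };
    assert_eq!(req.resource_owner, "SELF");
    assert_eq!(req.next_token, None);
}
```

The same construction pattern applies to every request struct on this page, since all of them implement `Default`.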
Trait Implementations
---
### impl Clone for ListPrincipalsRequest
#### fn clone(&self) -> ListPrincipalsRequest
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for ListPrincipalsRequest
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for ListPrincipalsRequest
#### fn default() -> ListPrincipalsRequest
Returns the “default value” for a type. Read more
### impl PartialEq<ListPrincipalsRequest> for ListPrincipalsRequest
#### fn eq(&self, other: &ListPrincipalsRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &ListPrincipalsRequest) -> bool
This method tests for `!=`.
### impl Serialize for ListPrincipalsRequest
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for ListPrincipalsRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListPrincipalsRequest
### impl Send for ListPrincipalsRequest
### impl Sync for ListPrincipalsRequest
### impl Unpin for ListPrincipalsRequest
### impl UnwindSafe for ListPrincipalsRequest
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_ram::ListPrincipalsResponse
===
```
pub struct ListPrincipalsResponse {
pub next_token: Option<String>,
pub principals: Option<Vec<Principal>>,
}
```
Fields
---
`next_token: Option<String>`
The token to use to retrieve the next page of results. This value is `null` when there are no more results to return.
`principals: Option<Vec<Principal>>`
The principals.
Trait Implementations
---
### impl Clone for ListPrincipalsResponse
#### fn clone(&self) -> ListPrincipalsResponse
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for ListPrincipalsResponse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for ListPrincipalsResponse
#### fn default() -> ListPrincipalsResponse
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for ListPrincipalsResponse
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<ListPrincipalsResponse> for ListPrincipalsResponse
#### fn eq(&self, other: &ListPrincipalsResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &ListPrincipalsResponse) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListPrincipalsResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListPrincipalsResponse
### impl Send for ListPrincipalsResponse
### impl Sync for ListPrincipalsResponse
### impl Unpin for ListPrincipalsResponse
### impl UnwindSafe for ListPrincipalsResponse
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ListResourceSharePermissionsRequest
===
```
pub struct ListResourceSharePermissionsRequest {
pub max_results: Option<i64>,
pub next_token: Option<String>,
pub resource_share_arn: String,
}
```
Fields
---
`max_results: Option<i64>`
The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned `nextToken` value.
`next_token: Option<String>`
The token for the next page of results.
`resource_share_arn: String`
The Amazon Resource Name (ARN) of the resource share.
Trait Implementations
---
### impl Clone for ListResourceSharePermissionsRequest
#### fn clone(&self) -> ListResourceSharePermissionsRequest
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for ListResourceSharePermissionsRequest
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for ListResourceSharePermissionsRequest
#### fn default() -> ListResourceSharePermissionsRequest
Returns the “default value” for a type. Read more
### impl PartialEq<ListResourceSharePermissionsRequest> for ListResourceSharePermissionsRequest
#### fn eq(&self, other: &ListResourceSharePermissionsRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &ListResourceSharePermissionsRequest) -> bool
This method tests for `!=`.
### impl Serialize for ListResourceSharePermissionsRequest
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for ListResourceSharePermissionsRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListResourceSharePermissionsRequest
### impl Send for ListResourceSharePermissionsRequest
### impl Sync for ListResourceSharePermissionsRequest
### impl Unpin for ListResourceSharePermissionsRequest
### impl UnwindSafe for ListResourceSharePermissionsRequest
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_ram::ListResourceSharePermissionsResponse
===
```
pub struct ListResourceSharePermissionsResponse {
pub next_token: Option<String>,
pub permissions: Option<Vec<ResourceSharePermissionSummary>>,
}
```
Fields
---
`next_token: Option<String>`
The token to use to retrieve the next page of results. This value is `null` when there are no more results to return.
`permissions: Option<Vec<ResourceSharePermissionSummary>>`
The permissions associated with the resource share.
Trait Implementations
---
### impl Clone for ListResourceSharePermissionsResponse
#### fn clone(&self) -> ListResourceSharePermissionsResponse
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for ListResourceSharePermissionsResponse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for ListResourceSharePermissionsResponse
#### fn default() -> ListResourceSharePermissionsResponse
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for ListResourceSharePermissionsResponse
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<ListResourceSharePermissionsResponse> for ListResourceSharePermissionsResponse
#### fn eq(&self, other: &ListResourceSharePermissionsResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &ListResourceSharePermissionsResponse) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListResourceSharePermissionsResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListResourceSharePermissionsResponse
### impl Send for ListResourceSharePermissionsResponse
### impl Sync for ListResourceSharePermissionsResponse
### impl Unpin for ListResourceSharePermissionsResponse
### impl UnwindSafe for ListResourceSharePermissionsResponse
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ListResourceTypesRequest
===
```
pub struct ListResourceTypesRequest {
pub max_results: Option<i64>,
pub next_token: Option<String>,
}
```
Fields
---
`max_results: Option<i64>`
The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned `nextToken` value.
`next_token: Option<String>`
The token for the next page of results.
Trait Implementations
---
### impl Clone for ListResourceTypesRequest
#### fn clone(&self) -> ListResourceTypesRequest
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for ListResourceTypesRequest
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for ListResourceTypesRequest
#### fn default() -> ListResourceTypesRequest
Returns the “default value” for a type. Read more
### impl PartialEq<ListResourceTypesRequest> for ListResourceTypesRequest
#### fn eq(&self, other: &ListResourceTypesRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &ListResourceTypesRequest) -> bool
This method tests for `!=`.
### impl Serialize for ListResourceTypesRequest
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
### impl StructuralPartialEq for ListResourceTypesRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListResourceTypesRequest
### impl Send for ListResourceTypesRequest
### impl Sync for ListResourceTypesRequest
### impl Unpin for ListResourceTypesRequest
### impl UnwindSafe for ListResourceTypesRequest
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_ram::ListResourceTypesResponse
===
```
pub struct ListResourceTypesResponse {
pub next_token: Option<String>,
pub resource_types: Option<Vec<ServiceNameAndResourceType>>,
}
```
Fields
---
`next_token: Option<String>`
The token to use to retrieve the next page of results. This value is `null` when there are no more results to return.
`resource_types: Option<Vec<ServiceNameAndResourceType>>`
The shareable resource types supported by AWS RAM.
Trait Implementations
---
### impl Clone for ListResourceTypesResponse
#### fn clone(&self) -> ListResourceTypesResponse
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for ListResourceTypesResponse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Default for ListResourceTypesResponse
#### fn default() -> ListResourceTypesResponse
Returns the “default value” for a type. Read more
### impl<'de> Deserialize<'de> for ListResourceTypesResponse
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
### impl PartialEq<ListResourceTypesResponse> for ListResourceTypesResponse
#### fn eq(&self, other: &ListResourceTypesResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &ListResourceTypesResponse) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListResourceTypesResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListResourceTypesResponse
### impl Send for ListResourceTypesResponse
### impl Sync for ListResourceTypesResponse
### impl Unpin for ListResourceTypesResponse
### impl UnwindSafe for ListResourceTypesResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ListResourcesRequest
===
```
pub struct ListResourcesRequest {
pub max_results: Option<i64>,
pub next_token: Option<String>,
pub principal: Option<String>,
pub resource_arns: Option<Vec<String>>,
pub resource_owner: String,
pub resource_share_arns: Option<Vec<String>>,
pub resource_type: Option<String>,
}
```
Fields
---
`max_results: Option<i64>`The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned `nextToken` value.
`next_token: Option<String>`The token for the next page of results.
`principal: Option<String>`The principal.
`resource_arns: Option<Vec<String>>`The Amazon Resource Names (ARN) of the resources.
`resource_owner: String`The type of owner.
`resource_share_arns: Option<Vec<String>>`The Amazon Resource Names (ARN) of the resource shares.
`resource_type: Option<String>`The resource type.
Valid values: `acm-pca:CertificateAuthority` | `appmesh:Mesh` | `codebuild:Project` | `codebuild:ReportGroup` | `ec2:CapacityReservation` | `ec2:DedicatedHost` | `ec2:LocalGatewayRouteTable` | `ec2:PrefixList` | `ec2:Subnet` | `ec2:TrafficMirrorTarget` | `ec2:TransitGateway` | `imagebuilder:Component` | `imagebuilder:Image` | `imagebuilder:ImageRecipe` | `imagebuilder:ContainerRecipe` | `glue:Catalog` | `glue:Database` | `glue:Table` | `license-manager:LicenseConfiguration` | `network-firewall:FirewallPolicy` | `network-firewall:StatefulRuleGroup` | `network-firewall:StatelessRuleGroup` | `outposts:Outpost` | `resource-groups:Group` | `rds:Cluster` | `route53resolver:ResolverQueryLogConfig` | `route53resolver:ResolverRule`
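Since every field except `resource_owner` is optional and the struct implements `Default`, requests are usually built with struct-update syntax. A minimal sketch — it uses a local mirror of the struct so the snippet is self-contained; in real code you would construct `rusoto_ram::ListResourcesRequest` the same way, and the `self_owned_subnets` helper is purely illustrative:
```rust
// Illustrative mirror of ListResourcesRequest (field names match the struct
// documented above); the real type lives in the rusoto_ram crate.
#[derive(Default, Debug, PartialEq)]
struct ListResourcesRequest {
    max_results: Option<i64>,
    next_token: Option<String>,
    principal: Option<String>,
    resource_arns: Option<Vec<String>>,
    resource_owner: String,
    resource_share_arns: Option<Vec<String>>,
    resource_type: Option<String>,
}
fn self_owned_subnets() -> ListResourcesRequest {
    // Only resource_owner is required; everything else keeps its default (None).
    ListResourcesRequest {
        resource_owner: "SELF".to_string(),
        resource_type: Some("ec2:Subnet".to_string()),
        max_results: Some(50),
        ..Default::default()
    }
}
fn main() {
    println!("{:?}", self_owned_subnets());
}
```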
Trait Implementations
---
source### impl Clone for ListResourcesRequest
source#### fn clone(&self) -> ListResourcesRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListResourcesRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListResourcesRequest
source#### fn default() -> ListResourcesRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<ListResourcesRequest> for ListResourcesRequest
source#### fn eq(&self, other: &ListResourcesRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListResourcesRequest) -> bool
This method tests for `!=`.
source### impl Serialize for ListResourcesRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListResourcesRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListResourcesRequest
### impl Send for ListResourcesRequest
### impl Sync for ListResourcesRequest
### impl Unpin for ListResourcesRequest
### impl UnwindSafe for ListResourcesRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::ListResourcesResponse
===
```
pub struct ListResourcesResponse {
pub next_token: Option<String>,
pub resources: Option<Vec<Resource>>,
}
```
Fields
---
`next_token: Option<String>`The token to use to retrieve the next page of results. This value is `null` when there are no more results to return.
`resources: Option<Vec<Resource>>`Information about the resources.
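The `next_token` field drives the same pagination pattern as the other `List*` calls: keep requesting pages until the token comes back `None`. A self-contained sketch of that loop, where `fetch_page` is a stand-in for the real RAM client call (here it just simulates a two-page result set):
```rust
// Stand-in for a ListResources call: returns one page of resource ARNs
// plus the next_token, or None when there are no more results.
fn fetch_page(token: Option<&str>) -> (Vec<String>, Option<String>) {
    match token {
        None => (vec!["arn:a".into()], Some("page2".into())),
        Some("page2") => (vec!["arn:b".into()], None),
        Some(_) => (vec![], None),
    }
}
fn collect_all_resources() -> Vec<String> {
    let mut all = Vec::new();
    let mut token: Option<String> = None;
    loop {
        let (mut page, next) = fetch_page(token.as_deref());
        all.append(&mut page);
        match next {
            Some(t) => token = Some(t), // more results remain
            None => break,              // next_token is None: done
        }
    }
    all
}
fn main() {
    println!("{:?}", collect_all_resources());
}
```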
Trait Implementations
---
source### impl Clone for ListResourcesResponse
source#### fn clone(&self) -> ListResourcesResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListResourcesResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListResourcesResponse
source#### fn default() -> ListResourcesResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListResourcesResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListResourcesResponse> for ListResourcesResponse
source#### fn eq(&self, other: &ListResourcesResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListResourcesResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListResourcesResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListResourcesResponse
### impl Send for ListResourcesResponse
### impl Sync for ListResourcesResponse
### impl Unpin for ListResourcesResponse
### impl UnwindSafe for ListResourcesResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::Principal
===
```
pub struct Principal {
pub creation_time: Option<f64>,
pub external: Option<bool>,
pub id: Option<String>,
pub last_updated_time: Option<f64>,
pub resource_share_arn: Option<String>,
}
```
Describes a principal for use with AWS Resource Access Manager.
Fields
---
`creation_time: Option<f64>`The time when the principal was associated with the resource share.
`external: Option<bool>`Indicates whether the principal belongs to the same AWS organization as the AWS account that owns the resource share.
`id: Option<String>`The ID of the principal.
`last_updated_time: Option<f64>`The time when the association was last updated.
`resource_share_arn: Option<String>`The Amazon Resource Name (ARN) of the resource share.
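`creation_time` and `last_updated_time` arrive as `Option<f64>` epoch seconds rather than a structured time type. A small conversion helper (`to_system_time` is not part of the crate; it's a sketch of one reasonable way to consume these fields):
```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Convert an optional epoch-seconds timestamp, as returned in
// Principal::creation_time / last_updated_time, into a SystemTime.
fn to_system_time(epoch_secs: Option<f64>) -> Option<SystemTime> {
    epoch_secs.map(|s| UNIX_EPOCH + Duration::from_secs_f64(s))
}

fn main() {
    // e.g. a fractional timestamp 1_600_000_000.5 seconds after the epoch
    println!("{:?}", to_system_time(Some(1_600_000_000.5)));
}
```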
Trait Implementations
---
source### impl Clone for Principal
source#### fn clone(&self) -> Principal
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Principal
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for Principal
source#### fn default() -> Principal
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for Principal
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<Principal> for Principal
source#### fn eq(&self, other: &Principal) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &Principal) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for Principal
Auto Trait Implementations
---
### impl RefUnwindSafe for Principal
### impl Send for Principal
### impl Sync for Principal
### impl Unpin for Principal
### impl UnwindSafe for Principal
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::PromoteResourceShareCreatedFromPolicyRequest
===
```
pub struct PromoteResourceShareCreatedFromPolicyRequest {
pub resource_share_arn: String,
}
```
Fields
---
`resource_share_arn: String`The ARN of the resource share to promote.
Trait Implementations
---
source### impl Clone for PromoteResourceShareCreatedFromPolicyRequest
source#### fn clone(&self) -> PromoteResourceShareCreatedFromPolicyRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PromoteResourceShareCreatedFromPolicyRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PromoteResourceShareCreatedFromPolicyRequest
source#### fn default() -> PromoteResourceShareCreatedFromPolicyRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<PromoteResourceShareCreatedFromPolicyRequest> for PromoteResourceShareCreatedFromPolicyRequest
source#### fn eq(&self, other: &PromoteResourceShareCreatedFromPolicyRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PromoteResourceShareCreatedFromPolicyRequest) -> bool
This method tests for `!=`.
source### impl Serialize for PromoteResourceShareCreatedFromPolicyRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for PromoteResourceShareCreatedFromPolicyRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for PromoteResourceShareCreatedFromPolicyRequest
### impl Send for PromoteResourceShareCreatedFromPolicyRequest
### impl Sync for PromoteResourceShareCreatedFromPolicyRequest
### impl Unpin for PromoteResourceShareCreatedFromPolicyRequest
### impl UnwindSafe for PromoteResourceShareCreatedFromPolicyRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::PromoteResourceShareCreatedFromPolicyResponse
===
```
pub struct PromoteResourceShareCreatedFromPolicyResponse {
pub return_value: Option<bool>,
}
```
Fields
---
`return_value: Option<bool>`Indicates whether the request succeeded.
Trait Implementations
---
source### impl Clone for PromoteResourceShareCreatedFromPolicyResponse
source#### fn clone(&self) -> PromoteResourceShareCreatedFromPolicyResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PromoteResourceShareCreatedFromPolicyResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PromoteResourceShareCreatedFromPolicyResponse
source#### fn default() -> PromoteResourceShareCreatedFromPolicyResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for PromoteResourceShareCreatedFromPolicyResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<PromoteResourceShareCreatedFromPolicyResponse> for PromoteResourceShareCreatedFromPolicyResponse
source#### fn eq(&self, other: &PromoteResourceShareCreatedFromPolicyResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PromoteResourceShareCreatedFromPolicyResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for PromoteResourceShareCreatedFromPolicyResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for PromoteResourceShareCreatedFromPolicyResponse
### impl Send for PromoteResourceShareCreatedFromPolicyResponse
### impl Sync for PromoteResourceShareCreatedFromPolicyResponse
### impl Unpin for PromoteResourceShareCreatedFromPolicyResponse
### impl UnwindSafe for PromoteResourceShareCreatedFromPolicyResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::RejectResourceShareInvitationRequest
===
```
pub struct RejectResourceShareInvitationRequest {
pub client_token: Option<String>,
pub resource_share_invitation_arn: String,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`resource_share_invitation_arn: String`The Amazon Resource Name (ARN) of the invitation.
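The `client_token` exists so the service can deduplicate retries: the same token must be reused for every retry of one logical request. A UUID is the usual choice; the sketch below (the `make_client_token` name is mine, not crate API) derives a token from the clock instead to avoid a `uuid` dependency — generate it once per logical request and hold on to it across retries:
```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Produce a unique, case-sensitive idempotency token. Derived from the
// clock for illustration only; a real client would use a UUID.
fn make_client_token() -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before Unix epoch")
        .as_nanos();
    format!("reject-invitation-{nanos}")
}

fn main() {
    let token = make_client_token();
    // Reuse `token` unchanged on every retry of this one request.
    println!("{token}");
}
```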
Trait Implementations
---
source### impl Clone for RejectResourceShareInvitationRequest
source#### fn clone(&self) -> RejectResourceShareInvitationRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RejectResourceShareInvitationRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RejectResourceShareInvitationRequest
source#### fn default() -> RejectResourceShareInvitationRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<RejectResourceShareInvitationRequest> for RejectResourceShareInvitationRequest
source#### fn eq(&self, other: &RejectResourceShareInvitationRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RejectResourceShareInvitationRequest) -> bool
This method tests for `!=`.
source### impl Serialize for RejectResourceShareInvitationRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for RejectResourceShareInvitationRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for RejectResourceShareInvitationRequest
### impl Send for RejectResourceShareInvitationRequest
### impl Sync for RejectResourceShareInvitationRequest
### impl Unpin for RejectResourceShareInvitationRequest
### impl UnwindSafe for RejectResourceShareInvitationRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::RejectResourceShareInvitationResponse
===
```
pub struct RejectResourceShareInvitationResponse {
pub client_token: Option<String>,
pub resource_share_invitation: Option<ResourceShareInvitation>,
}
```
Fields
---
`client_token: Option<String>`A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`resource_share_invitation: Option<ResourceShareInvitation>`Information about the invitation.
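Because `client_token` exists to make retried requests idempotent, a caller can use it to recognize when two responses belong to the same logical request. The sketch below shows that idea with a stdlib-only helper; `TokenTracker` and `is_duplicate` are illustrative names, not part of rusoto_ram.

```rust
use std::collections::HashSet;

/// Records client tokens already observed, so a retried call whose
/// response carries the same `client_token` is recognized as a duplicate.
struct TokenTracker {
    seen: HashSet<String>,
}

impl TokenTracker {
    fn new() -> Self {
        TokenTracker { seen: HashSet::new() }
    }

    /// Returns true if this token was already recorded.
    fn is_duplicate(&mut self, client_token: Option<&str>) -> bool {
        match client_token {
            // `insert` returns false when the value was already present.
            Some(t) => !self.seen.insert(t.to_string()),
            None => false, // no token: cannot deduplicate
        }
    }
}

fn main() {
    let mut tracker = TokenTracker::new();
    assert!(!tracker.is_duplicate(Some("abc-123")));
    assert!(tracker.is_duplicate(Some("abc-123"))); // retry detected
    assert!(!tracker.is_duplicate(None));
}
```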
Trait Implementations
---
source### impl Clone for RejectResourceShareInvitationResponse
source#### fn clone(&self) -> RejectResourceShareInvitationResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RejectResourceShareInvitationResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RejectResourceShareInvitationResponse
source#### fn default() -> RejectResourceShareInvitationResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for RejectResourceShareInvitationResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<RejectResourceShareInvitationResponse> for RejectResourceShareInvitationResponse
source#### fn eq(&self, other: &RejectResourceShareInvitationResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RejectResourceShareInvitationResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RejectResourceShareInvitationResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for RejectResourceShareInvitationResponse
### impl Send for RejectResourceShareInvitationResponse
### impl Sync for RejectResourceShareInvitationResponse
### impl Unpin for RejectResourceShareInvitationResponse
### impl UnwindSafe for RejectResourceShareInvitationResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::Resource
===
```
pub struct Resource {
pub arn: Option<String>,
pub creation_time: Option<f64>,
pub last_updated_time: Option<f64>,
pub resource_group_arn: Option<String>,
pub resource_share_arn: Option<String>,
pub status: Option<String>,
pub status_message: Option<String>,
pub type_: Option<String>,
}
```
Describes a resource associated with a resource share.
Fields
---
`arn: Option<String>`The Amazon Resource Name (ARN) of the resource.
`creation_time: Option<f64>`The time when the resource was associated with the resource share.
`last_updated_time: Option<f64>`The time when the association was last updated.
`resource_group_arn: Option<String>`The ARN of the resource group. This value is returned only if the resource is a resource group.
`resource_share_arn: Option<String>`The Amazon Resource Name (ARN) of the resource share.
`status: Option<String>`The status of the resource.
`status_message: Option<String>`A message about the status of the resource.
`type_: Option<String>`The resource type.
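The timestamp fields on `Resource` (`creation_time`, `last_updated_time`) are `Option<f64>` values holding seconds since the Unix epoch. A minimal stdlib-only sketch of converting one into a `SystemTime` (the helper name `to_system_time` is illustrative, not part of rusoto_ram):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Converts an epoch-seconds timestamp, as returned in `creation_time`
/// or `last_updated_time`, into a `SystemTime`. Non-finite or negative
/// values are treated as absent.
fn to_system_time(epoch_secs: Option<f64>) -> Option<SystemTime> {
    epoch_secs
        .filter(|s| s.is_finite() && *s >= 0.0)
        .map(|s| UNIX_EPOCH + Duration::from_secs_f64(s))
}

fn main() {
    let t = to_system_time(Some(1_600_000_000.0)).unwrap();
    assert!(t > UNIX_EPOCH);
    assert_eq!(to_system_time(None), None);
    assert_eq!(to_system_time(Some(f64::NAN)), None);
}
```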
Trait Implementations
---
source### impl Clone for Resource
source#### fn clone(&self) -> Resource
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Resource
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for Resource
source#### fn default() -> Resource
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for Resource
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<Resource> for Resource
source#### fn eq(&self, other: &Resource) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &Resource) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for Resource
Auto Trait Implementations
---
### impl RefUnwindSafe for Resource
### impl Send for Resource
### impl Sync for Resource
### impl Unpin for Resource
### impl UnwindSafe for Resource
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ResourceShare
===
```
pub struct ResourceShare {
pub allow_external_principals: Option<bool>,
pub creation_time: Option<f64>,
pub feature_set: Option<String>,
pub last_updated_time: Option<f64>,
pub name: Option<String>,
pub owning_account_id: Option<String>,
pub resource_share_arn: Option<String>,
pub status: Option<String>,
pub status_message: Option<String>,
pub tags: Option<Vec<Tag>>,
}
```
Describes a resource share.
Fields
---
`allow_external_principals: Option<bool>`Indicates whether principals outside your AWS organization can be associated with a resource share.
`creation_time: Option<f64>`The time when the resource share was created.
`feature_set: Option<String>`Indicates how the resource share was created. Possible values include:
* `CREATED_FROM_POLICY` - Indicates that the resource share was created from an AWS Identity and Access Management (AWS IAM) policy attached to a resource. These resource shares are visible only to the AWS account that created them. They cannot be modified in AWS RAM.
* `PROMOTING_TO_STANDARD` - The resource share is in the process of being promoted. For more information, see PromoteResourceShareCreatedFromPolicy.
* `STANDARD` - Indicates that the resource share was created in AWS RAM using the console or APIs. These resource shares are visible to all principals. They can be modified in AWS RAM.
`last_updated_time: Option<f64>`The time when the resource share was last updated.
`name: Option<String>`The name of the resource share.
`owning_account_id: Option<String>`The ID of the AWS account that owns the resource share.
`resource_share_arn: Option<String>`The Amazon Resource Name (ARN) of the resource share.
`status: Option<String>`The status of the resource share.
`status_message: Option<String>`A message about the status of the resource share.
`tags: Option<Vec<Tag>>`The tags for the resource share.
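Branching on `feature_set` can be done with a plain string match. The string values (`CREATED_FROM_POLICY`, `PROMOTING_TO_STANDARD`, `STANDARD`) come from the field documentation above; the helper itself is an illustrative sketch, not part of rusoto_ram.

```rust
/// Returns whether a resource share with this `feature_set` value can
/// be modified in AWS RAM. Per the docs above, only STANDARD shares
/// can; CREATED_FROM_POLICY and PROMOTING_TO_STANDARD shares cannot.
fn is_modifiable_in_ram(feature_set: Option<&str>) -> bool {
    matches!(feature_set, Some("STANDARD"))
}

fn main() {
    assert!(is_modifiable_in_ram(Some("STANDARD")));
    assert!(!is_modifiable_in_ram(Some("CREATED_FROM_POLICY")));
    assert!(!is_modifiable_in_ram(Some("PROMOTING_TO_STANDARD")));
    assert!(!is_modifiable_in_ram(None));
}
```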
Trait Implementations
---
source### impl Clone for ResourceShare
source#### fn clone(&self) -> ResourceShare
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ResourceShare
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ResourceShare
source#### fn default() -> ResourceShare
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ResourceShare
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ResourceShare> for ResourceShare
source#### fn eq(&self, other: &ResourceShare) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ResourceShare) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ResourceShare
Auto Trait Implementations
---
### impl RefUnwindSafe for ResourceShare
### impl Send for ResourceShare
### impl Sync for ResourceShare
### impl Unpin for ResourceShare
### impl UnwindSafe for ResourceShare
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ResourceShareAssociation
===
```
pub struct ResourceShareAssociation {
pub associated_entity: Option<String>,
pub association_type: Option<String>,
pub creation_time: Option<f64>,
pub external: Option<bool>,
pub last_updated_time: Option<f64>,
pub resource_share_arn: Option<String>,
pub resource_share_name: Option<String>,
pub status: Option<String>,
pub status_message: Option<String>,
}
```
Describes an association with a resource share.
Fields
---
`associated_entity: Option<String>`The associated entity. For resource associations, this is the ARN of the resource. For principal associations, this is the ID of an AWS account or the ARN of an OU or organization from AWS Organizations.
`association_type: Option<String>`The association type.
`creation_time: Option<f64>`The time when the association was created.
`external: Option<bool>`Indicates whether the principal belongs to the same AWS organization as the AWS account that owns the resource share.
`last_updated_time: Option<f64>`The time when the association was last updated.
`resource_share_arn: Option<String>`The Amazon Resource Name (ARN) of the resource share.
`resource_share_name: Option<String>`The name of the resource share.
`status: Option<String>`The status of the association.
`status_message: Option<String>`A message about the status of the association.
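Since `associated_entity` holds either a resource ARN or a principal identifier depending on the association kind, callers often split a listing by `association_type`. A self-contained sketch under assumptions: the `"PRINCIPAL"`/`"RESOURCE"` type strings are assumed values for the two association kinds, and plain tuples stand in for the real struct.

```rust
/// Splits (association_type, associated_entity) pairs into principal
/// entities and resource entities. Entries with an unknown or missing
/// type are skipped.
fn partition_associations<'a>(
    assocs: &'a [(Option<&'a str>, &'a str)],
) -> (Vec<&'a str>, Vec<&'a str>) {
    let mut principals = Vec::new();
    let mut resources = Vec::new();
    for (ty, entity) in assocs {
        match *ty {
            Some("PRINCIPAL") => principals.push(*entity),
            Some("RESOURCE") => resources.push(*entity),
            _ => {} // unknown or missing type: skip
        }
    }
    (principals, resources)
}

fn main() {
    let data = [
        (Some("PRINCIPAL"), "123456789012"),
        (Some("RESOURCE"), "arn:aws:ec2:example:subnet/xyz"),
        (None, "ignored"),
    ];
    let (p, r) = partition_associations(&data);
    assert_eq!(p, ["123456789012"]);
    assert_eq!(r, ["arn:aws:ec2:example:subnet/xyz"]);
}
```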
Trait Implementations
---
source### impl Clone for ResourceShareAssociation
source#### fn clone(&self) -> ResourceShareAssociation
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ResourceShareAssociation
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ResourceShareAssociation
source#### fn default() -> ResourceShareAssociation
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ResourceShareAssociation
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ResourceShareAssociation> for ResourceShareAssociation
source#### fn eq(&self, other: &ResourceShareAssociation) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ResourceShareAssociation) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ResourceShareAssociation
Auto Trait Implementations
---
### impl RefUnwindSafe for ResourceShareAssociation
### impl Send for ResourceShareAssociation
### impl Sync for ResourceShareAssociation
### impl Unpin for ResourceShareAssociation
### impl UnwindSafe for ResourceShareAssociation
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ResourceShareInvitation
===
```
pub struct ResourceShareInvitation {
pub invitation_timestamp: Option<f64>,
pub receiver_account_id: Option<String>,
pub receiver_arn: Option<String>,
pub resource_share_arn: Option<String>,
pub resource_share_invitation_arn: Option<String>,
pub resource_share_name: Option<String>,
pub sender_account_id: Option<String>,
pub status: Option<String>,
}
```
Describes an invitation to join a resource share.
Fields
---
`invitation_timestamp: Option<f64>`The date and time when the invitation was sent.
`receiver_account_id: Option<String>`The ID of the AWS account that received the invitation.
`receiver_arn: Option<String>`The Amazon Resource Name (ARN) of the IAM user or IAM role that received the invitation.
`resource_share_arn: Option<String>`The Amazon Resource Name (ARN) of the resource share.
`resource_share_invitation_arn: Option<String>`The Amazon Resource Name (ARN) of the invitation.
`resource_share_name: Option<String>`The name of the resource share.
`sender_account_id: Option<String>`The ID of the AWS account that sent the invitation.
`status: Option<String>`The status of the invitation.
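A common task with invitation listings is to keep only those still awaiting a response. A minimal sketch, assuming `"PENDING"` is the relevant `status` value (an assumption about the API's status vocabulary) and using (invitation ARN, status) tuples instead of the real struct to stay dependency-free:

```rust
/// Returns the ARNs of invitations whose status is still PENDING.
fn pending_invitations<'a>(
    invitations: &'a [(&'a str, Option<&'a str>)],
) -> Vec<&'a str> {
    invitations
        .iter()
        .filter(|(_, status)| *status == Some("PENDING"))
        .map(|(arn, _)| *arn)
        .collect()
}

fn main() {
    let invs = [
        ("arn:aws:ram:example:invite/1", Some("PENDING")),
        ("arn:aws:ram:example:invite/2", Some("ACCEPTED")),
        ("arn:aws:ram:example:invite/3", None),
    ];
    assert_eq!(pending_invitations(&invs), ["arn:aws:ram:example:invite/1"]);
}
```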
Trait Implementations
---
source### impl Clone for ResourceShareInvitation
source#### fn clone(&self) -> ResourceShareInvitation
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ResourceShareInvitation
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ResourceShareInvitation
source#### fn default() -> ResourceShareInvitation
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ResourceShareInvitation
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ResourceShareInvitation> for ResourceShareInvitation
source#### fn eq(&self, other: &ResourceShareInvitation) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ResourceShareInvitation) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ResourceShareInvitation
Auto Trait Implementations
---
### impl RefUnwindSafe for ResourceShareInvitation
### impl Send for ResourceShareInvitation
### impl Sync for ResourceShareInvitation
### impl Unpin for ResourceShareInvitation
### impl UnwindSafe for ResourceShareInvitation
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ResourceSharePermissionDetail
===
```
pub struct ResourceSharePermissionDetail {
pub arn: Option<String>,
pub creation_time: Option<f64>,
pub default_version: Option<bool>,
pub is_resource_type_default: Option<bool>,
pub last_updated_time: Option<f64>,
pub name: Option<String>,
pub permission: Option<String>,
pub resource_type: Option<String>,
pub version: Option<String>,
}
```
Information about an AWS RAM permission.
Fields
---
`arn: Option<String>`The ARN of the permission.
`creation_time: Option<f64>`The date and time when the permission was created.
`default_version: Option<bool>`Specifies whether the version of the permission is set to the default version for this permission.
`is_resource_type_default: Option<bool>`Specifies whether the version of the permission is set to the default version for this resource type.
`last_updated_time: Option<f64>`The date and time when the permission was last updated.
`name: Option<String>`The name of the permission.
`permission: Option<String>`The permission's effect and actions in JSON format. The `effect` indicates whether the actions are allowed or denied. The `actions` list the API actions to which the principal is granted or denied access.
`resource_type: Option<String>`The resource type to which the permission applies.
`version: Option<String>`The identifier for the version of the permission.
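The `default_version` flag marks which version of a permission is current. The sketch below shows selecting it from a listing, using (version, default_version) tuples rather than the full struct; the helper is illustrative, not part of rusoto_ram.

```rust
/// Picks the version string marked as the default out of a list of
/// (version, default_version) pairs, mirroring the `default_version`
/// field documented above. Returns None when no entry is flagged.
fn default_version<'a>(versions: &'a [(&'a str, Option<bool>)]) -> Option<&'a str> {
    versions
        .iter()
        .find(|(_, is_default)| *is_default == Some(true))
        .map(|(v, _)| *v)
}

fn main() {
    let versions = [("1", Some(false)), ("2", Some(true)), ("3", None)];
    assert_eq!(default_version(&versions), Some("2"));
    assert_eq!(default_version(&[]), None);
}
```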
Trait Implementations
---
source### impl Clone for ResourceSharePermissionDetail
source#### fn clone(&self) -> ResourceSharePermissionDetail
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ResourceSharePermissionDetail
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ResourceSharePermissionDetail
source#### fn default() -> ResourceSharePermissionDetail
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ResourceSharePermissionDetail
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ResourceSharePermissionDetail> for ResourceSharePermissionDetail
source#### fn eq(&self, other: &ResourceSharePermissionDetail) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ResourceSharePermissionDetail) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ResourceSharePermissionDetail
Auto Trait Implementations
---
### impl RefUnwindSafe for ResourceSharePermissionDetail
### impl Send for ResourceSharePermissionDetail
### impl Sync for ResourceSharePermissionDetail
### impl Unpin for ResourceSharePermissionDetail
### impl UnwindSafe for ResourceSharePermissionDetail
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ResourceSharePermissionSummary
===
```
pub struct ResourceSharePermissionSummary {
pub arn: Option<String>,
pub creation_time: Option<f64>,
pub default_version: Option<bool>,
pub is_resource_type_default: Option<bool>,
pub last_updated_time: Option<f64>,
pub name: Option<String>,
pub resource_type: Option<String>,
pub status: Option<String>,
pub version: Option<String>,
}
```
Information about a permission that is associated with a resource share.
Fields
---
`arn: Option<String>`The ARN of the permission.
`creation_time: Option<f64>`The date and time when the permission was created.
`default_version: Option<bool>`Specifies whether the version of the permission is set to the default version for this permission.
`is_resource_type_default: Option<bool>`Specifies whether the version of the permission is set to the default version for this resource type.
`last_updated_time: Option<f64>`The date and time when the permission was last updated.
`name: Option<String>`The name of the permission.
`resource_type: Option<String>`The type of resource to which the permission applies.
`status: Option<String>`The current status of the permission.
`version: Option<String>`The identifier for the version of the permission.
Trait Implementations
---
source### impl Clone for ResourceSharePermissionSummary
source#### fn clone(&self) -> ResourceSharePermissionSummary
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ResourceSharePermissionSummary
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ResourceSharePermissionSummary
source#### fn default() -> ResourceSharePermissionSummary
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ResourceSharePermissionSummary
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ResourceSharePermissionSummary> for ResourceSharePermissionSummary
source#### fn eq(&self, other: &ResourceSharePermissionSummary) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ResourceSharePermissionSummary) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ResourceSharePermissionSummary
Auto Trait Implementations
---
### impl RefUnwindSafe for ResourceSharePermissionSummary
### impl Send for ResourceSharePermissionSummary
### impl Sync for ResourceSharePermissionSummary
### impl Unpin for ResourceSharePermissionSummary
### impl UnwindSafe for ResourceSharePermissionSummary
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::ServiceNameAndResourceType
===
```
pub struct ServiceNameAndResourceType {
pub resource_type: Option<String>,
pub service_name: Option<String>,
}
```
Information about the shareable resource types and the AWS services to which they belong.
Fields
---
`resource_type: Option<String>`The shareable resource types.
`service_name: Option<String>`The name of the AWS service to which the resources belong.
Trait Implementations
---
source### impl Clone for ServiceNameAndResourceType
source#### fn clone(&self) -> ServiceNameAndResourceType
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ServiceNameAndResourceType
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ServiceNameAndResourceType
source#### fn default() -> ServiceNameAndResourceType
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ServiceNameAndResourceType
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ServiceNameAndResourceType> for ServiceNameAndResourceType
source#### fn eq(&self, other: &ServiceNameAndResourceType) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ServiceNameAndResourceType) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ServiceNameAndResourceType
Auto Trait Implementations
---
### impl RefUnwindSafe for ServiceNameAndResourceType
### impl Send for ServiceNameAndResourceType
### impl Sync for ServiceNameAndResourceType
### impl Unpin for ServiceNameAndResourceType
### impl UnwindSafe for ServiceNameAndResourceType
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::Tag
===
```
pub struct Tag {
pub key: Option<String>,
pub value: Option<String>,
}
```
Information about a tag.
Fields
---
`key: Option<String>`The key of the tag.
`value: Option<String>`The value of the tag.
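Since both fields are optional, a `Tag` can be built field-by-field or via `Default`. A minimal sketch, using a local stand-in with the same shape as `rusoto_ram::Tag` so the snippet compiles without the crate (in real code, `use rusoto_ram::Tag;` instead):

```rust
// Local stand-in mirroring the documented shape of rusoto_ram::Tag.
#[derive(Clone, Debug, Default, PartialEq)]
struct Tag {
    key: Option<String>,
    value: Option<String>,
}

fn main() {
    // Fully specified tag.
    let env = Tag {
        key: Some("environment".to_string()),
        value: Some("production".to_string()),
    };
    // `Default` leaves both fields as `None`.
    let empty = Tag::default();

    assert_eq!(env.key.as_deref(), Some("environment"));
    assert!(empty.key.is_none() && empty.value.is_none());
}
```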
Trait Implementations
---
source### impl Clone for Tag
source#### fn clone(&self) -> Tag
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Tag
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for Tag
source#### fn default() -> Tag
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for Tag
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<Tag> for Tag
source#### fn eq(&self, other: &Tag) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &Tag) -> bool
This method tests for `!=`.
source### impl Serialize for Tag
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for Tag
Auto Trait Implementations
---
### impl RefUnwindSafe for Tag
### impl Send for Tag
### impl Sync for Tag
### impl Unpin for Tag
### impl UnwindSafe for Tag
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::TagFilter
===
```
pub struct TagFilter {
pub tag_key: Option<String>,
pub tag_values: Option<Vec<String>>,
}
```
Used to filter information based on tags.
Fields
---
`tag_key: Option<String>`The tag key.
`tag_values: Option<Vec<String>>`The tag values.
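A filter pairs one key with a list of acceptable values. A minimal sketch, again using a local stand-in with the same shape as `rusoto_ram::TagFilter` so it compiles without the crate (the `"team"` key and its values are made-up examples):

```rust
// Local stand-in mirroring the documented shape of rusoto_ram::TagFilter.
#[derive(Clone, Debug, Default, PartialEq)]
struct TagFilter {
    tag_key: Option<String>,
    tag_values: Option<Vec<String>>,
}

fn main() {
    // Match resources whose "team" tag is either "data" or "ml".
    let filter = TagFilter {
        tag_key: Some("team".to_string()),
        tag_values: Some(vec!["data".to_string(), "ml".to_string()]),
    };
    assert_eq!(filter.tag_key.as_deref(), Some("team"));
    assert_eq!(filter.tag_values.as_ref().map(Vec::len), Some(2));
}
```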
Trait Implementations
---
source### impl Clone for TagFilter
source#### fn clone(&self) -> TagFilter
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for TagFilter
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for TagFilter
source#### fn default() -> TagFilter
Returns the “default value” for a type. Read more
source### impl PartialEq<TagFilter> for TagFilter
source#### fn eq(&self, other: &TagFilter) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TagFilter) -> bool
This method tests for `!=`.
source### impl Serialize for TagFilter
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for TagFilter
Auto Trait Implementations
---
### impl RefUnwindSafe for TagFilter
### impl Send for TagFilter
### impl Sync for TagFilter
### impl Unpin for TagFilter
### impl UnwindSafe for TagFilter
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::TagResourceRequest
===
```
pub struct TagResourceRequest {
pub resource_share_arn: String,
pub tags: Vec<Tag>,
}
```
Fields
---
`resource_share_arn: String`The Amazon Resource Name (ARN) of the resource share.
`tags: Vec<Tag>`One or more tags.
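Unlike the tag types above, both fields here are plain `String`/`Vec` rather than `Option`, so a request must supply both. A minimal construction sketch with local stand-ins mirroring the documented shapes (the ARN below is a made-up placeholder, not a real resource share):

```rust
// Local stand-ins mirroring the documented shapes of
// rusoto_ram::Tag and rusoto_ram::TagResourceRequest.
#[derive(Clone, Debug, Default, PartialEq)]
struct Tag {
    key: Option<String>,
    value: Option<String>,
}

#[derive(Clone, Debug, Default, PartialEq)]
struct TagResourceRequest {
    resource_share_arn: String,
    tags: Vec<Tag>,
}

fn main() {
    let req = TagResourceRequest {
        // Placeholder ARN for illustration only.
        resource_share_arn:
            "arn:aws:ram:us-east-1:123456789012:resource-share/EXAMPLE".to_string(),
        tags: vec![Tag {
            key: Some("environment".to_string()),
            value: Some("production".to_string()),
        }],
    };
    assert_eq!(req.tags.len(), 1);
    assert!(req.resource_share_arn.starts_with("arn:aws:ram:"));
}
```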
Trait Implementations
---
source### impl Clone for TagResourceRequest
source#### fn clone(&self) -> TagResourceRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for TagResourceRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for TagResourceRequest
source#### fn default() -> TagResourceRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<TagResourceRequest> for TagResourceRequest
source#### fn eq(&self, other: &TagResourceRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TagResourceRequest) -> bool
This method tests for `!=`.
source### impl Serialize for TagResourceRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for TagResourceRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for TagResourceRequest
### impl Send for TagResourceRequest
### impl Sync for TagResourceRequest
### impl Unpin for TagResourceRequest
### impl UnwindSafe for TagResourceRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::TagResourceResponse
===
```
pub struct TagResourceResponse {}
```
Trait Implementations
---
source### impl Clone for TagResourceResponse
source#### fn clone(&self) -> TagResourceResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for TagResourceResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for TagResourceResponse
source#### fn default() -> TagResourceResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for TagResourceResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<TagResourceResponse> for TagResourceResponse
source#### fn eq(&self, other: &TagResourceResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for TagResourceResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for TagResourceResponse
### impl Send for TagResourceResponse
### impl Sync for TagResourceResponse
### impl Unpin for TagResourceResponse
### impl UnwindSafe for TagResourceResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::UntagResourceRequest
===
```
pub struct UntagResourceRequest {
pub resource_share_arn: String,
pub tag_keys: Vec<String>,
}
```
Fields
---
`resource_share_arn: String`The Amazon Resource Name (ARN) of the resource share.
`tag_keys: Vec<String>`The tag keys of the tags to remove.
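Untagging needs only the keys, not the tag values. A minimal construction sketch with a local stand-in mirroring the documented shape (the ARN and the key names are made-up placeholders):

```rust
// Local stand-in mirroring the documented shape of
// rusoto_ram::UntagResourceRequest.
#[derive(Clone, Debug, Default, PartialEq)]
struct UntagResourceRequest {
    resource_share_arn: String,
    tag_keys: Vec<String>,
}

fn main() {
    // Remove the "environment" and "team" tags from a resource share.
    let req = UntagResourceRequest {
        // Placeholder ARN for illustration only.
        resource_share_arn:
            "arn:aws:ram:us-east-1:123456789012:resource-share/EXAMPLE".to_string(),
        tag_keys: vec!["environment".to_string(), "team".to_string()],
    };
    assert_eq!(req.tag_keys.len(), 2);
}
```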
Trait Implementations
---
source### impl Clone for UntagResourceRequest
source#### fn clone(&self) -> UntagResourceRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UntagResourceRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UntagResourceRequest
source#### fn default() -> UntagResourceRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<UntagResourceRequest> for UntagResourceRequest
source#### fn eq(&self, other: &UntagResourceRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UntagResourceRequest) -> bool
This method tests for `!=`.
source### impl Serialize for UntagResourceRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for UntagResourceRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for UntagResourceRequest
### impl Send for UntagResourceRequest
### impl Sync for UntagResourceRequest
### impl Unpin for UntagResourceRequest
### impl UnwindSafe for UntagResourceRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::UntagResourceResponse
===
```
pub struct UntagResourceResponse {}
```
Trait Implementations
---
source### impl Clone for UntagResourceResponse
source#### fn clone(&self) -> UntagResourceResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UntagResourceResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UntagResourceResponse
source#### fn default() -> UntagResourceResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for UntagResourceResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<UntagResourceResponse> for UntagResourceResponse
source#### fn eq(&self, other: &UntagResourceResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UntagResourceResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for UntagResourceResponse
### impl Send for UntagResourceResponse
### impl Sync for UntagResourceResponse
### impl Unpin for UntagResourceResponse
### impl UnwindSafe for UntagResourceResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_ram::UpdateResourceShareRequest
===
```
pub struct UpdateResourceShareRequest {
pub allow_external_principals: Option<bool>,
pub client_token: Option<String>,
pub name: Option<String>,
pub resource_share_arn: String,
}
```
Fields
---
`allow_external_principals: Option<bool>`
Indicates whether principals outside your AWS organization can be associated with a resource share.
`client_token: Option<String>`
A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`name: Option<String>`
The name of the resource share.
`resource_share_arn: String`
The Amazon Resource Name (ARN) of the resource share.
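Since the struct derives `Default` and only `resource_share_arn` is required, a request is typically built with struct-update syntax so the unused optional fields stay `None`. A minimal sketch using a locally defined mirror of the struct (so it compiles without the `rusoto_ram` crate; the ARN value is a placeholder):

```rust
// Local stand-in mirroring rusoto_ram::UpdateResourceShareRequest,
// so this sketch is self-contained.
#[derive(Default, Debug)]
struct UpdateResourceShareRequest {
    allow_external_principals: Option<bool>,
    client_token: Option<String>,
    name: Option<String>,
    resource_share_arn: String,
}

fn main() {
    // Rename the share and restrict it to the organization; every
    // field not listed keeps its `Default` value (`None`).
    let req = UpdateResourceShareRequest {
        resource_share_arn: "arn:aws:ram:us-east-1:123456789012:resource-share/example".to_string(),
        name: Some("renamed-share".to_string()),
        allow_external_principals: Some(false),
        ..Default::default()
    };
    // client_token was not set, so it remains None.
    assert!(req.client_token.is_none());
}
```

The same `..Default::default()` pattern applies to the other request structs in this crate, all of which derive `Default`.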
Trait Implementations
---
source### impl Clone for UpdateResourceShareRequest
source#### fn clone(&self) -> UpdateResourceShareRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UpdateResourceShareRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UpdateResourceShareRequest
source#### fn default() -> UpdateResourceShareRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<UpdateResourceShareRequest> for UpdateResourceShareRequest
source#### fn eq(&self, other: &UpdateResourceShareRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateResourceShareRequest) -> bool
This method tests for `!=`.
source### impl Serialize for UpdateResourceShareRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for UpdateResourceShareRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateResourceShareRequest
### impl Send for UpdateResourceShareRequest
### impl Sync for UpdateResourceShareRequest
### impl Unpin for UpdateResourceShareRequest
### impl UnwindSafe for UpdateResourceShareRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_ram::UpdateResourceShareResponse
===
```
pub struct UpdateResourceShareResponse {
pub client_token: Option<String>,
pub resource_share: Option<ResourceShare>,
}
```
Fields
---
`client_token: Option<String>`
A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
`resource_share: Option<ResourceShare>`
Information about the resource share.
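Because both response fields are `Option`s, callers should pattern-match rather than unwrap blindly. A small sketch with local stand-ins for the response shapes (the real types live in `rusoto_ram`; only a `name` field of `ResourceShare` is mirrored here):

```rust
// Local stand-ins mirroring the rusoto_ram response shapes,
// so this sketch is self-contained.
#[derive(Default, Debug)]
struct ResourceShare {
    name: Option<String>,
}

#[derive(Default, Debug)]
struct UpdateResourceShareResponse {
    client_token: Option<String>,
    resource_share: Option<ResourceShare>,
}

fn main() {
    let resp = UpdateResourceShareResponse {
        client_token: Some("token-123".into()),
        resource_share: Some(ResourceShare { name: Some("renamed-share".into()) }),
    };
    // Chain through the nested Options; fall back to a placeholder
    // if either level is absent.
    let name = resp
        .resource_share
        .as_ref()
        .and_then(|rs| rs.name.as_deref())
        .unwrap_or("<unnamed>");
    assert_eq!(name, "renamed-share");
}
```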
Trait Implementations
---
source### impl Clone for UpdateResourceShareResponse
source#### fn clone(&self) -> UpdateResourceShareResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UpdateResourceShareResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UpdateResourceShareResponse
source#### fn default() -> UpdateResourceShareResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for UpdateResourceShareResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<UpdateResourceShareResponse> for UpdateResourceShareResponse
source#### fn eq(&self, other: &UpdateResourceShareResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateResourceShareResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateResourceShareResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateResourceShareResponse
### impl Send for UpdateResourceShareResponse
### impl Sync for UpdateResourceShareResponse
### impl Unpin for UpdateResourceShareResponse
### impl UnwindSafe for UpdateResourceShareResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Enum rusoto_ram::AcceptResourceShareInvitationError
===
```
pub enum AcceptResourceShareInvitationError {
IdempotentParameterMismatch(String),
InvalidClientToken(String),
MalformedArn(String),
OperationNotPermitted(String),
ResourceShareInvitationAlreadyAccepted(String),
ResourceShareInvitationAlreadyRejected(String),
ResourceShareInvitationArnNotFound(String),
ResourceShareInvitationExpired(String),
ServerInternal(String),
ServiceUnavailable(String),
}
```
Errors returned by AcceptResourceShareInvitation
Variants
---
### `IdempotentParameterMismatch(String)`
A client token input parameter was reused with an operation, but at least one of the other input parameters is different from the previous call to the operation.
### `InvalidClientToken(String)`
A client token is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ResourceShareInvitationAlreadyAccepted(String)`
The invitation was already accepted.
### `ResourceShareInvitationAlreadyRejected(String)`
The invitation was already rejected.
### `ResourceShareInvitationArnNotFound(String)`
The Amazon Resource Name (ARN) for an invitation was not found.
### `ResourceShareInvitationExpired(String)`
The invitation is expired.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
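When handling this error, it is common to treat transient variants (such as `ServiceUnavailable`) as retryable and the rest as terminal. A sketch using a local stand-in enum with a subset of the variants (so it compiles without `rusoto_ram`; the retry classification shown is an assumption, not prescribed by the crate):

```rust
// Local stand-in with a subset of the variants of
// rusoto_ram::AcceptResourceShareInvitationError.
#[allow(dead_code)]
enum AcceptResourceShareInvitationError {
    ResourceShareInvitationAlreadyAccepted(String),
    ResourceShareInvitationExpired(String),
    ServerInternal(String),
    ServiceUnavailable(String),
}

// Classify an error: ServerInternal and ServiceUnavailable are
// transient and worth retrying; the invitation-state variants are not.
fn is_retryable(err: &AcceptResourceShareInvitationError) -> bool {
    matches!(
        err,
        AcceptResourceShareInvitationError::ServerInternal(_)
            | AcceptResourceShareInvitationError::ServiceUnavailable(_)
    )
}

fn main() {
    let transient = AcceptResourceShareInvitationError::ServiceUnavailable("throttled".into());
    assert!(is_retryable(&transient));

    let terminal = AcceptResourceShareInvitationError::ResourceShareInvitationExpired("expired".into());
    assert!(!is_retryable(&terminal));
}
```

The same pattern applies to the other per-operation error enums below, which share most of these variants.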
Implementations
---
source### impl AcceptResourceShareInvitationError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<AcceptResourceShareInvitationError>
Trait Implementations
---
source### impl Debug for AcceptResourceShareInvitationError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for AcceptResourceShareInvitationError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for AcceptResourceShareInvitationError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<AcceptResourceShareInvitationError> for AcceptResourceShareInvitationError
source#### fn eq(&self, other: &AcceptResourceShareInvitationError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AcceptResourceShareInvitationError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AcceptResourceShareInvitationError
Auto Trait Implementations
---
### impl RefUnwindSafe for AcceptResourceShareInvitationError
### impl Send for AcceptResourceShareInvitationError
### impl Sync for AcceptResourceShareInvitationError
### impl Unpin for AcceptResourceShareInvitationError
### impl UnwindSafe for AcceptResourceShareInvitationError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_ram::AssociateResourceShareError
===
```
pub enum AssociateResourceShareError {
IdempotentParameterMismatch(String),
InvalidClientToken(String),
InvalidParameter(String),
InvalidStateTransition(String),
MalformedArn(String),
OperationNotPermitted(String),
ResourceShareLimitExceeded(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by AssociateResourceShare
Variants
---
### `IdempotentParameterMismatch(String)`
A client token input parameter was reused with an operation, but at least one of the other input parameters is different from the previous call to the operation.
### `InvalidClientToken(String)`
A client token is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `InvalidStateTransition(String)`
The requested state transition is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ResourceShareLimitExceeded(String)`
The requested resource share exceeds the limit for your account.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
source### impl AssociateResourceShareError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<AssociateResourceShareError>
Trait Implementations
---
source### impl Debug for AssociateResourceShareError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for AssociateResourceShareError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for AssociateResourceShareError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<AssociateResourceShareError> for AssociateResourceShareError
source#### fn eq(&self, other: &AssociateResourceShareError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AssociateResourceShareError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AssociateResourceShareError
Auto Trait Implementations
---
### impl RefUnwindSafe for AssociateResourceShareError
### impl Send for AssociateResourceShareError
### impl Sync for AssociateResourceShareError
### impl Unpin for AssociateResourceShareError
### impl UnwindSafe for AssociateResourceShareError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_ram::AssociateResourceSharePermissionError
===
```
pub enum AssociateResourceSharePermissionError {
InvalidClientToken(String),
InvalidParameter(String),
MalformedArn(String),
OperationNotPermitted(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by AssociateResourceSharePermission
Variants
---
### `InvalidClientToken(String)`
A client token is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
source### impl AssociateResourceSharePermissionError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<AssociateResourceSharePermissionError>
Trait Implementations
---
source### impl Debug for AssociateResourceSharePermissionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for AssociateResourceSharePermissionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for AssociateResourceSharePermissionError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<AssociateResourceSharePermissionError> for AssociateResourceSharePermissionError
source#### fn eq(&self, other: &AssociateResourceSharePermissionError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AssociateResourceSharePermissionError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AssociateResourceSharePermissionError
Auto Trait Implementations
---
### impl RefUnwindSafe for AssociateResourceSharePermissionError
### impl Send for AssociateResourceSharePermissionError
### impl Sync for AssociateResourceSharePermissionError
### impl Unpin for AssociateResourceSharePermissionError
### impl UnwindSafe for AssociateResourceSharePermissionError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_ram::CreateResourceShareError
===
```
pub enum CreateResourceShareError {
IdempotentParameterMismatch(String),
InvalidClientToken(String),
InvalidParameter(String),
InvalidStateTransition(String),
MalformedArn(String),
OperationNotPermitted(String),
ResourceShareLimitExceeded(String),
ServerInternal(String),
ServiceUnavailable(String),
TagPolicyViolation(String),
UnknownResource(String),
}
```
Errors returned by CreateResourceShare
Variants
---
### `IdempotentParameterMismatch(String)`
A client token input parameter was reused with an operation, but at least one of the other input parameters is different from the previous call to the operation.
### `InvalidClientToken(String)`
A client token is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `InvalidStateTransition(String)`
The requested state transition is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ResourceShareLimitExceeded(String)`
The requested resource share exceeds the limit for your account.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `TagPolicyViolation(String)`
The specified tag is a reserved word and cannot be used.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
source### impl CreateResourceShareError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateResourceShareError>
Trait Implementations
---
source### impl Debug for CreateResourceShareError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateResourceShareError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateResourceShareError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateResourceShareError> for CreateResourceShareError
source#### fn eq(&self, other: &CreateResourceShareError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateResourceShareError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateResourceShareError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateResourceShareError
### impl Send for CreateResourceShareError
### impl Sync for CreateResourceShareError
### impl Unpin for CreateResourceShareError
### impl UnwindSafe for CreateResourceShareError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
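Because these error enums implement `Display` and `std::error::Error`, they can be boxed and propagated with `?` like any other error. The following is a minimal, self-contained sketch using a local stand-in with one variant — the `Display` impl and the `run` function are illustrative, not the crate's actual code:

```rust
use std::error::Error;
use std::fmt;

// Local stand-in mirroring the Debug + Display + Error impls documented above.
#[derive(Debug)]
enum CreateResourceShareError {
    InvalidParameter(String),
}

impl fmt::Display for CreateResourceShareError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            CreateResourceShareError::InvalidParameter(msg) => {
                write!(f, "InvalidParameter: {}", msg)
            }
        }
    }
}

impl Error for CreateResourceShareError {}

// The Error impl lets callers erase the concrete type behind Box<dyn Error>.
fn run() -> Result<(), Box<dyn Error>> {
    Err(Box::new(CreateResourceShareError::InvalidParameter(
        "name is required".into(),
    )))
}
```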
Enum rusoto_ram::DeleteResourceShareError
===
```
pub enum DeleteResourceShareError {
IdempotentParameterMismatch(String),
InvalidClientToken(String),
InvalidParameter(String),
InvalidStateTransition(String),
MalformedArn(String),
OperationNotPermitted(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by DeleteResourceShare
Variants
---
### `IdempotentParameterMismatch(String)`
A client token input parameter was reused with an operation, but at least one of the other input parameters is different from the previous call to the operation.
### `InvalidClientToken(String)`
A client token is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `InvalidStateTransition(String)`
The requested state transition is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
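Callers often want to separate transient service-side failures from client-side validation errors. A self-contained sketch with a local stand-in enum mirroring the variants above — the `is_retryable` helper is illustrative, not part of `rusoto_ram`:

```rust
// Stand-in for rusoto_ram::DeleteResourceShareError (same variant names).
#[derive(Debug)]
enum DeleteResourceShareError {
    IdempotentParameterMismatch(String),
    InvalidClientToken(String),
    InvalidParameter(String),
    InvalidStateTransition(String),
    MalformedArn(String),
    OperationNotPermitted(String),
    ServerInternal(String),
    ServiceUnavailable(String),
    UnknownResource(String),
}

// Transient service-side failures are worth retrying; validation errors
// (bad ARN, bad parameter, unknown resource) will fail again unchanged.
fn is_retryable(err: &DeleteResourceShareError) -> bool {
    matches!(
        err,
        DeleteResourceShareError::ServerInternal(_)
            | DeleteResourceShareError::ServiceUnavailable(_)
    )
}
```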
Implementations
---
### impl DeleteResourceShareError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteResourceShareError>
Trait Implementations
---
### impl Debug for DeleteResourceShareError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for DeleteResourceShareError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for DeleteResourceShareError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DeleteResourceShareError> for DeleteResourceShareError
#### fn eq(&self, other: &DeleteResourceShareError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DeleteResourceShareError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteResourceShareError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteResourceShareError
### impl Send for DeleteResourceShareError
### impl Sync for DeleteResourceShareError
### impl Unpin for DeleteResourceShareError
### impl UnwindSafe for DeleteResourceShareError
Enum rusoto_ram::DisassociateResourceShareError
===
```
pub enum DisassociateResourceShareError {
IdempotentParameterMismatch(String),
InvalidClientToken(String),
InvalidParameter(String),
InvalidStateTransition(String),
MalformedArn(String),
OperationNotPermitted(String),
ResourceShareLimitExceeded(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by DisassociateResourceShare
Variants
---
### `IdempotentParameterMismatch(String)`
A client token input parameter was reused with an operation, but at least one of the other input parameters is different from the previous call to the operation.
### `InvalidClientToken(String)`
A client token is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `InvalidStateTransition(String)`
The requested state transition is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ResourceShareLimitExceeded(String)`
The requested resource share exceeds the limit for your account.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
### impl DisassociateResourceShareError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DisassociateResourceShareError>
Trait Implementations
---
### impl Debug for DisassociateResourceShareError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for DisassociateResourceShareError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for DisassociateResourceShareError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DisassociateResourceShareError> for DisassociateResourceShareError
#### fn eq(&self, other: &DisassociateResourceShareError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DisassociateResourceShareError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DisassociateResourceShareError
Auto Trait Implementations
---
### impl RefUnwindSafe for DisassociateResourceShareError
### impl Send for DisassociateResourceShareError
### impl Sync for DisassociateResourceShareError
### impl Unpin for DisassociateResourceShareError
### impl UnwindSafe for DisassociateResourceShareError
Enum rusoto_ram::DisassociateResourceSharePermissionError
===
```
pub enum DisassociateResourceSharePermissionError {
InvalidClientToken(String),
InvalidParameter(String),
InvalidStateTransition(String),
MalformedArn(String),
OperationNotPermitted(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by DisassociateResourceSharePermission
Variants
---
### `InvalidClientToken(String)`
A client token is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `InvalidStateTransition(String)`
The requested state transition is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
### impl DisassociateResourceSharePermissionError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DisassociateResourceSharePermissionError>
Trait Implementations
---
### impl Debug for DisassociateResourceSharePermissionError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for DisassociateResourceSharePermissionError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for DisassociateResourceSharePermissionError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DisassociateResourceSharePermissionError> for DisassociateResourceSharePermissionError
#### fn eq(&self, other: &DisassociateResourceSharePermissionError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DisassociateResourceSharePermissionError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DisassociateResourceSharePermissionError
Auto Trait Implementations
---
### impl RefUnwindSafe for DisassociateResourceSharePermissionError
### impl Send for DisassociateResourceSharePermissionError
### impl Sync for DisassociateResourceSharePermissionError
### impl Unpin for DisassociateResourceSharePermissionError
### impl UnwindSafe for DisassociateResourceSharePermissionError
Enum rusoto_ram::EnableSharingWithAwsOrganizationError
===
```
pub enum EnableSharingWithAwsOrganizationError {
OperationNotPermitted(String),
ServerInternal(String),
ServiceUnavailable(String),
}
```
Errors returned by EnableSharingWithAwsOrganization
Variants
---
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
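`ServiceUnavailable` is the one variant here that usually deserves a retry with backoff. A self-contained sketch with a local stand-in enum mirroring the variants above — `retry_on_unavailable` and its backoff schedule are illustrative, not part of `rusoto_ram`:

```rust
use std::{thread, time::Duration};

// Stand-in for rusoto_ram::EnableSharingWithAwsOrganizationError.
#[derive(Debug)]
enum EnableSharingWithAwsOrganizationError {
    OperationNotPermitted(String),
    ServerInternal(String),
    ServiceUnavailable(String),
}

// Retries the call on ServiceUnavailable with exponential backoff;
// any other outcome (Ok or a different error) is returned immediately.
fn retry_on_unavailable<F>(
    mut call: F,
    max_attempts: u32,
) -> Result<(), EnableSharingWithAwsOrganizationError>
where
    F: FnMut() -> Result<(), EnableSharingWithAwsOrganizationError>,
{
    let mut attempt = 0;
    loop {
        match call() {
            Err(EnableSharingWithAwsOrganizationError::ServiceUnavailable(_))
                if attempt + 1 < max_attempts =>
            {
                attempt += 1;
                thread::sleep(Duration::from_millis(5u64 << attempt));
            }
            other => return other,
        }
    }
}
```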
Implementations
---
### impl EnableSharingWithAwsOrganizationError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<EnableSharingWithAwsOrganizationError>
Trait Implementations
---
### impl Debug for EnableSharingWithAwsOrganizationError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for EnableSharingWithAwsOrganizationError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for EnableSharingWithAwsOrganizationError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<EnableSharingWithAwsOrganizationError> for EnableSharingWithAwsOrganizationError
#### fn eq(&self, other: &EnableSharingWithAwsOrganizationError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &EnableSharingWithAwsOrganizationError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for EnableSharingWithAwsOrganizationError
Auto Trait Implementations
---
### impl RefUnwindSafe for EnableSharingWithAwsOrganizationError
### impl Send for EnableSharingWithAwsOrganizationError
### impl Sync for EnableSharingWithAwsOrganizationError
### impl Unpin for EnableSharingWithAwsOrganizationError
### impl UnwindSafe for EnableSharingWithAwsOrganizationError
Enum rusoto_ram::GetPermissionError
===
```
pub enum GetPermissionError {
InvalidParameter(String),
MalformedArn(String),
OperationNotPermitted(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by GetPermission
Variants
---
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
### impl GetPermissionError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetPermissionError>
Trait Implementations
---
### impl Debug for GetPermissionError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for GetPermissionError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for GetPermissionError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetPermissionError> for GetPermissionError
#### fn eq(&self, other: &GetPermissionError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetPermissionError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetPermissionError
Auto Trait Implementations
---
### impl RefUnwindSafe for GetPermissionError
### impl Send for GetPermissionError
### impl Sync for GetPermissionError
### impl Unpin for GetPermissionError
### impl UnwindSafe for GetPermissionError
Enum rusoto_ram::GetResourcePoliciesError
===
```
pub enum GetResourcePoliciesError {
InvalidNextToken(String),
InvalidParameter(String),
MalformedArn(String),
ResourceArnNotFound(String),
ServerInternal(String),
ServiceUnavailable(String),
}
```
Errors returned by GetResourcePolicies
Variants
---
### `InvalidNextToken(String)`
The specified value for NextToken is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `ResourceArnNotFound(String)`
An Amazon Resource Name (ARN) was not found.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
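For a paginated call like GetResourcePolicies, `InvalidNextToken` typically means the stored pagination token has gone stale; a reasonable recovery is to drop the token and restart the listing from the first page. A self-contained sketch using a local stand-in enum with only the relevant variants — the `Pager` type and its `recover` method are illustrative, not part of `rusoto_ram`:

```rust
// Stand-in covering two of the variants documented above.
#[derive(Debug)]
enum GetResourcePoliciesError {
    InvalidNextToken(String),
    ServerInternal(String),
}

// Hypothetical pagination state for a list call.
struct Pager {
    next_token: Option<String>,
}

impl Pager {
    // Returns true if the listing should restart from the first page.
    fn recover(&mut self, err: &GetResourcePoliciesError) -> bool {
        match err {
            GetResourcePoliciesError::InvalidNextToken(_) => {
                self.next_token = None; // discard the stale token
                true
            }
            _ => false, // other errors are not recoverable this way
        }
    }
}
```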
Implementations
---
### impl GetResourcePoliciesError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetResourcePoliciesError>
Trait Implementations
---
### impl Debug for GetResourcePoliciesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for GetResourcePoliciesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for GetResourcePoliciesError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetResourcePoliciesError> for GetResourcePoliciesError
#### fn eq(&self, other: &GetResourcePoliciesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetResourcePoliciesError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetResourcePoliciesError
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourcePoliciesError
### impl Send for GetResourcePoliciesError
### impl Sync for GetResourcePoliciesError
### impl Unpin for GetResourcePoliciesError
### impl UnwindSafe for GetResourcePoliciesError
Enum rusoto_ram::GetResourceShareAssociationsError
===
```
pub enum GetResourceShareAssociationsError {
InvalidNextToken(String),
InvalidParameter(String),
MalformedArn(String),
OperationNotPermitted(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by GetResourceShareAssociations
Variants
---
### `InvalidNextToken(String)`
The specified value for NextToken is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
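The variants above map one-to-one to RAM service error codes, and a common first step when handling them is to separate transient service-side failures from caller errors that can never succeed on retry. The sketch below is illustrative only: it declares a local stand-in enum mirroring the variants listed above (the real type lives in `rusoto_ram` and is typically received wrapped in `RusotoError`), and the `is_retryable` helper is a hypothetical name, not part of the crate.

```rust
// Local stand-in mirroring the documented variants; in real code you would
// match on rusoto_ram::GetResourceShareAssociationsError itself.
#[allow(dead_code)]
#[derive(Debug)]
enum GetResourceShareAssociationsError {
    InvalidNextToken(String),
    InvalidParameter(String),
    MalformedArn(String),
    OperationNotPermitted(String),
    ServerInternal(String),
    ServiceUnavailable(String),
    UnknownResource(String),
}

/// Decide whether a failed call is worth retrying.
fn is_retryable(err: &GetResourceShareAssociationsError) -> bool {
    use GetResourceShareAssociationsError::*;
    match err {
        // Transient service-side problems: safe to retry with backoff.
        ServerInternal(_) | ServiceUnavailable(_) => true,
        // Caller-side problems: resending the same request cannot succeed.
        InvalidNextToken(_)
        | InvalidParameter(_)
        | MalformedArn(_)
        | OperationNotPermitted(_)
        | UnknownResource(_) => false,
    }
}

fn main() {
    let transient =
        GetResourceShareAssociationsError::ServiceUnavailable("try later".into());
    let permanent = GetResourceShareAssociationsError::MalformedArn("bad arn".into());
    assert!(is_retryable(&transient));
    assert!(!is_retryable(&permanent));
}
```

Because the match is exhaustive over the variants, adding a new variant to the enum forces this classification to be revisited at compile time.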
Implementations
---
source### impl GetResourceShareAssociationsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetResourceShareAssociationsError>
Trait Implementations
---
source### impl Debug for GetResourceShareAssociationsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for GetResourceShareAssociationsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for GetResourceShareAssociationsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<GetResourceShareAssociationsError> for GetResourceShareAssociationsError
source#### fn eq(&self, other: &GetResourceShareAssociationsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourceShareAssociationsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetResourceShareAssociationsError
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourceShareAssociationsError
### impl Send for GetResourceShareAssociationsError
### impl Sync for GetResourceShareAssociationsError
### impl Unpin for GetResourceShareAssociationsError
### impl UnwindSafe for GetResourceShareAssociationsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_ram::GetResourceShareInvitationsError
===
```
pub enum GetResourceShareInvitationsError {
InvalidMaxResults(String),
InvalidNextToken(String),
InvalidParameter(String),
MalformedArn(String),
ResourceShareInvitationArnNotFound(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by GetResourceShareInvitations
Variants
---
### `InvalidMaxResults(String)`
The specified value for MaxResults is not valid.
### `InvalidNextToken(String)`
The specified value for NextToken is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `ResourceShareInvitationArnNotFound(String)`
The Amazon Resource Name (ARN) for an invitation was not found.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
source### impl GetResourceShareInvitationsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetResourceShareInvitationsError>
Trait Implementations
---
source### impl Debug for GetResourceShareInvitationsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for GetResourceShareInvitationsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for GetResourceShareInvitationsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<GetResourceShareInvitationsError> for GetResourceShareInvitationsError
source#### fn eq(&self, other: &GetResourceShareInvitationsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourceShareInvitationsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetResourceShareInvitationsError
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourceShareInvitationsError
### impl Send for GetResourceShareInvitationsError
### impl Sync for GetResourceShareInvitationsError
### impl Unpin for GetResourceShareInvitationsError
### impl UnwindSafe for GetResourceShareInvitationsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_ram::GetResourceSharesError
===
```
pub enum GetResourceSharesError {
InvalidNextToken(String),
InvalidParameter(String),
MalformedArn(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by GetResourceShares
Variants
---
### `InvalidNextToken(String)`
The specified value for NextToken is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
source### impl GetResourceSharesError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetResourceSharesError>
Trait Implementations
---
source### impl Debug for GetResourceSharesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for GetResourceSharesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for GetResourceSharesError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<GetResourceSharesError> for GetResourceSharesError
source#### fn eq(&self, other: &GetResourceSharesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourceSharesError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetResourceSharesError
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourceSharesError
### impl Send for GetResourceSharesError
### impl Sync for GetResourceSharesError
### impl Unpin for GetResourceSharesError
### impl UnwindSafe for GetResourceSharesError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_ram::ListPendingInvitationResourcesError
===
```
pub enum ListPendingInvitationResourcesError {
InvalidNextToken(String),
InvalidParameter(String),
MalformedArn(String),
MissingRequiredParameter(String),
ResourceShareInvitationAlreadyRejected(String),
ResourceShareInvitationArnNotFound(String),
ResourceShareInvitationExpired(String),
ServerInternal(String),
ServiceUnavailable(String),
}
```
Errors returned by ListPendingInvitationResources
Variants
---
### `InvalidNextToken(String)`
The specified value for NextToken is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `MissingRequiredParameter(String)`
A required input parameter is missing.
### `ResourceShareInvitationAlreadyRejected(String)`
The invitation was already rejected.
### `ResourceShareInvitationArnNotFound(String)`
The Amazon Resource Name (ARN) for an invitation was not found.
### `ResourceShareInvitationExpired(String)`
The invitation is expired.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
Implementations
---
source### impl ListPendingInvitationResourcesError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListPendingInvitationResourcesError>
Trait Implementations
---
source### impl Debug for ListPendingInvitationResourcesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListPendingInvitationResourcesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListPendingInvitationResourcesError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListPendingInvitationResourcesError> for ListPendingInvitationResourcesError
source#### fn eq(&self, other: &ListPendingInvitationResourcesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListPendingInvitationResourcesError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListPendingInvitationResourcesError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListPendingInvitationResourcesError
### impl Send for ListPendingInvitationResourcesError
### impl Sync for ListPendingInvitationResourcesError
### impl Unpin for ListPendingInvitationResourcesError
### impl UnwindSafe for ListPendingInvitationResourcesError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_ram::ListPermissionsError
===
```
pub enum ListPermissionsError {
InvalidNextToken(String),
InvalidParameter(String),
OperationNotPermitted(String),
ServerInternal(String),
ServiceUnavailable(String),
}
```
Errors returned by ListPermissions
Variants
---
### `InvalidNextToken(String)`
The specified value for NextToken is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
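When the transient variants above (`ServerInternal`, `ServiceUnavailable`) do occur, callers typically wrap the operation in a bounded retry loop. The following is a self-contained sketch of that pattern under stated assumptions: the enum is a local stand-in mirroring `ListPermissionsError`'s variants, and `call_with_retries` is a hypothetical helper, not part of `rusoto_ram`.

```rust
// Local stand-in mirroring the variants documented above.
#[allow(dead_code)]
#[derive(Debug)]
enum ListPermissionsError {
    InvalidNextToken(String),
    InvalidParameter(String),
    OperationNotPermitted(String),
    ServerInternal(String),
    ServiceUnavailable(String),
}

/// Retry a fallible call up to `max_attempts` times, but only for
/// transient errors; caller errors are returned immediately.
fn call_with_retries<F>(
    mut call: F,
    max_attempts: u32,
) -> Result<Vec<String>, ListPermissionsError>
where
    F: FnMut() -> Result<Vec<String>, ListPermissionsError>,
{
    let mut attempt = 0;
    loop {
        attempt += 1;
        match call() {
            Ok(v) => return Ok(v),
            // Only transient, service-side errors justify another attempt.
            Err(ListPermissionsError::ServiceUnavailable(m))
            | Err(ListPermissionsError::ServerInternal(m))
                if attempt < max_attempts =>
            {
                eprintln!("attempt {attempt} failed ({m}); retrying");
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // Simulate a service that is unavailable twice, then succeeds.
    let mut n = 0;
    let result = call_with_retries(
        || {
            n += 1;
            if n < 3 {
                Err(ListPermissionsError::ServiceUnavailable("busy".into()))
            } else {
                Ok(vec!["some-permission-arn".to_string()])
            }
        },
        5,
    );
    assert_eq!(result.unwrap().len(), 1);
    assert_eq!(n, 3);
}
```

In production code this loop would also sleep with exponential backoff between attempts; that is omitted here to keep the sketch minimal.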
Implementations
---
source### impl ListPermissionsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListPermissionsError>
Trait Implementations
---
source### impl Debug for ListPermissionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListPermissionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListPermissionsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListPermissionsError> for ListPermissionsError
source#### fn eq(&self, other: &ListPermissionsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListPermissionsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListPermissionsError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListPermissionsError
### impl Send for ListPermissionsError
### impl Sync for ListPermissionsError
### impl Unpin for ListPermissionsError
### impl UnwindSafe for ListPermissionsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_ram::ListPrincipalsError
===
```
pub enum ListPrincipalsError {
InvalidNextToken(String),
InvalidParameter(String),
MalformedArn(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by ListPrincipals
Variants
---
### `InvalidNextToken(String)`
The specified value for NextToken is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
source### impl ListPrincipalsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListPrincipalsError>
Trait Implementations
---
source### impl Debug for ListPrincipalsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListPrincipalsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListPrincipalsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListPrincipalsError> for ListPrincipalsError
source#### fn eq(&self, other: &ListPrincipalsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListPrincipalsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListPrincipalsError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListPrincipalsError
### impl Send for ListPrincipalsError
### impl Sync for ListPrincipalsError
### impl Unpin for ListPrincipalsError
### impl UnwindSafe for ListPrincipalsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_ram::ListResourceSharePermissionsError
===
```
pub enum ListResourceSharePermissionsError {
InvalidNextToken(String),
InvalidParameter(String),
MalformedArn(String),
OperationNotPermitted(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by ListResourceSharePermissions
Variants
---
### `InvalidNextToken(String)`
The specified value for NextToken is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
source### impl ListResourceSharePermissionsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListResourceSharePermissionsError>
Trait Implementations
---
source### impl Debug for ListResourceSharePermissionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListResourceSharePermissionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListResourceSharePermissionsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListResourceSharePermissionsError> for ListResourceSharePermissionsError
source#### fn eq(&self, other: &ListResourceSharePermissionsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListResourceSharePermissionsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListResourceSharePermissionsError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListResourceSharePermissionsError
### impl Send for ListResourceSharePermissionsError
### impl Sync for ListResourceSharePermissionsError
### impl Unpin for ListResourceSharePermissionsError
### impl UnwindSafe for ListResourceSharePermissionsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_ram::ListResourceTypesError
===
```
pub enum ListResourceTypesError {
InvalidNextToken(String),
InvalidParameter(String),
ServerInternal(String),
ServiceUnavailable(String),
}
```
Errors returned by ListResourceTypes
Variants
---
### `InvalidNextToken(String)`
The specified value for NextToken is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
Implementations
---
source### impl ListResourceTypesError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListResourceTypesError>
Trait Implementations
---
source### impl Debug for ListResourceTypesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListResourceTypesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListResourceTypesError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListResourceTypesError> for ListResourceTypesError
source#### fn eq(&self, other: &ListResourceTypesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListResourceTypesError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListResourceTypesError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListResourceTypesError
### impl Send for ListResourceTypesError
### impl Sync for ListResourceTypesError
### impl Unpin for ListResourceTypesError
### impl UnwindSafe for ListResourceTypesError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_ram::ListResourcesError
===
```
pub enum ListResourcesError {
InvalidNextToken(String),
InvalidParameter(String),
InvalidResourceType(String),
MalformedArn(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by ListResources
Variants
---
### `InvalidNextToken(String)`
The specified value for NextToken is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `InvalidResourceType(String)`
The specified resource type is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
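ListResources is a paginated call, so a stored `NextToken` can go stale between invocations. A hedged sketch of one recovery policy, again using a local stand-in enum rather than the real crate type: clear the saved token and restart the listing only when the error is `InvalidNextToken`.

```rust
// Illustrative stand-in for rusoto_ram::ListResourcesError.
#[allow(dead_code)]
#[derive(Debug)]
enum ListResourcesError {
    InvalidNextToken(String),
    InvalidParameter(String),
    InvalidResourceType(String),
    MalformedArn(String),
    ServerInternal(String),
    ServiceUnavailable(String),
    UnknownResource(String),
}

/// Returns true when the caller should discard its saved pagination
/// token and restart the listing from the first page.
fn should_restart_pagination(err: &ListResourcesError) -> bool {
    matches!(err, ListResourcesError::InvalidNextToken(_))
}

fn main() {
    let stale = ListResourcesError::InvalidNextToken("token expired".to_string());
    assert!(should_restart_pagination(&stale));

    let other = ListResourcesError::UnknownResource("missing".to_string());
    assert!(!should_restart_pagination(&other));
    println!("ok");
}
```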
Implementations
---
source### impl ListResourcesError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListResourcesError>
Trait Implementations
---
source### impl Debug for ListResourcesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListResourcesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListResourcesError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListResourcesError> for ListResourcesError
source#### fn eq(&self, other: &ListResourcesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListResourcesError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListResourcesError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListResourcesError
### impl Send for ListResourcesError
### impl Sync for ListResourcesError
### impl Unpin for ListResourcesError
### impl UnwindSafe for ListResourcesError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_ram::PromoteResourceShareCreatedFromPolicyError
===
```
pub enum PromoteResourceShareCreatedFromPolicyError {
InvalidParameter(String),
MalformedArn(String),
MissingRequiredParameter(String),
OperationNotPermitted(String),
ResourceShareLimitExceeded(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by PromoteResourceShareCreatedFromPolicy
Variants
---
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `MissingRequiredParameter(String)`
A required input parameter is missing.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ResourceShareLimitExceeded(String)`
The requested resource share exceeds the limit for your account.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
source### impl PromoteResourceShareCreatedFromPolicyError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<PromoteResourceShareCreatedFromPolicyError>
Trait Implementations
---
source### impl Debug for PromoteResourceShareCreatedFromPolicyError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for PromoteResourceShareCreatedFromPolicyError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for PromoteResourceShareCreatedFromPolicyError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<PromoteResourceShareCreatedFromPolicyError> for PromoteResourceShareCreatedFromPolicyError
source#### fn eq(&self, other: &PromoteResourceShareCreatedFromPolicyError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PromoteResourceShareCreatedFromPolicyError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for PromoteResourceShareCreatedFromPolicyError
Auto Trait Implementations
---
### impl RefUnwindSafe for PromoteResourceShareCreatedFromPolicyError
### impl Send for PromoteResourceShareCreatedFromPolicyError
### impl Sync for PromoteResourceShareCreatedFromPolicyError
### impl Unpin for PromoteResourceShareCreatedFromPolicyError
### impl UnwindSafe for PromoteResourceShareCreatedFromPolicyError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_ram::RejectResourceShareInvitationError
===
```
pub enum RejectResourceShareInvitationError {
IdempotentParameterMismatch(String),
InvalidClientToken(String),
MalformedArn(String),
OperationNotPermitted(String),
ResourceShareInvitationAlreadyAccepted(String),
ResourceShareInvitationAlreadyRejected(String),
ResourceShareInvitationArnNotFound(String),
ResourceShareInvitationExpired(String),
ServerInternal(String),
ServiceUnavailable(String),
}
```
Errors returned by RejectResourceShareInvitation
Variants
---
### `IdempotentParameterMismatch(String)`
A client token input parameter was reused with an operation, but at least one of the other input parameters is different from the previous call to the operation.
### `InvalidClientToken(String)`
A client token is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ResourceShareInvitationAlreadyAccepted(String)`
The invitation was already accepted.
### `ResourceShareInvitationAlreadyRejected(String)`
The invitation was already rejected.
### `ResourceShareInvitationArnNotFound(String)`
The Amazon Resource Name (ARN) for an invitation was not found.
### `ResourceShareInvitationExpired(String)`
The invitation is expired.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
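Several of these variants describe an invitation that is already in a terminal state. A sketch of one reasonable caller-side policy, again with a local stand-in enum rather than the real crate type: treat `ResourceShareInvitationAlreadyRejected` as success, since the system is already in the state the caller asked for, and surface every other error unchanged.

```rust
// Illustrative stand-in for rusoto_ram::RejectResourceShareInvitationError.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum RejectResourceShareInvitationError {
    IdempotentParameterMismatch(String),
    InvalidClientToken(String),
    MalformedArn(String),
    OperationNotPermitted(String),
    ResourceShareInvitationAlreadyAccepted(String),
    ResourceShareInvitationAlreadyRejected(String),
    ResourceShareInvitationArnNotFound(String),
    ResourceShareInvitationExpired(String),
    ServerInternal(String),
    ServiceUnavailable(String),
}

/// Collapse "already rejected" into success; pass everything else through.
fn normalize(
    res: Result<(), RejectResourceShareInvitationError>,
) -> Result<(), RejectResourceShareInvitationError> {
    match res {
        Err(RejectResourceShareInvitationError::ResourceShareInvitationAlreadyRejected(_)) => {
            Ok(())
        }
        other => other,
    }
}

fn main() {
    let already =
        RejectResourceShareInvitationError::ResourceShareInvitationAlreadyRejected("x".to_string());
    assert_eq!(normalize(Err(already)), Ok(()));

    let expired =
        RejectResourceShareInvitationError::ResourceShareInvitationExpired("y".to_string());
    assert!(normalize(Err(expired)).is_err());
    println!("ok");
}
```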
Implementations
---
source### impl RejectResourceShareInvitationError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<RejectResourceShareInvitationError>
Trait Implementations
---
source### impl Debug for RejectResourceShareInvitationError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for RejectResourceShareInvitationError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for RejectResourceShareInvitationError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<RejectResourceShareInvitationError> for RejectResourceShareInvitationError
source#### fn eq(&self, other: &RejectResourceShareInvitationError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RejectResourceShareInvitationError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RejectResourceShareInvitationError
Auto Trait Implementations
---
### impl RefUnwindSafe for RejectResourceShareInvitationError
### impl Send for RejectResourceShareInvitationError
### impl Sync for RejectResourceShareInvitationError
### impl Unpin for RejectResourceShareInvitationError
### impl UnwindSafe for RejectResourceShareInvitationError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_ram::TagResourceError
===
```
pub enum TagResourceError {
InvalidParameter(String),
MalformedArn(String),
ResourceArnNotFound(String),
ServerInternal(String),
ServiceUnavailable(String),
TagLimitExceeded(String),
TagPolicyViolation(String),
UnknownResource(String),
}
```
Errors returned by TagResource
Variants
---
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `ResourceArnNotFound(String)`
An Amazon Resource Name (ARN) was not found.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `TagLimitExceeded(String)`
The requested tags exceed the limit for your account.
### `TagPolicyViolation(String)`
The specified tag is a reserved word and cannot be used.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
source### impl TagResourceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<TagResourceError>
Trait Implementations
---
source### impl Debug for TagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for TagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for TagResourceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<TagResourceError> for TagResourceError
source#### fn eq(&self, other: &TagResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TagResourceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for TagResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for TagResourceError
### impl Send for TagResourceError
### impl Sync for TagResourceError
### impl Unpin for TagResourceError
### impl UnwindSafe for TagResourceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_ram::UntagResourceError
===
```
pub enum UntagResourceError {
InvalidParameter(String),
ServerInternal(String),
ServiceUnavailable(String),
}
```
Errors returned by UntagResource
Variants
---
### `InvalidParameter(String)`
A parameter is not valid.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
Implementations
---
source### impl UntagResourceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UntagResourceError>
Trait Implementations
---
source### impl Debug for UntagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UntagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UntagResourceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UntagResourceError> for UntagResourceError
source#### fn eq(&self, other: &UntagResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UntagResourceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UntagResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for UntagResourceError
### impl Send for UntagResourceError
### impl Sync for UntagResourceError
### impl Unpin for UntagResourceError
### impl UnwindSafe for UntagResourceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_ram::UpdateResourceShareError
===
```
pub enum UpdateResourceShareError {
IdempotentParameterMismatch(String),
InvalidClientToken(String),
InvalidParameter(String),
MalformedArn(String),
MissingRequiredParameter(String),
OperationNotPermitted(String),
ServerInternal(String),
ServiceUnavailable(String),
UnknownResource(String),
}
```
Errors returned by UpdateResourceShare
Variants
---
### `IdempotentParameterMismatch(String)`
A client token input parameter was reused with an operation, but at least one of the other input parameters is different from the previous call to the operation.
### `InvalidClientToken(String)`
A client token is not valid.
### `InvalidParameter(String)`
A parameter is not valid.
### `MalformedArn(String)`
The format of an Amazon Resource Name (ARN) is not valid.
### `MissingRequiredParameter(String)`
A required input parameter is missing.
### `OperationNotPermitted(String)`
The requested operation is not permitted.
### `ServerInternal(String)`
The service could not respond to the request due to an internal problem.
### `ServiceUnavailable(String)`
The service is not available.
### `UnknownResource(String)`
A specified resource was not found.
Implementations
---
source### impl UpdateResourceShareError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateResourceShareError>
Trait Implementations
---
source### impl Debug for UpdateResourceShareError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateResourceShareError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateResourceShareError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateResourceShareError> for UpdateResourceShareError
source#### fn eq(&self, other: &UpdateResourceShareError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateResourceShareError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateResourceShareError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateResourceShareError
### impl Send for UpdateResourceShareError
### impl Sync for UpdateResourceShareError
### impl Unpin for UpdateResourceShareError
### impl UnwindSafe for UpdateResourceShareError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
@mehaotian/uni-ui | npm | JavaScript | ###
uni-ui Product Features
1. High performance
So far, no framework in the mini-program and hybrid-app space outperforms `uni-ui`.
* Automatic diff-based data updates
Although uni-app supports mini-program custom components, so any mini-program UI library can be used, those libraries must update data manually via setData, which easily causes performance problems with large data volumes or high-frequency updates.
`uni-ui` components, by contrast, are Vue components, so the uni-app engine diffs and updates data automatically at the framework level. To be fair, many Vue components in the plugin marketplace share this trait.
* Reduced logic-layer/view-layer communication overhead
Outside of H5, the logic layer and view layer are separated everywhere — in mini-programs and in apps, whether the app renders via webview or natively. This separation introduces communication overhead between the two layers.
For example, when dragging a touch-following component in the view layer, the communication overhead makes it hard for js listeners to track the finger in real time.
This is where CSS animations and platform-level technologies such as wxs and bindingx come in. These technologies are fairly complex, so `uni-ui` wraps them: UI components that need touch-following interaction, such as the swipe-left menu of a swiperaction list item, use them under the hood to deliver high-performance interaction.
* Pausing in the background
Many UI components animate continuously, such as carousels and marquees. Even when their window is covered by a new one, they keep consuming hardware resources in the background. On Android webviews at chrome66 and above, UI activity in the background causes severe performance problems and visibly janks the foreground.
`uni-ui` components detect their own visibility and stop consuming hardware resources once they are no longer visible.
2. All platforms
`uni-ui` components adapt to every platform, and the underlying layer smooths over many mini-program platform differences and bugs.
For example, the navbar navigation-bar component automatically handles the status bar on each platform.
For example, the swiperaction component uses the smoother wxs technology on app and the WeChat mini-program, but falls back to a js-based simulation on other mini-program platforms that do not support wxs.
`uni-ui` also supports nvue native rendering, [details](https://github.com/dcloudio/uni-ui/tree/nvue-uni-ui)
In the future `uni-ui` will also support large-screen devices such as PCs.
3. Built-in integration with uni statistics for instrumentation-free analytics
uni statistics is a cross-platform analytics service; see [tongji.dcloud.net.cn](https://tongji.dcloud.net.cn).
Besides a single report covering every platform, its other key feature is that no manual event instrumentation is required.
For example, `uni-ui` components such as the navbar title bar, favorite button and shopping cart report events automatically, collecting page titles and other behavioral data.
Of course you can also turn uni statistics off; it is not mandatory.
4. Theme customization
`uni-ui` supports [uni.scss](https://uniapp.dcloud.io/collocation/uni-scss), making it easy to switch the app's look and feel.
UI requirements are highly divergent, and DCloud has no intention of using `uni-ui` to crowd out third-party UI plugins, but it does feel obliged to provide an open-source benchmark for performance and cross-platform support.
We welcome more excellent UI components, and we also welcome contributions of `uni-ui` theme styles to serve more users' needs.
###
uni-ui Usage
####
**Method 1 (recommended)**
Since `HBuilderX 2.5.5`, the `easycom` component mode is supported. To use `uni-ui`, simply install the [`uni-ui` components](https://ext.dcloud.net.cn/plugin?id=55) in the project's `components` directory, following the `components/component-name/component-name.vue` structure. You can then use `uni-ui` directly in your pages without importing or registering anything.
The benefit of `easycom` mode is that no matter how many components are installed under `components`, `easycom` automatically strips unused ones at build time. This is especially friendly for component libraries: install them in bulk, use them freely, and only what you actually use gets bundled. For more details on `easycom`, see the [reference docs](https://uniapp.dcloud.io/collocation/pages?id=easycom)
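For reference, an `easycom` setup in `pages.json` typically looks like the sketch below. The `custom` regex mapping follows the pattern shown in the uni-app documentation; treat it as illustrative rather than required — with the default `autoscan`, components placed under `components/` need no configuration at all:

```json
{
  "easycom": {
    "autoscan": true,
    "custom": {
      "^uni-(.*)": "@dcloudio/uni-ui/lib/uni-$1/uni-$1.vue"
    }
  }
}
```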
####
**Method 2 (CLI)**
**Initialize the project**
Create a new uni-app project in HBuilderX, enter the project directory, and run:
```
npm init -y
```
**Install uni-ui**
```
npm install @dcloudio/uni-ui
```
Import the component in `script`:
```
import {uniBadge} from '@dcloudio/uni-ui'
//import uniBadge from '@dcloudio/uni-ui/lib/uni-badge/uni-badge.vue' // the component can also be imported this way
export default {
  components: {uniBadge}
}
```
Use the component in `template`:
```
<uni-badge text="1"></uni-badge>
<uni-badge text="2" type="success" @click="bindClick"></uni-badge>
<uni-badge text="3" type="primary" :inverted="true"></uni-badge>
```
####
Components supported by uni-ui
| Component | Import path | Description |
| --- | --- | --- |
| uniBadge | '@dcloudio/uni-ui/lib/uni-badge/uni-badge.vue' | [Badge](https://ext.dcloud.net.cn/plugin?id=21) |
| uniCalendar | '@dcloudio/uni-ui/lib/uni-calendar/uni-calendar.vue' | [Calendar](https://ext.dcloud.net.cn/plugin?id=56) |
| uniCard | '@dcloudio/uni-ui/lib/uni-card/uni-card.vue' | [Card](https://ext.dcloud.net.cn/plugin?id=22) |
| uniCollapse | '@dcloudio/uni-ui/lib/uni-collapse/uni-collapse.vue' | [Collapse panel](https://ext.dcloud.net.cn/plugin?id=23) |
| uniCombox | '@dcloudio/uni-ui/lib/uni-combox/uni-combox.vue' | [Combo box](https://ext.dcloud.net.cn/plugin?id=1261) |
| uniCountdown | '@dcloudio/uni-ui/lib/uni-countdown/uni-countdown.vue' | [Countdown](https://ext.dcloud.net.cn/plugin?id=25) |
| uniDrawer | '@dcloudio/uni-ui/lib/uni-drawer/uni-drawer.vue' | [Drawer](https://ext.dcloud.net.cn/plugin?id=26) |
| uniFab | '@dcloudio/uni-ui/lib/uni-fab/uni-fab.vue' | [Floating action button](https://ext.dcloud.net.cn/plugin?id=144) |
| uniFav | '@dcloudio/uni-ui/lib/uni-fav/uni-fav.vue' | [Favorite button](https://ext.dcloud.net.cn/plugin?id=864) |
| uniGoodsNav | '@dcloudio/uni-ui/lib/uni-goods-nav/uni-goods-nav.vue' | [Goods navigation](https://ext.dcloud.net.cn/plugin?id=865) |
| uniGrid | '@dcloudio/uni-ui/lib/uni-grid/uni-grid.vue' | [Grid](https://ext.dcloud.net.cn/plugin?id=27) |
| uniIcons | '@dcloudio/uni-ui/lib/uni-icons/uni-icons.vue' | [Icons](https://ext.dcloud.net.cn/plugin?id=28) |
| uniIndexedList | '@dcloudio/uni-ui/lib/uni-indexed-list/uni-indexed-list.vue' | [Indexed list](https://ext.dcloud.net.cn/plugin?id=375) |
| uniList | '@dcloudio/uni-ui/lib/uni-list/uni-list.vue' | [List](https://ext.dcloud.net.cn/plugin?id=24) |
| uniLoadMore | '@dcloudio/uni-ui/lib/uni-load-more/uni-load-more.vue' | [Load more](https://ext.dcloud.net.cn/plugin?id=29) |
| uniNavBar | '@dcloudio/uni-ui/lib/uni-nav-bar/uni-nav-bar.vue' | [Custom navigation bar](https://ext.dcloud.net.cn/plugin?id=52) |
| uniNoticeBar | '@dcloudio/uni-ui/lib/uni-notice-bar/uni-notice-bar.vue' | [Notice bar](https://ext.dcloud.net.cn/plugin?id=30) |
| uniNumberBox | '@dcloudio/uni-ui/lib/uni-number-box/uni-number-box.vue' | [Number input box](https://ext.dcloud.net.cn/plugin?id=31) |
| uniPagination | '@dcloudio/uni-ui/lib/uni-pagination/uni-pagination.vue' | [Pagination](https://ext.dcloud.net.cn/plugin?id=32) |
| uniPopup | '@dcloudio/uni-ui/lib/uni-popup/uni-popup.vue' | [Popup layer](https://ext.dcloud.net.cn/plugin?id=329) |
| uniRate | '@dcloudio/uni-ui/lib/uni-rate/uni-rate.vue' | [Rate](https://ext.dcloud.net.cn/plugin?id=33) |
| uniSearchBar | '@dcloudio/uni-ui/lib/uni-search-bar/uni-search-bar.vue' | [Search bar](https://ext.dcloud.net.cn/plugin?id=866) |
| uniSegmentedControl | '@dcloudio/uni-ui/lib/uni-segmented-control/uni-segmented-control.vue' | [Segmented control](https://ext.dcloud.net.cn/plugin?id=54) |
| uniSteps | '@dcloudio/uni-ui/lib/uni-steps/uni-steps.vue' | [Steps](https://ext.dcloud.net.cn/plugin?id=34) |
| uniSwipeAction | '@dcloudio/uni-ui/lib/uni-swipe-action/uni-swipe-action.vue' | [Swipe action](https://ext.dcloud.net.cn/plugin?id=181) |
| uniSwiperDot | '@dcloudio/uni-ui/lib/uni-swiper-dot/uni-swiper-dot.vue' | [Swiper dot indicator](https://ext.dcloud.net.cn/plugin?id=284) |
| uniTag | '@dcloudio/uni-ui/lib/uni-tag/uni-tag.vue' | [Tag](https://ext.dcloud.net.cn/plugin?id=35) |
| uniTitle | '@dcloudio/uni-ui/lib/uni-title/uni-title.vue' | [Section title](https://ext.dcloud.net.cn/plugin?id=1066) |
| uniTransition | '@dcloudio/uni-ui/lib/uni-transition/uni-transition.vue' | [Transition animation](https://ext.dcloud.net.cn/plugin?id=985) |
####
Other notes
* uni-ui is a cross-platform, DOM-free UI library based on flex layout
* uni-ui is an extension of uni-app's built-in components. Note that unlike web development, uni-ui does not include basic components; it complements them. Some web developers are used to covering all their needs with a single UI library, but in the uni-app ecosystem the recommendation is to use the higher-performance built-in components first and bring in extension components only as needed.
* uni-ui cannot be installed via Vue.use()
* uni-ui depends on scss. For uni-app projects created in HBuilderX, install the scss plugin in HBuilderX; for projects created with the cli, npm-install node-sass and sass-loader in the project
* With the `CLI` setup, the `H5` target does not support registering components globally in `main.js`; if you need that, use [easyCom](https://uniapp.dcloud.io/collocation/pages?id=easycom) to reference components
###
Contributing
If you hit a problem you cannot solve while using `uni-ui`, please file an [Issue](https://github.com/dcloudio/uni-ui/issues). If you have better ideas or better implementations, you are also welcome to submit a [PR](https://github.com/dcloudio/uni-ui/pulls)
jupyterlab-server | readthedoc | Unknown | Date: 2021-03-30
## 2.9.0¶
* Enforce labels on PRs #221 (@blink1073)
* Fix caching of conda env in CI #218 (@blink1073)
* `pyproject.toml`: clarify build system version #222 (@adamjstewart)
* Fix typo in README #216 (@Mithil467)
* Clean up Readme #215 (@blink1073)
@adamjstewart | @blink1073 | @bollwyvl | @codecov-commenter | @minrk | @Mithil467 | @welcome
## 2.8.2¶
* Fallback to context-less translation on Python 3.7 #213 (@krassowski)
* Clean unneeded catch #210 (@fcollonval)
## 2.8.1¶
Fall back to `DEFAULT_LOCALE` when translation settings schema is invalid in `get_current_locale` #207 (@telamonian)
## 2.8.0¶
Translate settings schema #205 (@fcollonval)
## 2.7.2¶
Do not overwrite capitalization of region names #202 (@krassowski)
Use Check Links Action #201 (@blink1073)
Recommend `pytest --pyargs jupyterlab_server` #203 (@krassowski)
## 2.7.1¶
Fix reset user settings if validation failed #199 (@fcollonval)
TST: support openapi-core 0.14 SpecPath #198 (@bnavigator)
## 2.7.0¶
Switch to entrypoints package #187 (@fcollonval)
## 2.6.2¶
## 2.6.1¶
## 2.6.0¶
Add changelog for 2.6.0 #185 (@blink1073)
## 2.5.0¶
### Other merged PRs¶
@afshin | @blink1073 | @jtpio | @mlucool | @welcome
## 2.4.0¶
### Merged PRs¶
* Fill in missing changelog entries #164 (@blink1073)
* Improve documentation: Instructions for development and test setups #130 (@ZelphirKaltstahl)
## 2.2.1¶
Alleviate invalid locale issue by only setting it if language pack exists #159 (@krassowski)
## 2.2.0¶
Add URL and description for federated extension data #154 (@krassowski)
## 2.1.1¶
Move open_browser to ProcessApp class #149 (@jasongrout)
## 2.0.0¶
* Update jupyter_server requirement to 1.* instead of 1.1.* #144 (@jasongrout)
* Update for new jupyter_server pytest plugin interface #141 (@afshin)
* Fix var name typo from prev commit #139 (@ajbozarth)
* Remove out of date LICENSE reference to json_minify #135 (@afshin)
* Support a metadata file in the lab extension directory #132 (@jasongrout)
* Fixed bug where disabled extensions still ran #131 (@ajbozarth)
* Add handling of incomplete dynamic extension data #124 (@blink1073)
* Clean up handling of config #123 (@blink1073)
* Allow default setting overrides to be configurable in jupyter config #122 (@blink1073)
* Cache lab extension assets by default #121 (@jasongrout)
* Added support for fullStaticUrl and fix handling of base_url #120 (@Zsailer)
* Update minimum python version to 3.6 #119 (@jasongrout)
* Use tilde for server requirement #116 (@blink1073)
* Change payload parsing to accept JSON5 as raw string in JSON payload #114 (@blink1073)
* Use blocked/allowed extensions naming in JupyterLab Server #111 (@echarles)
* Fix open_browser for process apps #110 (@blink1073)
* Add handling of dynamic labextensions #109 (@blink1073)
* UTF-8 all over, test with big unicode string #108 (@bollwyvl)
* fix file mtime/ctime confusion, restore win3.8, patch event loop #107 (@bollwyvl)
* Changes needed for improved single document mode #101 (@ellisonbg)
* Add last_modified and created to settings and workspace API items #99 (@bollwyvl)
## 1.1.5¶
* Always wait for process to finish #93 (@blink1073)
* ensure the ‘WHICH’ command returns absolute path instead of relative path #72 (@tgrout)
## v1.1.4¶
## v1.1.2¶
## v1.1.1¶
## v1.0.9¶
## v1.0.8¶
## v1.0.7¶
Fix URL prefixing for absolute URLs #81 (@santiagobasulto)
## v1.0.6¶
Add .json.orig files to sdists #78 (@toddrme2178)
## v1.0.5¶
Require jinja2 2.10+ to fix extra escaping #77 (@blink1073)
## v1.0.4¶
Use escape instead of urlencode for urls #76 (@blink1073)
## v1.0.3¶
Escape template values in Jinja #75 (@jasongrout)
## v1.0.2¶
Add store ID to page config #74 (@saulshanabrook)
## v1.0.1¶
fix page config escaping #73 (@nicorikken)
## v1.0.0¶
* Use json5 to load settings files. #71 (@ian-r-rose)
* Cleanup for 1.0 #70 (@blink1073)
* A 403.html file. Should close jupyterlab issue#6065 #69 (@zerline)
## v0.13.1¶
Cleanup #56 (@blink1073)
## v0.13.0¶
* Switch to Python 3.5+ #55 (@blink1073)
* Switch to jupyter_server #49 (@SylvainCorlay)
## v0.11.0¶
* Include LICENSE file in wheels #45 (@toddrme2178)
* move process utilities from jupyterlab to jupyterlab_launcher #44 (@ivanov)
* Update test for tornado 5 #43 (@blink1073)
## v0.9.1¶
Fix path handling on Windows #33 (@blink1073)
## v0.8.0¶
Add a theme handler #32 (@blink1073)
## v0.7.0¶
## v0.6.0¶
* Add error message option #26 (@blink1073)
* Update settings handler to support JSON with comments. #25 (@afshin)
## v0.5.5¶
Add console error on error page #24 (@blink1073)
## v0.5.4¶
Clean up static dir handling #23 (@blink1073)
## v0.5.3¶
Do not cache themes #22 (@blink1073)
## v0.5.2¶
Cleanup and handle no assets dir #20 (@blink1073)
## v0.5.1¶
Fix handling of user settings dir #21 (@blink1073)
## v0.4.2¶
Remove trailing slash to fix theme relative urls #18 (@blink1073)
## v0.4.1¶
Do not cache the static files #17 (@blink1073)
## v0.4.0¶
Theme and settings #16 (@blink1073)
## v0.3.0¶
Switch to a static namespace for webpack files #13 (@blink1073)
## v0.2.9¶
* We do not need to encode server provided urls #12 (@blink1073)
* Add simple travis file #11 (@blink1073)
## v0.2.8¶
* Fix addition of public path handler #9 (@blink1073)
* Add backwards compatibility for public url #8 (@blink1073)
## v0.2.7¶
Fix handling of page config #7 (@blink1073)
## v0.2.6¶
Fix handling of base and ws urls #6 (@blink1073)
## v0.2.1¶
Fix handling of data #4 (@blink1073)
## v0.2.0¶
Clean up config handling #3 (@blink1073)
# JupyterLab Server API¶
interim | cran | R | Package ‘interim’
October 13, 2022
Title Scheduling Interim Analyses in Clinical Trials
Version 0.8.0
Author <NAME>, <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description Allows the simulation of the recruitment and both the event and treatment phase of a clinical trial. Based on these simulations, the timing of interim analyses can be assessed.
Depends R (>= 3.3)
License GPL
Encoding UTF-8
LazyData true
RoxygenNote 6.1.1
NeedsCompilation no
Repository CRAN
Date/Publication 2019-06-24 08:30:03 UTC
R topics documented:
interim-package
capacity
convertedRate
cross
event
eventCourse
eventWeek
recruitment
treatment
trialCourse
trialWeek
interim-package Scheduling interim analyses in clinical trials
Description
It is often discussed during the planning of a clinical trial whether an interim analysis is beneficial.
The time point of the interim analysis and the end of the clinical trial are crucial for the decision.
Both depend on the recruitment of patients and on the length of the treatment phase. The package
interim allows the instantaneous simulation and plotting of both the recruitment and treatment
phase. Based on these simulations, the timing of interim analyses can be assessed or different
recruitment scenarios can be compared.
Details
There are three main functions in this package:
• recruitment
• treatment
• trialCourse
The function recruitment generally is the starting point. It simulates screening and enrollment
based on screening characteristics, like e.g. number of available centers and patients, screen failure
rate, etc.
The function treatment simulates the treatment period based on a given recruitment scenario as
simulated by recruitment.
The function trialCourse plots displays of enrollment and treatment simulations. Two plots are
provided; the first one displays center openings and the second one displays patient screening,
enrollment and treatment.
In addition to the main functions, the package comprises a number of auxiliary functions helping
to derive or convert parameters required for the three main functions, as well as to derive and plot
information on timing of interim analyses. The following auxiliary functions are available:
• capacity
• convertedRate
• trialWeek
• cross
capacity Scheduling interim analyses in clinical trials
Description
Function capacity calculates the maximum number of enrollments for a recruitment in scenario 4.
Usage
capacity(nc, ns, sf)
Arguments
nc maximum number of centers to be opened.
ns maximum number of patients to be screened within each center.
sf screening failure rate.
Details
capacity is an auxiliary function to determine the maximum number of patients that can be enrolled
in the scenario where only a limited number of centers is available and each center only has a
limited number of patients that can be enrolled.
Value
The maximum number of patients that can be enrolled.
See Also
recruitment for simulating recruitment scenarios
Examples
mE=capacity(nc=40,ns=10,sf=0.3)
recruitment(nc=40,ns=10,cw=4,sw=2,sf=0.3,tb=4,en=mE)
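The arithmetic here is simple enough to sketch outside R. The Python snippet below is an illustration, not the package's code; the floor-of-product formula is an assumption inferred from the parameter descriptions:

```python
import math

def capacity(nc: int, ns: int, sf: float) -> int:
    # Maximum enrollments when both the number of centers (nc) and the
    # number of screened patients per center (ns) are capped: at most
    # nc * ns patients can be screened, and a fraction sf of them fails
    # screening, so only the remainder can be enrolled.
    return math.floor(nc * ns * (1 - sf))

max_enrolled = capacity(40, 10, 0.3)  # mirrors the manual's example call
```

With the manual's example parameters this yields 280 enrollable patients, which can then be passed to recruitment as en.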
convertedRate Scheduling interim analyses in clinical trials
Description
Function convertedRate converts a drop-out rate from one period to another. If the drop-out rate
is defined for w1 weeks convertedRate yields the drop-out rate for w2 weeks.
Usage
convertedRate(r, w1, w2)
Arguments
r rate between 0 and 1 (0 < r < 1)
w1 number of weeks for which r is defined
w2 number of weeks to which the rate shall be converted
Details
convertedRate is an auxiliary function that converts drop-out rates for different time periods.
The function can be used in order to convert drop-out rates required for function treatment. Function treatment requires the drop-out rate for the respective treatment duration as input. Typically
known annual drop-out rates can be converted to the respective rate for the treatment duration accordingly by setting w1 to 52 and w2 to the respective treatment duration.
Value
The converted drop-out rate.
See Also
treatment for simulating the treatment duration for a given recruitment scenario
Examples
convertedRate(r=0.3,w1=52,w2=26)
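In Python terms, the conversion presumably composes survival fractions over time. This is a sketch under the assumption of a constant drop-out hazard, which is what makes rates convertible between periods:

```python
def converted_rate(r: float, w1: float, w2: float) -> float:
    # A drop-out rate r over w1 weeks corresponds to a retention of
    # (1 - r) over those w1 weeks; under a constant hazard, retention
    # over w2 weeks is (1 - r) ** (w2 / w1), so the converted drop-out
    # rate is its complement.
    return 1 - (1 - r) ** (w2 / w1)

half_year_rate = converted_rate(0.3, 52, 26)  # annual 30% drop-out over 26 weeks
```

Note the conversion is not a simple proportional scaling: an annual 30% drop-out corresponds to roughly 16.3% over 26 weeks, not 15%.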
cross Scheduling interim analyses in clinical trials
Description
Function cross plots two crossing lines into the patient diagram.
Usage
cross(w, p)
Arguments
w week where the vertical line is plotted.
p number of patients where the horizontal line is plotted.
Details
This function adds a vertical and a horizontal line to an existing patient diagram produced by
function trialCourse or to an existing event diagram produced by function eventCourse.
The lines mark the time point w in weeks at which a required number of patients p has finished
treatment or a required number of first events p has occurred, respectively. The display can be used
to assess the scheduling of interim analyses.
The auxiliary functions trialWeek or eventWeek can be used to derive the week of the trial in
which the required number of patients has finished the treatment or events occurred.
See Also
trialCourse for plots of recruitment and treatment scenarios; trialWeek for deriving the week of
a trial at which a certain number of patients finished treatment; eventCourse for plots of recruitment and event scenarios; eventWeek for deriving the week of a trial at which a certain number of
events occurred.
Examples
x=recruitment(nc=Inf,ns=Inf,cw=4,sw=2,sf=0.3,tb=4,en=400)
y=treatment(r=x,du=26,dr=convertedRate(0.3,52,26))
z=treatment(r=x,du=52,dr=0.3)
trialCourse(r=x,t1=y,t2=z)
trialWeek(t=y,p=100)
cross(w=trialWeek(t=y,p=100),p=100)
event Scheduling interim analyses in clinical trials for time-to-event settings
Description
Function event simulates the events based on a recruitment scenario simulated by function recruitment.
Usage
event(r, er, dr, du)
Arguments
r recruitment scenario calculated with function recruitment.
er event rate during the clinical trial.
dr drop-out rate during the clinical trial.
du duration of the clinical trial in weeks.
Details
event simulates the events based on a given recruitment scenario. The function assumes an exponential distribution for the event probability with a common event rate for all subjects (er). The
drop-out rate may be included. For the probability of a drop-out, event assumes an exponential
distribution with a common rate dr. It is assumed that the event and drop-out times are
independent of each other.
Value
• event returns a list of vectors with the following components:
• events a vector with the (cumulative) number of first events (events before drop-out)
• drops a vector with the (cumulative) number of patients who dropped out of the trial before the
first event
• weeksOfEvent a vector with the corresponding trial week when patients have experienced the
first event (with start of site openings as reference start)
See Also
recruitment for simulating recruitment scenarios; eventCourse for plots of recruitment and event
scenarios;
Examples
x=recruitment(nc=Inf,ns=Inf,cw=4,sw=2,sf=0.3,tb=4,en=400)
y=event(r=x,er=0.12,dr=0.08,du=50)
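The competing-risks logic described above can be sketched in Python. This is an illustration only; treating er and dr as weekly hazard rates is an assumption about the package's parameterization:

```python
import random

def simulate_first_events(enroll_weeks, er, dr, du, seed=1):
    # For each enrolled patient, draw independent exponential times to the
    # first event (rate er) and to drop-out (rate dr); count an event only
    # if it precedes drop-out and still falls within the du-week trial.
    rng = random.Random(seed)
    events = drops = 0
    for week in enroll_weeks:
        t_event = rng.expovariate(er)
        t_drop = rng.expovariate(dr)
        follow_up = du - week  # observation time left for this patient
        if t_event <= t_drop and t_event <= follow_up:
            events += 1
        elif t_drop < t_event and t_drop <= follow_up:
            drops += 1
    return events, drops

counts = simulate_first_events([0, 4, 8, 12] * 100, er=0.12, dr=0.08, du=50)
```

Because the two exponential clocks are independent, each patient contributes to at most one of the two counters, matching the events/drops vectors event returns.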
eventCourse Scheduling interim analyses in clinical trials for time-to-event settings
Description
Function eventCourse plots the results of function recruitment and function event.
Usage
eventCourse(r, e1 = NULL, lp = "topright")
Arguments
r recruitment scenario calculated with function recruitment.
e1 optional. Event simulation from function event.
lp optional. Position of legend, specified by keyword: "bottomright", "bottom",
"bottomleft", "left", "topleft", "top", "topright", "right", or "center".
Details
Function eventCourse produces two plots to display results of enrollment and event simulations.
The first plot displays the cumulative number of centers that have been opened as well as the cumulative number of centers that have been closed, if applicable, per trial week.
The second plot displays the number of patients that have been screened and enrolled per trial week.
If the parameter e1 is not NULL, the number of events and the number of drop-outs before the first
event are also displayed.
See Also
event for simulating the events for a given recruitment scenario; recruitment for simulating recruitment scenarios.
Examples
x=recruitment(nc=Inf,ns=Inf,cw=4,sw=2,sf=0.3,tb=4,en=400)
y=event(r=x,er=0.12,dr=0.08,du=50)
eventCourse(r=x,e1=y)
eventWeek Scheduling interim analyses in clinical trials in a time-to-event setting
Description
Function eventWeek determines the week of the trial in which a certain number p of events occurred.
Usage
eventWeek(t, p)
Arguments
t result of function event.
p number of events for which the week shall be determined.
Details
eventWeek is an auxiliary function required to assess the timing of interim analyses. It derives the
week of the trial in which a certain number of events occurred.
The output is required for function cross, which includes the information into an existing event
diagram.
Value
The week in which the number of events is reached.
See Also
cross for plotting results of function eventWeek into an existing event diagram.
Examples
x=recruitment(nc=Inf,ns=Inf,cw=4,sw=2,sf=0.3,tb=4,en=400)
y=event(r=x,er=0.12,dr=0.08,du=50)
eventCourse(r=x,e1=y)
eventWeek(t=y,p=50)
recruitment Scheduling interim analyses in clinical trials
Description
Function recruitment simulates the recruitment of patients in clinical trials.
Usage
recruitment(nc, ns, cw, sw, sf, tb, en)
Arguments
nc maximum number of centers to be opened or Inf.
ns maximum number of patients to be screened within each center or Inf.
cw number of center openings per week.
sw number of screened patients per week within each center.
sf screening failure rate.
tb time between screening and enrollment/randomization in weeks.
en number of patients to be enrolled.
Details
Function recruitment simulates the recruitment progress for a required number of enrolled patients in clinical trials based on the expected number of centers to be opened per week and the
expected number of patients being recruited per site and week. The function assumes that centers
are being opened at a constant rate per week (cw) and patients per center are screened at a constant
rate per week (sw).
The function can handle recruitment limits by limiting the total number of centers (nc) and/or the
number of patients recruitable per site (ns).
The function discriminates between screening timepoint and enrollment/randomization timepoint
by allowing a screening period (tb) and screen failure rate (sf) to be specified. If both are zero then
patients are directly enrolled at screening.
Function recruitment can handle four different recruitment scenarios.
• Scenario 1. Centers are being opened over the entire trial duration, i.e. no limit of centers
(nc=Inf), and are kept open during the complete trial duration, i.e. no limit of patients per
center (ns=Inf).
• Scenario 2. Only a limited number of centers can be opened (nc= a positive number) and they
are kept open during the complete trial duration (ns=Inf).
• Scenario 3. Centers are being opened over the entire trial duration (nc=Inf) and only have
a limited capacity for patient recruitment (ns= a positive number).
• Scenario 4. Only a limited number of centers can be opened (nc= a positive number) and they
only have a limited capacity for patient recruitment (ns= a positive number).
Under scenario 4 only a limited number of patients can be recruited. The auxiliary function
capacity can be used to derive the maximum number of patients that can be enrolled under this
scenario.
Results of recruitment are required as input for function treatment to derive the time when
treatment of patients is finished.
Value
recruitment returns a list of vectors with the following components:
• weeksOfTrial a vector counting the trial week for recruitment (from 0 (start of site openings)
to the required number of weeks for recruitment)
• openCenters a vector with the (cumulative) number of opened centers per trial week
• closedCenters a vector with the (cumulative) number of closed centers per trial week
• centerWeeks a vector with the (cumulative) number of opened center weeks per trial week
• screenings a vector with the (cumulative) number of screened patients per trial week
• weeksOfEnrollment a vector counting the weeks of enrollment (with start of site openings as
reference start)
• enrollments a vector with the (cumulative) number of enrolled/randomized patients per week
of enrollment
See Also
treatment for simulating the treatment duration for a given recruitment scenario; trialCourse
for plots of recruitment and treatment scenarios; capacity for deriving the maximum number of
patients that can be enrolled under scenario 4;
Examples
x=recruitment(nc=Inf,ns=Inf,cw=4,sw=2,sf=0.3,tb=4,en=400)
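For intuition, the constant-rate mechanics of scenarios 1 and 2 can be sketched in Python. This is a simplified re-implementation under stated assumptions — no per-center patient cap ns, deterministic weekly rates — and not the package's code:

```python
import math

def weeks_to_enroll(en, cw, sw, sf, tb, nc=math.inf):
    # Centers open at a constant cw per week, capped at nc; each open
    # center screens sw patients per week; a fraction (1 - sf) of the
    # screened patients enrolls tb weeks after screening.
    centers = 0.0
    enrolled = 0.0
    week = 0
    while enrolled < en:
        week += 1
        centers = min(nc, centers + cw)
        enrolled += centers * sw * (1 - sf)
    return week + tb  # enrollment lags screening by tb weeks

week_400 = weeks_to_enroll(400, cw=4, sw=2, sf=0.3, tb=4)  # mirrors the example call
```

In scenario 1 the open-center count grows linearly, so cumulative enrollment grows quadratically; once nc caps the center count (scenario 2), growth becomes linear, which is why a center limit mainly stretches the tail of the recruitment curve.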
treatment Scheduling interim analyses in clinical trials
Description
Function treatment simulates the treatment phase based on a recruitment scenario simulated by
function recruitment.
Usage
treatment(r, du, dr)
Arguments
r recruitment scenario calculated with function recruitment.
du duration of treatment phase in weeks.
dr drop-out rate during the treatment phase.
Details
treatment simulates the treatment period based on a given recruitment scenario. The function assumes a common fixed treatment length for all subjects (du). The drop-out rate may be included via dr. If drop-out rates are available only for different time periods, e.g. annual rates, function convertedRate can be used to convert the rate to a drop-out rate for the respective treatment duration.
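The conversion step described above can be sketched with the package's own example calls (the convertedRate(rate, from, to) argument order is taken from the Examples in this manual):

```r
# Convert an annual (52-week) drop-out rate of 30% to the rate expected
# over a 26-week treatment phase, then use it when simulating treatment.
dr26 <- convertedRate(0.3, 52, 26)
x <- recruitment(nc=Inf, ns=Inf, cw=4, sw=2, sf=0.3, tb=4, en=400)
y <- treatment(r=x, du=26, dr=dr26)
```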
Value
treatment returns a list of vectors with the following components:
• patients a vector with the (cumulative) number of patients who finished treatment
• weeksOfTrial a vector with the corresponding trial week when patients have finished treatment (with start of site openings as reference start)
See Also
recruitment for simulating recruitment scenarios; trialCourse for plots of recruitment and treatment scenarios; convertedRate for converting drop-out rates for different time periods.
Examples
x=recruitment(nc=Inf,ns=Inf,cw=4,sw=2,sf=0.3,tb=4,en=400)
y=treatment(r=x,du=26,dr=0.2)
z=treatment(r=x,du=52,dr=0.2)
x=recruitment(nc=Inf,ns=Inf,cw=4,sw=2,sf=0.3,tb=4,en=400)
y=treatment(r=x,du=26,dr=convertedRate(0.3,52,26))
z=treatment(r=x,du=52,dr=0.3)
trialCourse Scheduling interim analyses in clinical trials
Description
Function trialCourse plots the results of function recruitment and function treatment.
Usage
trialCourse(r, t1 = NULL, t2 = NULL, lp = "topright")
Arguments
r recruitment scenario calculated with function recruitment.
t1 optional. Treatment phase simulation from function treatment.
t2 optional. Treatment phase simulation from function treatment.
lp optional. Position of legend, specified by keyword: "bottomright", "bottom",
"bottomleft", "left", "topleft", "top", "topright", "right", or "center".
Details
Function trialCourse produces two plots to display results of enrollment and treatment simulations.
The first plot displays the cumulative number of centers that have been opened as well as the cumulative number of centers that have been closed, if applicable, per trial week.
The second plot displays the number of patients that have been screened and enrolled per trial week. If at least one of the parameters t1 and t2 is not NULL, then the number of patients who finished treatment is also displayed.
It is possible to include two different treatment scenarios into one plot. This option may for example
be used to assess the timing for specific interim analyses, i.e. t1 is used to assess when the required
number of patients for the interim analysis finished treatment while t2 is used to assess when the
required number of patients for the final analysis finished treatment.
See Also
treatment for simulating the treatment duration for a given recruitment scenario; recruitment for
simulating recruitment scenarios.
Examples
x=recruitment(nc=Inf,ns=Inf,cw=4,sw=2,sf=0.3,tb=4,en=400)
y=treatment(r=x,du=26,dr=convertedRate(0.3,52,26))
z=treatment(r=x,du=52,dr=0.3)
trialCourse(r=x,t1=y,t2=z)
trialWeek Scheduling interim analyses in clinical trials
Description
Function trialWeek determines the week of the trial in which a certain number p of patients finished treatment.
Usage
trialWeek(t, p)
Arguments
t result of function treatment.
p number of patients for which the week shall be determined.
Details
trialWeek is an auxiliary function required to assess the timing of interim analyses. It derives the week of the trial in which a certain number of patients finished treatment.
The output is required for function cross, which adds the information to an existing Patients diagram.
Value
The week in which the number of patients is reached.
See Also
cross for plotting results of function trialWeek into an existing Patients diagram.
Examples
x=recruitment(nc=Inf,ns=Inf,cw=4,sw=2,sf=0.3,tb=4,en=400)
y=treatment(r=x,du=26,dr=convertedRate(0.3,52,26))
z=treatment(r=x,du=52,dr=0.3)
trialCourse(r=x,t1=y,t2=z)
trialWeek(t=y,p=100)
Package ‘WorldFlora’
May 26, 2023
Type Package
Title Standardize Plant Names According to World Flora Online
Taxonomic Backbone
Version 1.13-2
Date 2023-5-26
Author <NAME> [cre, aut] (<https://orcid.org/0000-0002-7672-0712>)
Maintainer <NAME> <<EMAIL>>
Description World Flora Online is an online flora of all known plants, available from <http://www.worldfloraonline.org/>. Methods are provided for matching a list of plant names (scientific names, taxonomic names, botanical names) against a static copy of the World Flora Online Taxonomic Backbone data that can be downloaded from the World Flora Online website. The World Flora Online Taxonomic Backbone is an updated version of The Plant List (<http://www.theplantlist.org/>), a working list of plant names that has become static since 2013.
License GPL-3
Depends R (>= 3.5.0)
Suggests data.table, utils, stringr, dplyr, fuzzyjoin, stringdist
NeedsCompilation no
Repository CRAN
Date/Publication 2023-05-26 07:20:02 UTC
R topics documented:
new.backbone
vascular.families
WFO.acceptable.match
WFO.example
WFO.match
WFO.match.fuzzyjoin
WFO.prepare
WFO.remember
new.backbone Develop a User-created Taxonomic Backbone data set
Description
Instead of using the taxonomic backbone data set from World Flora Online, it is possible to use the matching functions of WorldFlora with an alternative taxonomic backbone. The function creates new variables that correspond to key variables in the World Flora Online backbone so that matching functions WFO.match and WFO.one can be applied.
Usage
new.backbone(x,
taxonID = "taxonID", scientificName = "scientificName",
scientificNameAuthorship = "scientificNameAuthorship",
acceptedNameUsageID = NULL, taxonomicStatus = NULL
)
Arguments
x data.frame with the variables.
taxonID name of the variable with the identification
scientificName name of the variable with the full taxon name
scientificNameAuthorship
name of the variable with the naming authors
acceptedNameUsageID
ID of the record with the current (accepted) name. Should correspond to an ID in the ’taxonID’ column. In case the taxonomic name is current, then this field should be left blank. This field is used by function WFO.match to find the accepted name of a species.
taxonomicStatus
Variable that indicates whether the record is for a current name or a synonym.
This variable is used by function WFO.one to discriminate situations where best
matches include matches with current names and synonyms.
Details
This function allows a user to create a new taxonomic backbone data set that is understood by
WFO.match and WFO.one.
Alternative examples with the Mammal Diversity Database (https://www.mammaldiversity.org/) and the World Checklist of Vascular Plants (https://powo.science.kew.org/about-wcvp) are provided in the Kindt 2021a,b RPubs.
Value
The function returns a data.table that can be understood by WFO.match and WFO.one for standardizing taxonomic names.
Author(s)
<NAME> (World Agroforestry)
References
Kindt, R. 2021a. Standardizing mammal species names with the Mammal Species Database via exact and fuzzy matching functions from the WorldFlora package. https://rpubs.com/Roeland-KINDT
Kindt, R. 2021b. Standardizing GlobalTreeSearch tree species names with World Flora Online and the World Checklist of Vascular Plants. https://rpubs.com/Roeland-KINDT
See Also
WFO.match, WFO.one
Examples
## Not run:
# load the World Flora Online taxonomic backbone
WFO.remember()
# get a list of Sapotaceae species
Sapotaceae <- WFO.data[WFO.data$family == "Sapotaceae",]
Sapotaceae <- Sapotaceae[Sapotaceae$taxonRank == "SPECIES", ]
Sapotaceae <- Sapotaceae[Sapotaceae$taxonomicStatus == "Accepted", ]
Sapotaceae <- Sapotaceae[, c("scientificName", "scientificNameAuthorship")]
Sapotaceae <- data.frame(ID = c(1:nrow(Sapotaceae)), Sapotaceae)
names(Sapotaceae)[2:3] <- c("species.name", "author")
head(Sapotaceae)
# create a new backbone from the GlobalTreeSearch database,
# after copying locally from https://tools.bgci.org/global_tree_search.php
GTS.dir <- "E://Roeland//R///World Flora Online//2021"
GTS <- read.csv(paste0(GTS.dir, "//global_tree_search.csv"))
GTS <- GTS[, 1:2]
GTS <- data.frame(GTS.ID = paste0("GTS-", c(1:nrow(GTS))), GTS)
nrow(GTS)
# create the new backbone
GTS.data <- new.backbone(GTS,
taxonID="GTS.ID",
scientificName="TaxonName",
scientificNameAuthorship="Author")
head(GTS.data)
# Check and standardize Sapotaceae
Sapotaceae.match <- WFO.one(WFO.match(Sapotaceae,
WFO.data = GTS.data,
spec.name = "species.name",
Authorship = "author"))
nrow(Sapotaceae.match[Sapotaceae.match$Fuzzy == FALSE, ] )
nrow(Sapotaceae.match[Sapotaceae.match$Fuzzy == TRUE &
Sapotaceae.match$Fuzzy.dist < 4, ] )
Sapotaceae.match[Sapotaceae.match$Fuzzy == TRUE &
Sapotaceae.match$Fuzzy.dist < 4,
c("ID", "species.name", "Fuzzy.dist", "scientificName")]
## End(Not run)
vascular.families Orders and Higher Level Classifications of Vascular Plants
Description
This data set lists orders for families of vascular plants (angiosperms, gymnosperms and pteridophytes). For angiosperms, information from orders and higher levels of classification corresponds to the fourth update of the Angiosperm Phylogeny Group (APG IV, doi:10.1111/boj.12385). Higher levels of classification correspond to names of nodes of the consensus tree (Figure 1 in doi:10.1111/boj.12385). Orders for gymnosperms and pteridophytes were obtained from http://www.mobot.org/MOBOT/research/APweb/.
Usage
data(vascular.families)
Format
A data frame with 476 observations on the following 10 variables.
Group Group.
Family.ID Unique ID for each family. For angiosperms, these correspond to APG IV.
Family Name of the plant family.
Family.taxonID taxonID retrieved from World Flora Online.
Order Name of the plant order.
Order.taxonID taxonID retrieved from World Flora Online.
Node.1 Name of the node in the consensus tree.
Node.2 Name of the node in the consensus tree, with Node.2 nested within Node.1.
Node.3 Name of the node in the consensus tree, with Node.3 nested within Node.2.
Node.4 Name of the node in the consensus tree, with Node.4 nested within Node.3.
References
The Angiosperm Phylogeny Group, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, An update of the
Angiosperm Phylogeny Group classification for the orders and families of flowering plants: APG
IV, Botanical Journal of the Linnean Society 181: 1-20. doi:10.1111/boj.12385
Examples
data(vascular.families)
WFO.acceptable.match Check for fuzzy matches that can be acceptable based on gender notations
Description
The function checks whether submitted and matched names only differ by the endings -um, -us or -a. An extra check is done to accept differences that result from having ’ii’ instead of ’i’ in the submitted and matched name. An optional check ignores differences in vowels.
Usage
WFO.acceptable.match(x, spec.name="spec.name",
no.vowels=FALSE)
Arguments
x Output for WFO.match, WFO.match.fuzzyjoin or WFO.match.one.
spec.name Name of taxon submitted for matching.
no.vowels Accept results if only vowels differ between submitted and matched name.
Details
The function was especially developed to check for changes in gender notations.
Value
The function returns a logical vector that indicates whether names could be acceptable.
Author(s)
<NAME> (World Agroforestry)
Examples
data(WFO.example)
spec.test <- data.frame(spec.name=c("Faidherbia albida", "Acacia albida",
"Faidherbia albidum", "Faidherbia albidus",
"Faidherbia albiida",
"Prunus africanus", "Prunos africanea",
"Prunus afrocaneus", "Prunus afrocaneos"))
match1 <- WFO.match.fuzzyjoin(spec.data=spec.test, WFO.data=WFO.example,
fuzzydist.max = 6)
match1[, c("spec.name", "scientificName")]
# check for gender differences (and ii - i)
WFO.acceptable.match(match1)
# ignore differences in vowels
WFO.acceptable.match(match1, no.vowels=TRUE)
## Not run:
accepted.cases <- WFO.acceptable.match(match1, no.vowels=TRUE)
match1.accepted <- match1[accepted.cases == TRUE, ]
match1.notaccepted <- match1[accepted.cases == FALSE, ]
## End(Not run)
WFO.example World Flora Online (WFO) taxonomic backbone example data set
Description
This data set is a subset of the World Flora Online taxonomic backbone that allows running the
first set of examples. In practical applications, users should first download a static copy of the
Taxonomic Backbone data from http://www.worldfloraonline.org/downloadData .
Usage
data(WFO.example)
Source
World Flora Online. An Online Flora of All Known Plants. http://www.worldfloraonline.org
Examples
data(WFO.example)
WFO.match Standardize plant names according to World Flora Online taxonomic
backbone
Description
This package checks a list of species against the World Flora Online (WFO) taxonomic backbone.
The user needs to first download a static copy of the Taxonomic Backbone data from http://www.
worldfloraonline.org/downloadData.
Usage
WFO.match(spec.data = NULL, WFO.file = NULL, WFO.data = NULL,
no.dates = TRUE,
spec.name = "spec.name", Genus = "Genus", Species = "Species",
Infraspecific.rank = "Infraspecific.rank", Infraspecific = "Infraspecific",
Authorship = "Authorship", First.dist = FALSE,
acceptedNameUsageID.match = TRUE,
Fuzzy = 0.1, Fuzzy.force = FALSE, Fuzzy.max = 250, Fuzzy.min = TRUE,
Fuzzy.shortest = FALSE, Fuzzy.within = FALSE,
Fuzzy.two = TRUE, Fuzzy.one = TRUE,
squish = TRUE,
spec.name.tolower = FALSE, spec.name.nonumber = TRUE, spec.name.nobrackets = TRUE,
exclude.infraspecific = FALSE,
infraspecific.excluded = c("cultivar.", "f.", "sect.", "subf.", "subg.",
"subsp.", "subvar.", "var", "var.", "[infraspec.]", "fo.", "forma",
"nothosubsp.", "nothovar.", "sect."),
spec.name.sub = TRUE,
sub.pattern=c(" sp[.] A", " sp[.] B", " sp[.] C", " sp[.]", " spp[.]", " pl[.]",
" indet[.]", " ind[.]", " gen[.]", " g[.]", " fam[.]", " nov[.]", " prox[.]",
" cf[.]", " aff[.]", " s[.]s[.]", " s[.]l[.]",
" p[.]p[.]", " p[.] p[.]", "[?]", " inc[.]", " stet[.]", "Ca[.]",
"nom[.] cons[.]", "nom[.] dub[.]", " nom[.] err[.]", " nom[.] illeg[.]",
" nom[.] inval[.]", " nom[.] nov[.]", " nom[.] nud[.]", " nom[.] obl[.]",
" nom[.] prot[.]", " nom[.] rej[.]", " nom[.] supp[.]", " sensu auct[.]"),
verbose = TRUE, counter = 1000)
WFO.url(WFO.result = NULL, browse = FALSE, browse.rows = c(1:1), ...)
WFO.one(WFO.result = NULL, priority = "Accepted",
spec.name = NULL, Auth.dist = NULL, First.dist = NULL,
verbose = TRUE, counter = 1000)
WFO.browse(taxon, WFO.file = NULL, WFO.data = NULL,
accepted.only = FALSE, acceptedNameUsageID.match = TRUE, ...)
WFO.synonyms(taxon, WFO.file = NULL, WFO.data = NULL, ...)
WFO.family(taxon, WFO.file = NULL, WFO.data = NULL, ...)
Arguments
spec.data A data.frame containing variables with species names. In case that a character
vector is provided, then this vector will be converted to a data.frame
WFO.file File name of the static copy of the Taxonomic Backbone. If not NULL, then data
will be reloaded from this file.
WFO.data Data set with the static copy of the Taxonomic Backbone. Ignored if WFO.file
is not NULL.
no.dates Speeding up the loading of the WFO.data by not loading fields of ’created’ and
’modified’.
spec.name Name of the column with taxonomic names. In case that a spec.name is provided, then separate genus and species names will be ignored. For function WFO.one, giving the name of this column results in copying a submitted but unmatched plant name into the scientificName of the results.
Genus Name of the column with the genus names.
Species Name of the column with the species names.
Infraspecific.rank
Name of the column with the infraspecific rank (such as "subsp.", "var." or "cul-
tivar.").
Infraspecific Name of the column with the infraspecific names.
Authorship Name of the column with the naming authorities.
First.dist If TRUE, then calculate the fuzzy distance between the first words of the submitted and matched names (these are typically the genus names).
acceptedNameUsageID.match
If TRUE, obtain the accepted name and other details from the earlier acceptedNameUsageID.
Fuzzy If larger than 0, then attempt fuzzy matching in case an identical taxonomic name is not found in the World Flora Online. This argument will be used as argument max.distance in the internally called agrep. Note that fuzzy matching is only possible for the spec.name.
Fuzzy.force If TRUE, always use the fuzzy matching algorithm, even when the spec.name
was matched exactly.
Fuzzy.max Maximum number of fuzzy matches.
Fuzzy.min If TRUE, limit the matching of names to those with the smallest Levenshtein
distance, calculated via adist.
Fuzzy.shortest If TRUE, limit the matching of names to those with the most similar length of
characters (this feature is expected to eliminate matches at infraspecific levels,
see examples).
Fuzzy.within If TRUE, limit the matching of names to those that contain exactly the submitted
plant name (this feature is expected to be useful when submitting plant names
that only contain a subset of the first characters of the species name, in order to
check for best matches manually afterwards).
Fuzzy.two If TRUE, in case that there were no fuzzy matches, limit the terms to be matched
to the first two (these are expected to be genus and species names).
Fuzzy.one If TRUE, in case that there were no fuzzy matches, limit the terms to be matched
to the first one (expected to be the genus name).
squish If TRUE, remove repeated whitespace and white space from the start and end of
the submitted full name via str_squish.
spec.name.tolower
If TRUE, then convert all characters of the spec.name to lower case via tolower.
spec.name.nonumber
If TRUE, then submitted spec.name that contain numbers will be interpreted as
genera, only matching the first word.
spec.name.nobrackets
If TRUE, then sections of the submitted spec.name after ’(’ will be removed. Note that this will also remove sections after ’)’, such as authorities for plant names that are in a separate column of WFO.
exclude.infraspecific
If TRUE, then exclude records that contain the infraspecific levels defined by
infraspecific.excluded.
infraspecific.excluded
Infraspecific levels (available from column ’verbatimTaxonRank’) excluded in
the results. Note that levels are excluded both in direct matches and matches
with the accepted name.
spec.name.sub If TRUE, then delete sections of the spec.name that match the sub.pattern.
sub.pattern Sections of the spec.name to be deleted
verbose Give details on the fuzzy matching process.
counter Progress on the matching process is reported by multiples of this counter.
WFO.result Result obtained via WFO.match.
browse If TRUE, then browse urls specified by browse.rows.
browse.rows Indices of row with the urls to be browsed.
priority Method of selecting the 1-to-1 matches. Option Accepted first limits candidates to accepted names, with a possible second step of eliminating accepted names that are synonyms. Option Synonym first limits candidates to those that are not synonyms, with a possible second step of eliminating names that are not accepted. When the number of matches is larger than one after these steps, a third algorithm picks the candidate with the smallest taxonID.
Auth.dist In case that the name of the variable with the Levenshtein distance between the
authorship names is provided, then the algorithm first prioritizes records with
the best match between the submitted and matched author names.
taxon Character string with the name of the taxon for which information will be given (for families, different genera; for genera, different species; for species, infraspecific levels).
accepted.only If TRUE, then only provide taxa with accepted names.
... Other arguments for browseURL (WFO.url) or WFO.match (WFO.browse).
Details
The principal function (WFO.match) matches plant names. Columns retrieved from the World Flora
Online are added to the provided input data.frame. In case that there are multiple matches, then
rows from the input data.frame are repeated.
Column ’Unique’ shows whether there was a unique match (or no match) in the WFO.
Column ’Matched’ shows whether there was a match in the WFO.
Column ’Fuzzy’ shows whether matching was done by the fuzzy method.
Column ’Fuzzy.dist’ gives the Levenshtein distance, calculated via adist, between submitted and matched plant names.
Column ’Auth.dist’ gives the Levenshtein distance, calculated via adist, between submitted and matched authorship names, if the former were provided.
Column ’Subseq’ gives different numbers for different matches for the same plant name.
Column ’Hybrid’ shows whether there was a hybrid character in the scientificName.
Column ’New.accepted’ shows whether the species details correspond to the current accepted name.
Column ’Old.status’ gives the taxonomic status of the first match with the non-blank acceptedNameUsageID.
Column ’Old.ID’ gives the ID of the first match with the non-blank acceptedNameUsageID.
Column ’Old.name’ gives the name of the first match with the non-blank acceptedNameUsageID.
The function was inspired by the Taxonstand package that matches plant names against The Plant List. Note that The Plant List has been static since 2013, but was used as the starting point for the Taxonomic Backbone of the World Flora Online.
Function WFO.one finds one unique matching name for each submitted name. Via priority = "Accepted", it first limits candidates to accepted names, with a possible second step of eliminating accepted names that are synonyms. Via priority = "Synonym", it first limits candidates to those that are not synonyms, with a possible second step of eliminating names that are not accepted. When the number of matches is larger than one after these steps, a third algorithm picks the candidate with the smallest taxonID. When a spec.name is given to WFO.one, the original submitted name is inserted for the scientificName.
When the user specifies the column with the Auth.dist, documenting the Levenshtein distance between the submitted and matched authorities, then WFO.one first prioritizes records with the best match between authorities.
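A minimal sketch of the two priority settings of WFO.one, reusing calls from the Examples of this manual:

```r
data(WFO.example)
m <- WFO.match(spec.data=data.frame(spec.name="Acacia albida"),
               WFO.data=WFO.example)
# Default: prefer accepted names among the candidate matches
WFO.one(m, priority="Accepted")
# Alternative: prefer candidates that are not synonyms
WFO.one(m, priority="Synonym")
```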
Function WFO.browse lists all the genera for a family, all species for a genus or all infraspecific
levels for a species.
Function WFO.synonyms gives all records with the acceptedNameUsageID equal to the matched
accepted species shown in the first row.
Function WFO.family provides information on the order of vascular plants, based on information
available from vascular.families. Based on an internal list of bryophyte families, when the submitted
plant name is a bryophyte, the function returns ’bryophyte’ instead.
Value
The main function returns a data.frame with the matched species details from the WFO.
Author(s)
<NAME> (World Agroforestry, CIFOR-ICRAF)
References
World Flora Online. An Online Flora of All Known Plants. http://www.worldfloraonline.org
Sigovini M, <NAME>, Tagliapietra. 2016. Open Nomenclature in the biodiversity era. Methods in
Ecology and Evolution 7: 1217-1225.
Kindt, R. 2020. WorldFlora: An R package for exact and fuzzy matching of plant names against
the World Flora Online taxonomic backbone data. Applications in Plant Sciences 8(9): e11388
See Also
WFO.match.fuzzyjoin
Examples
data(WFO.example)
spec.test <- data.frame(spec.name=c("Faidherbia albida", "Acacia albida",
"Omalanthus populneus", "Pygeum afric"))
WFO.match(spec.data=spec.test, WFO.data=WFO.example, counter=1, verbose=TRUE)
# Also calculate the Levenshtein distance for the genus
WFO.match(spec.data=spec.test, WFO.data=WFO.example, First.dist=TRUE,
counter=1, verbose=TRUE)
# Show all the fuzzy matches, which included those at infraspecific level
e1 <- WFO.match(spec.data=spec.test, WFO.data=WFO.example, counter=1,
Fuzzy.min=FALSE, Fuzzy.shortest=FALSE, verbose=TRUE)
e1
# Use function WFO.one for a 1-to-1 match between submitted and matched names
WFO.one(e1)
# Hybrid species
WFO.match("Arabis divaricarpa", WFO.data=WFO.example)
WFO.match("Arabis x divaricarpa", WFO.data=WFO.example)
# Convert capitals to lower case
WFO.match("FAIDHERBIA ALBIDA", WFO.data=WFO.example, spec.name.tolower=TRUE)
# Remove sections of plant names that are equal to ' sp.' or ' indet. '
WFO.match("Prunus sp.", WFO.data=WFO.example, spec.name.sub=TRUE)
# Get urls, but do not open any
e2 <- WFO.match(spec.data=spec.test, WFO.data=WFO.example, counter=1, verbose=TRUE)
WFO.url(e2, browse=FALSE, browse.rows=c(1:nrow(e2)))
# Include input species names where no matches were found
# This happens when the name with original species names is provided to WFO.one
x1 <- WFO.match("World agroforestry", WFO.data=WFO.example)
WFO.one(x1, spec.name="spec.name")
## Not run:
# Cross-check with Taxonstand results
library(Taxonstand)
data(bryophytes)
# Give the file with the static copy of the Taxonomic Backbone data ('classification.txt')
# that was downloaded from http://www.worldfloraonline.org/downloadData.
# Possibly first use unzip(file.choose()) for the downloaded WFO_Backbone.zip
WFO.file.RK <- file.choose()
# check species name
w1 <- WFO.match(bryophytes[1:20, ], WFO.file=WFO.file.RK, spec.name="Full.name", counter=1)
w1
# check species name from list of names
w1 <- WFO.match(bryophytes$Full.name[1:20], WFO.file=WFO.file.RK, counter=1)
# re-check species names obtained via Taxonstand
# note that Taxonstand did not match some infraspecific names ('Higher.level')
r1 <- Taxonstand::TPL(bryophytes$Full.name[1:20], corr = TRUE)
w2 <- WFO.match(r1, WFO.file=WFO.file.RK, Genus="New.Genus", Species="New.Species",
Infraspecific.rank="New.Infraspecific.rank", Infraspecific="New.Infraspecific", counter=1)
w2
# only check genus and species
# specify different names for infraspecific columns as default to Taxonstand
w3 <- WFO.match(r1, WFO.file=WFO.file.RK, Genus="New.Genus", Species="New.Species",
Infraspecific.rank="none", Infraspecific="none", counter=1)
# note that the method above also retrieved infraspecific levels
# to only retrieve at the species level, match infraspecific levels with an empty column
r1$empty <- rep("", nrow(r1))
w4 <- WFO.match(r1, WFO.file=WFO.file.RK, Genus="New.Genus", Species="New.Species",
Infraspecific.rank="empty", Infraspecific="empty", counter=1)
# as an alternative to the method above, exclude all documented infraspecific levels
# from the results
w5 <- WFO.match(r1, WFO.file=WFO.file.RK, Genus="New.Genus", Species="New.Species",
exclude.infraspecific=TRUE, counter=1)
# save results to file
# utils::write.table(w4, quote=F, sep="\t", row.names=F, append=FALSE)
# limit the fuzzy matches to those that contain a shortened version of a species name
w6 <- WFO.match("Acacia caes", WFO.file=WFO.file.RK, Fuzzy=0.01, Fuzzy.within=TRUE, verbose=TRUE)
# show all the matches for a genus
spec.test1 <- data.frame(Genus=c("Casimiroa"))
w8 <- WFO.match(spec.test1, WFO.file=WFO.file.RK, exclude.infraspecific=TRUE, verbose=TRUE)
# show all listings at a next hierarchical level
WFO.data1 <- data.table::fread(WFO.file.RK, encoding="UTF-8")
WFO.browse("Pinaceae", WFO.data=WFO.data1)
WFO.browse("Pinaceae", WFO.data=WFO.data1, accepted.only=T)
WFO.browse("Tsuga", WFO.data=WFO.data1)
WFO.browse("Tsuga", WFO.data=WFO.data1, accepted.only=T)
WFO.browse("Olea europaea", WFO.data=WFO.data1)
WFO.browse("Olea europaea", WFO.data=WFO.data1, accepted.only=T)
# browsing only works at family, genus and species levels
# for orders, however, information is given from vascular.families
WFO.browse("Polypodiales", WFO.data=WFO.data1)
# submitting no name results in a list of all families
WFO.browse(, WFO.data=WFO.data1)
# give synonyms
WFO.synonyms("Olea europaea", WFO.data=WFO.data1)
# give order and other higher levels from family
WFO.family("Olea europaea", WFO.data=WFO.data1)
## End(Not run)
WFO.match.fuzzyjoin Standardize plant names according to World Flora Online taxonomic
backbone
Description
An alternative and typically faster method of matching records than WFO.match that allows for
different methods of calculating the fuzzy distance via stringdist.
Usage
WFO.match.fuzzyjoin(spec.data = NULL, WFO.file = NULL, WFO.data = NULL,
no.dates = TRUE,
spec.name = "spec.name",
Authorship = "Authorship",
stringdist.method = "lv", fuzzydist.max = 4,
Fuzzy.min = TRUE,
acceptedNameUsageID.match = TRUE,
squish = TRUE,
spec.name.tolower = FALSE, spec.name.nonumber = TRUE, spec.name.nobrackets = TRUE,
spec.name.sub = TRUE,
sub.pattern=c(" sp[.] A", " sp[.] B", " sp[.] C", " sp[.]", " spp[.]", " pl[.]",
" indet[.]", " ind[.]", " gen[.]", " g[.]", " fam[.]", " nov[.]", " prox[.]",
" cf[.]", " aff[.]", " s[.]s[.]", " s[.]l[.]",
" p[.]p[.]", " p[.] p[.]", "[?]", " inc[.]", " stet[.]", "Ca[.]",
"nom[.] cons[.]", "nom[.] dub[.]", " nom[.] err[.]", " nom[.] illeg[.]",
" nom[.] inval[.]", " nom[.] nov[.]", " nom[.] nud[.]", " nom[.] obl[.]",
" nom[.] prot[.]", " nom[.] rej[.]", " nom[.] supp[.]", " sensu auct[.]"))
Arguments
spec.data A data.frame containing variables with species names. In case that a character
vector is provided, then this vector will be converted to a data.frame
WFO.file File name of the static copy of the Taxonomic Backbone. If not NULL, then data
will be reloaded from this file.
WFO.data Data set with the static copy of the Taxonomic Backbone. Ignored if WFO.file
is not NULL.
no.dates Speeding up the loading of the WFO.data by not loading fields of ’created’ and
’modified’.
spec.name Name of the column with taxonomic names.
Authorship Name of the column with the naming authorities.
stringdist.method
Method used to calculate the fuzzy distance in the internally called stringdist.
fuzzydist.max Maximum distance used for joining as in stringdist_join.
Fuzzy.min Limit the results of fuzzy matching to those with the smallest distance.
acceptedNameUsageID.match
If TRUE, obtain the accepted name and others details from the earlier accepted-
NameUsageID.
squish If TRUE, remove repeated whitespace and white space from the start and end of
the submitted full name via str_squish.
spec.name.tolower
If TRUE, then convert all characters of the spec.name to lower case via tolower.
spec.name.nonumber
If TRUE, then submitted spec.name that contain numbers will be interpreted as
genera, only matching the first word.
spec.name.nobrackets
If TRUE, then sections of the submitted spec.name after ’(’ will be removed. Note that this will also remove sections after ’)’, such as authorities for plant names that are in a separate column of WFO.
spec.name.sub If TRUE, then delete sections of the spec.name that match the sub.pattern.
sub.pattern Sections of the spec.name to be deleted
Details
This function matches plant names by using the stringdist_left_join function internally. The results are provided in a similar format to those from WFO.match; therefore the WFO.one function can be used in a next step of the analysis.
For large data sets the function may fail due to memory limits. A solution is to analyse different
subsets of large data, as for example shown by Kindt (2023).
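The chunked workflow mentioned above can be sketched as follows (spec.large and WFO.data are assumed to be already loaded; the chunk size of 1000 is an arbitrary choice, not a package recommendation):

```r
# Match a large species list in chunks to keep memory use bounded,
# then recombine the per-chunk results into one data.frame.
chunks <- split(spec.large, ceiling(seq_len(nrow(spec.large)) / 1000))
result <- do.call(rbind, lapply(chunks, function(ch)
  WFO.match.fuzzyjoin(spec.data=ch, WFO.data=WFO.data)))
```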
Column ’Unique’ shows whether there was a unique match (or no match) in the WFO.
Column ’Matched’ shows whether there was a match in the WFO.
Column ’Fuzzy’ shows whether matching was done by the fuzzy method.
Column ’Fuzzy.dist’ gives the fuzzy distance calculated between submitted and matched plant
names, calculated internally with stringdist_left_join.
Column ’Auth.dist’ gives the Levenshtein distance calculated between submitted and matched authorship names, if the former were provided. This distance is calculated in the same way as for the WFO.match function, via adist.
Column ’Subseq’ gives different numbers for different matches for the same plant name.
Column ’Hybrid’ shows whether there was a hybrid character in the scientificName.
Column ’New.accepted’ shows whether the species details correspond to the current accepted name.
Column ’Old.status’ gives the taxonomic status of the first match with the non-blank acceptedNameUsageID.
Column ’Old.ID’ gives the ID of the first match with the non-blank acceptedNameUsageID.
Column ’Old.name’ gives the name of the first match with the non-blank acceptedNameUsageID.
Value
The main function returns a data.frame with the matched species details from the WFO.
Author(s)
<NAME> (World Agroforestry, CIFOR-ICRAF)
References
World Flora Online. An Online Flora of All Known Plants. http://www.worldfloraonline.org
<NAME>, <NAME>, Tagliapietra. 2016. Open Nomenclature in the biodiversity era. Methods in
Ecology and Evolution 7: 1217-1225.
<NAME>. 2020. WorldFlora: An R package for exact and fuzzy matching of plant names against
the World Flora Online taxonomic backbone data. Applications in Plant Sciences 8(9): e11388
Kindt, R. 2023. Standardizing tree species names of GlobalTreeSearch with WorldFlora while
testing the faster matching function of WFO.match.fuzzyjoin. https://rpubs.com/Roeland-KINDT/996500
See Also
WFO.match
Examples
## Not run:
data(WFO.example)
library(fuzzyjoin)
spec.test <- data.frame(spec.name=c("Faidherbia albida", "Acacia albida",
"Faidherbia albiad",
"Omalanthus populneus", "Pygeum afric"))
WFO.match.fuzzyjoin(spec.data=spec.test, WFO.data=WFO.example)
# Using the Damerau-Levenshtein distance
WFO.match.fuzzyjoin(spec.data=spec.test, WFO.data=WFO.example,
stringdist.method="dl")
## End(Not run)
WFO.prepare Prepare a data set for analysis with WFO.match
Description
The function attempts to split a list of species names with naming authorities into separate fields
for botanical names and authorities.
Usage
WFO.prepare(spec.data = NULL, spec.full="spec.full",
squish = TRUE, spec.name.nonumber = TRUE,
spec.name.sub = TRUE,
sub.pattern = c(" sp[.] A", " sp[.] B", " sp[.] C", " sp[.]", " spp[.]", " pl[.]",
" indet[.]", " ind[.]", " gen[.]", " g[.]", " fam[.]", " nov[.]", " prox[.]",
" cf[.]", " aff[.]", " s[.]s[.]", " s[.]l[.]",
" p[.]p[.]", " p[.] p[.]", "[?]", " inc[.]", " stet[.]", "Ca[.]",
"nom[.] cons[.]", "nom[.] dub[.]", " nom[.] err[.]", " nom[.] illeg[.]",
" nom[.] inval[.]", " nom[.] nov[.]", " nom[.] nud[.]", " nom[.] obl[.]",
" nom[.] prot[.]", " nom[.] rej[.]", " nom[.] supp[.]", " sensu auct[.]"),
genus.2.flag = TRUE, species.2.flag = TRUE,
punctuation.flag = TRUE, pointless.flag = TRUE,
trinomial = c("cultivar.", "f.", "sect.", "subf.", "subg.",
"subsp.", "subvar.", "var.",
"CULTIVAR.", "SECT.", "SUBF.", "SUBG.", "SUBSP.", "SUBVAR.", "VAR."),
verbose = TRUE, counter = 1000)
Arguments
spec.data A data.frame containing variables with species names. If a character vector is
provided, it will be converted to a data.frame.
spec.full Name of the column with full taxonomic names.
squish If TRUE, remove repeated whitespace and white space from the start and end of
the submitted full name via str_squish.
spec.name.nonumber
If TRUE, then submitted spec.full that contain numbers will be interpreted as
genera, only matching the first word.
spec.name.sub If TRUE, then delete sections of the spec.full that match the sub.pattern.
sub.pattern Sections of the spec.full to be deleted
genus.2.flag Flag first part of the names with only 2 characters.
species.2.flag Flag second part of the names with only 2 characters.
punctuation.flag
Flag if the retained plant name has punctuation characters.
pointless.flag Flag if the retained plant name has sub.pattern without the point.
trinomial Descriptors for trinomial names. In case a trinomial name is expected, the
species name will be obtained from the first two words and the two words
starting with the trinomial descriptor.
verbose Give details on the process.
counter Progress on the process is reported by multiples of this counter.
Details
The function splits submitted names into the botanical name (’spec.name’) and the naming
authority (’Authorship’). When the submitted name contains sections between brackets that are
not at the beginning of the naming authority, these sections will be removed.
Value
The function splits names in the botanical name and the naming authority.
Author(s)
<NAME> (World Agroforestry)
Examples
WFO.prepare("Terminalia superba Engl. & Diels (**) (In review)")
WFO.prepare("Sorbus aucuparia subsp. praemorsa (Guss.) Nyman")
WFO.prepare("Ormosia aff. coarctata Jackson")
WFO.prepare("Ormosia aff coarctata Jackson")
WFO.prepare("Ormosia /coarctata Jackson")
WFO.prepare("Qualea TMG 148 Aubl.")
# Note that the sub.pattern is ' cf.'
WFO.prepare("cf Myrcia M1")
WFO.remember Remember the location of the Taxonomic Backbone data set
Description
The function remembers where the Taxonomic Backbone data was downloaded to. If no
arguments are specified, the data.frame WFO.data will contain the previously specified Taxonomic
Backbone data.
Usage
WFO.download(WFO.url =
paste0("https://files.worldfloraonline.org/files/WFO_Backbone/",
"_WFOCompleteBackbone/WFO_Backbone.zip"),
save.dir = getwd(), WFO.remember = TRUE,
timeout = 500, ...)
WFO.remember(WFO.file = NULL, WFO.data = "WFO.data", WFO.pos = 1)
Arguments
WFO.url Hyperlink to the download from the World Flora Online.
save.dir Directory where the file will be downloaded and unzipped.
WFO.remember Remember the location of the file for WFO.remember.
timeout Timeout in seconds for some internet operations, to be modified among Options
Settings.
... Other arguments for download.file.
WFO.file File path to the Taxonomic Backbone data (’classification.txt’).
WFO.data Name of data set to be used by other WorldFlora functions.
WFO.pos Argument pos as in assign.
Details
These functions avoid the need for a user to reload and re-specify the location of the Taxonomic
Backbone data that was previously downloaded from the World Flora Online website. The location
is saved in a text file in the ’etc’ directory of the WorldFlora directory.
Value
The function remembers the local location of the Taxonomic Backbone data.
Author(s)
<NAME> (World Agroforestry)
Examples
## Not run:
# change the working directory
setwd(choose.dir())
# download the Taxonomic Backbone data
WFO.download()
# remember the previous download and make the data available as 'WFO.data'
WFO.remember()
# check
WFO.match("Faidherbia albida", WFO.data=WFO.data)
## End(Not run)
Package ‘weyl’
September 30, 2023
Type Package
Title The Weyl Algebra
Version 0.0-4
Depends methods, R (>= 3.5.0)
Maintainer <NAME> <<EMAIL>>
Description A suite of routines for Weyl algebras. Notation follows
Coutinho (1995, ISBN 0-521-55119-6, ``A Primer of Algebraic
D-Modules''). Uses 'disordR' discipline
(Hankin 2022 <doi:10.48550/ARXIV.2210.03856>). To cite
the package in publications, use Hankin
2022 <doi:10.48550/ARXIV.2212.09230>.
License GPL (>= 2)
LazyData yes
Suggests knitr,rmarkdown,testthat
VignetteBuilder knitr
Imports mathjaxr, disordR (>= 0.0-8), freealg (>= 1.0-4), spray (>= 1.0-19)
URL https://github.com/RobinHankin/weyl
BugReports https://github.com/RobinHankin/weyl/issues
RdMacros mathjaxr
R topics documented:
weyl-package
coeffs
constant
degree
derivation
dim
dot-class
drop
grade
identity
Ops
print.weyl
rweyl
spray
weyl
weyl-class
x_and_d
zero
weyl-package The Weyl Algebra
Description
A suite of routines for Weyl algebras. Notation follows Coutinho (1995, ISBN 0-521-55119-6, "A
Primer of Algebraic D-Modules"). Uses ’disordR’ discipline (Hankin 2022 <doi:10.48550/ARXIV.2210.03856>).
To cite the package in publications, use Hankin 2022 <doi:10.48550/ARXIV.2212.09230>.
Details
The DESCRIPTION file:
Package: weyl
Type: Package
Title: The Weyl Algebra
Version: 0.0-4
Depends: methods, R (>= 3.5.0)
Authors@R: person(given=c("Robin", "<NAME>."), family="Hankin", role = c("aut","cre"), email="<EMAIL>")
Maintainer: <NAME> <<EMAIL>>
Description: A suite of routines for Weyl algebras. Notation follows Coutinho (1995, ISBN 0-521-55119-6, "A Primer of Algebraic D-Modules"). Uses 'disordR' discipline (Hankin 2022 <doi:10.48550/ARXIV.2210.03856>). To cite the package in publications, use Hankin 2022 <doi:10.48550/ARXIV.2212.09230>.
License: GPL (>= 2)
LazyData: yes
Suggests: knitr,rmarkdown,testthat
VignetteBuilder: knitr
Imports: mathjaxr, disordR (>= 0.0-8), freealg (>= 1.0-4), spray (>= 1.0-19)
URL: https://github.com/RobinHankin/weyl
BugReports: https://github.com/RobinHankin/weyl/issues
RdMacros: mathjaxr
Author: <NAME> [aut, cre] (<https://orcid.org/0000-0001-5982-0415>)
Index of help topics:
Ops                     Arithmetic Ops Group Methods for the Weyl algebra
coeffs Manipulate the coefficients of a weyl object
constant The constant term
degree The degree of a 'weyl' object
derivation Derivations
dim The dimension of a 'weyl' object
dot-class Class "dot"
drop Drop redundant information
grade The grade of a weyl object
identity The identity operator
print.weyl Print methods for weyl objects
rweyl Random weyl objects
spray Create spray objects
weyl The algebra and weyl objects
weyl-class Class "weyl"
weyl-package The Weyl Algebra
x_and_d Generating elements for the first Weyl algebra
zero The zero operator
Author(s)
NA
Maintainer: <NAME> <<EMAIL>>
Examples
x <- rweyl(d=1)
y <- rweyl(d=1)
z <- rweyl(d=1)
is.zero(x*(y*z) - (x*y)*z) # should be TRUE
coeffs Manipulate the coefficients of a weyl object
Description
Manipulate the coefficients of a weyl object. The coefficients are disord objects.
Usage
coeffs(S) <- value
Arguments
S A weyl object
value Numeric
Details
To access the coefficients of a weyl object S, use spray::coeffs(S) [package idiom is coeffs(S)].
Similarly, to access the index matrix use index(S).
The replacement method is package-specific; use coeffs(S) <- value.
Value
Extraction methods return a disord object (possibly dropped); replacement methods return a weyl
object.
Author(s)
<NAME>
Examples
(a <- rweyl(9))
coeffs(a)
coeffs(a)[coeffs(a)<3] <- 100
a
constant The constant term
Description
The constant of a weyl object is the coefficient of the term with all zeros.
Usage
constant(x, drop = TRUE)
constant(x) <- value
Arguments
x Object of class weyl
drop Boolean with default TRUE meaning to return the value of the coefficient, and
FALSE meaning to return the corresponding Weyl object
value Constant value to replace existing one
Value
Returns a numeric or weyl object
Note
The constant.weyl() function is somewhat awkward because it has to deal with the difficult case
where the constant is zero and drop=FALSE.
Author(s)
<NAME>
Examples
(a <- rweyl()+700)
constant(a)
constant(a,drop=FALSE)
constant(a) <- 0
constant(a)
constant(a,drop=FALSE)
constant(a+66) == constant(a) + 66
degree The degree of a weyl object
Description
The degree of a monomial weyl object x^a ∂^b is defined as a + b. The degree of a general weyl
object expressed as a linear combination of monomials is the maximum of the degrees of these
monomials. Following Coutinho we have:
• deg(d1 + d2) ≤ max(deg(d1), deg(d2))
• deg(d1 d2) = deg(d1) + deg(d2)
• deg(d1 d2 − d2 d1) ≤ deg(d1) + deg(d2) − 2
Usage
deg(S)
Arguments
S Object of class weyl
Value
Nonnegative integer (or −∞ for the zero Weyl object)
Note
The degree of the zero object is conventionally −∞.
Author(s)
<NAME>
Examples
(a <- rweyl())
deg(a)
d1 <- rweyl(n=2)
d2 <- rweyl(n=2)
deg(d1+d2) <= deg(d1) + deg(d2)
deg(d1*d2) == deg(d1) + deg(d2)
deg(d1*d2-d2*d1) <= deg(d1) + deg(d2) -2
derivation Derivations
Description
A derivation D of an algebra A is a linear operator that satisfies D(d1 d2) = d1 D(d2) + D(d1) d2
for every d1, d2 ∈ A. If a derivation is of the form D(d) = [d, f] = df − fd for some fixed f ∈ A,
we say that D is an inner derivation.
Function as.der() returns a derivation with as.der(f)(g)=fg-gf.
Usage
as.der(S)
Arguments
S Weyl object
Value
Returns a function, a derivation
Author(s)
<NAME>
Examples
(o <- rweyl(n=2,d=2))
(f <- as.der(o))
d1 <-rweyl(n=1,d=2)
d2 <-rweyl(n=2,d=2)
f(d1*d2) == d1*f(d2) + f(d1)*d2 # should be TRUE
dim The dimension of a weyl object
Description
The dimension of a weyl object is the number of variables needed; it is half the spray::arity().
The dimension of a Weyl algebra generated by {x1, x2, ..., xn, ∂x1, ∂x2, ..., ∂xn} is n (not 2n).
Usage
## S3 method for class 'weyl'
dim(x)
Arguments
x Object of class weyl
Value
Integer
Note
Empty spray objects give zero-dimensional weyl objects.
Author(s)
<NAME>
Examples
(a <- rweyl())
dim(a)
dot-class Class “dot”
Description
The dot object is defined so that idiom like .[x,y] returns the commutator, that is, xy-yx.
The dot object is generated by running script inst/dot.Rmd, which includes some further
discussion and technical documentation, and creates file dot.rda which resides in the data/
directory.
Arguments
x Object of any class
i,j elements to commute
... Further arguments to dot_error(), currently ignored
Value
Always returns an object of the same class as xy.
Author(s)
<NAME>
Examples
x <- rweyl(n=1,d=2)
y <- rweyl(n=1,d=2)
z <- rweyl(n=1,d=2)
.[x,.[y,z]] + .[y,.[z,x]] + .[z,.[x,y]] # Jacobi identity
drop Drop redundant information
Description
Coerce constant weyl objects to numeric
Usage
drop(x)
Arguments
x Weyl object
Details
If its argument is a constant weyl object, coerce to numeric.
Value
Returns either a length-one numeric vector or its argument, a weyl object
Note
Many functions in the package take drop as an argument which, if TRUE, means that the function
returns a dropped value.
Author(s)
<NAME>
Examples
a <- rweyl() + 67
drop(a)
drop(idweyl(9))
drop(constant(a,drop=FALSE))
grade The grade of a weyl object
Description
The grade of a homogeneous term of a Weyl algebra is the sum of the powers. Thus the grade of
4 x y^2 ∂x^3 ∂y^4 is 1 + 2 + 3 + 4 = 10.
The functionality documented here closely follows the equivalent in the clifford package.
Coutinho calls this the symbol map.
Usage
grade(C, n, drop=TRUE)
grade(C,n) <- value
grades(x)
Arguments
C,x Weyl object
n Integer vector specifying grades to extract
value Replacement value, a numeric vector
drop Boolean, with default TRUE meaning to coerce a constant operator to numeric,
and FALSE meaning not to
Details
Function grades() returns an (unordered) vector specifying the grades of the constituent terms.
Function grade<-() allows idiom such as grade(x,1:2) <- 7 to operate as expected [here, to set
all coefficients of terms with grades 1 or 2 to the value 7].
Function grade(C,n) returns a Weyl object with just the elements of grade g, where g %in% n.
The zero grade term, grade(C,0), is given more naturally by constant(C).
Value
Integer vector or weyl object
Author(s)
<NAME>
Examples
a <- rweyl(30)
grades(a)
grade(a,1:4)
grade(a,5:9) <- -99
a
identity The identity operator
Description
The identity operator maps any function to itself.
Usage
idweyl(d)
## S3 method for class 'weyl'
as.id(S)
is.id(S)
Arguments
d Integer specifying the dimensionality of the weyl object (the underlying spray arity is twice this)
S A weyl object
Value
A weyl object corresponding to the identity operator
Note
The identity function cannot be called “id()” because then R would not know whether to create a
spray or a weyl object.
Examples
idweyl(7)
a <- rweyl(d=5)
a
is.id(a)
is.id(1+a-a)
as.id(a)
a == a*1
a == a*as.id(a)
Ops Arithmetic Ops Group Methods for the Weyl algebra
Description
Allows arithmetic operators to be used for spray calculations, such as addition, multiplication,
division, integer powers, etc.
Idiom such as x^2 + y*z/5 should work as expected. Operations are the same as those of the spray
package except for *, which is interpreted as functional composition. A number of helper functions
are documented here (which are not designed for the end-user).
Usage
## S3 method for class 'weyl'
Ops(e1, e2 = NULL)
weyl_prod_helper1(a,b,c,d)
weyl_prod_helper2(a,b,c,d)
weyl_prod_helper3(a,b,c,d)
weyl_prod_univariate_onerow(S1,S2,func)
weyl_prod_univariate_nrow(S1,S2)
weyl_prod_multivariate_onerow_singlecolumn(S1,S2,column)
weyl_prod_multivariate_onerow_allcolumns(S1,S2)
weyl_prod_multivariate_nrow_allcolumns(S1,S2)
weyl_power_scalar(S,n)
Arguments
S,S1,S2,e1,e2 Objects of class weyl, elements of a Weyl algebra
a,b,c,d Integers, see details
column column to be multiplied
n Integer power (non-negative)
func Function used for products
Details
All arithmetic is as for spray objects, apart from * and ^. Here, * is interpreted as operator
concatenation: thus, if w1 and w2 are Weyl objects, then w1 w2 is defined as the operator that takes
f to w1(w2 f).
Functions such as weyl_prod_multivariate_nrow_allcolumns() are low-level helper functions
with self-explanatory names. In this context, “univariate” means the first Weyl algebra, generated
by {x, ∂}, subject to x∂ − ∂x = 1; and “multivariate” means the algebra generated by
{x1, x2, ..., xn, ∂x1, ∂x2, ..., ∂xn}.
The product is somewhat user-customisable via option prodfunc, which affects function
weyl_prod_univariate_onerow(). Currently the package offers three examples:
weyl_prod_helper1(), weyl_prod_helper2(), and weyl_prod_helper3(). These are
algebraically identical but occupy different positions on the efficiency-readability scale. The
option defaults to weyl_prod_helper3(), which is the fastest but most opaque. The vignette
provides further details, motivation, and examples.
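As a hedged sketch (assuming the weyl package is attached; the option-setting idiom is inferred from the option name prodfunc given above), switching helpers might look like:

```r
library(weyl)

w1 <- rweyl(n = 2, d = 1)
w2 <- rweyl(n = 2, d = 1)

p.fast <- w1 * w2                      # default product: weyl_prod_helper3()
options(prodfunc = weyl_prod_helper1)  # slower but more readable helper
p.slow <- w1 * w2
options(prodfunc = NULL)               # restore the default

is.zero(p.fast - p.slow)               # the helpers are algebraically identical
```

Because the helpers differ only in efficiency, any downstream computation should be unaffected by the choice.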
Value
Generally, return a weyl object
Note
Function weyl_prod_univariate_nrow() is present for completeness; it is not used in the package.
Author(s)
<NAME>
Examples
x <- rweyl(n=1,d=2)
y <- rweyl(n=1,d=2)
z <- rweyl(n=2,d=2)
x*(y+z) == x*y + x*z
is.zero(x*(y*z) - (x*y)*z)
print.weyl Print methods for weyl objects
Description
Printing methods for weyl objects follow those for the spray package, with some additional
functionality.
Usage
## S3 method for class 'weyl'
print(x, ...)
Arguments
x A weyl object
... Further arguments, currently ignored
Details
Option polyform determines whether the object is to be printed in matrix form or polynomial
form: as in the spray package, this option governs dispatch to either print_spray_polyform() or
print_spray_matrixform().
> a <- rweyl()
> a # default print method
A member of the Weyl algebra:
x y z dx dy dz val
1 2 2 2 1 0 = 3
2 2 0 0 1 1 = 2
0 0 0 1 1 2 = 1
> options(polyform = TRUE)
> a
A member of the Weyl algebra:
+3*x*y^2*z^2*dx^2*dy +2*x^2*y^2*dy*dz +dx*dy*dz^2
> options(polyform = FALSE) # restore default
Irrespective of the value of polyform, option weylvars controls the variable names. If NULL (the
default), then sensible values are used: either [xyz] if the dimension is three or less, or integers.
But option weylvars is user-settable:
> options(weylvars=letters[18:20])
> a
A member of the Weyl algebra:
r s t dr ds dt val
1 2 2 2 1 0 = 3
2 2 0 0 1 1 = 2
0 0 0 1 1 2 = 1
> options(polyform=TRUE)
> a
A member of the Weyl algebra:
+3*r*s^2*t^2*dr^2*ds +2*r^2*s^2*ds*dt +dr*ds*dt^2
> options(polyform=FALSE) ; options(weylvars=NULL)
If the user sets weylvars, the print method tries to do the Right Thing (tm). If set to c("a","b","c"),
for example, the generators are named c(" a"," b"," c","da","db","dc") [note the spaces]. If
the algebra is univariate, the names will be something like d and x. No checking is performed and
if the length is not equal to the dimension, undesirable behaviour may occur. For the love of God,
do not use a variable named d. Internally, weylvars works by changing the sprayvars option in
the spray package.
Note that, as for spray objects, this option has no algebraic significance: it only affects the print
method.
Value
Returns a weyl object.
Author(s)
<NAME>
Examples
a <- rweyl()
print(a)
options(polyform=TRUE)
print(a)
rweyl Random weyl objects
Description
Creates random weyl objects: quick-and-dirty examples of Weyl algebra elements
Usage
rweyl(nterms = 3, vals = seq_len(nterms), dim = 3, powers = 0:2)
Arguments
nterms Number of terms in output
vals Values of coefficients
dim Dimension of weyl object
powers Set from which to sample the entries of the index matrix
Value
Returns a weyl object
Author(s)
<NAME>
Examples
rweyl()
rweyl(d=7)
spray Create spray objects
Description
Function spray() creates a sparse array; function weyl() coerces a spray object to a Weyl object.
Usage
spray(M,x,addrepeats=FALSE)
Arguments
M An integer-valued matrix, the index of the weyl object
x Numeric vector of coefficients
addrepeats Boolean, specifying whether repeated rows are to be added
Details
The function is discussed and motivated in the spray package.
Value
Return a weyl or a Boolean
Author(s)
<NAME>
Examples
(W <- spray(matrix(1:36,6,6),1:6))
weyl(W)
as.weyl(15,d=3)
weyl The algebra and weyl objects
Description
Basic functions for weyl objects
Usage
weyl(M)
is.weyl(M)
as.weyl(val,d)
is.ok.weyl(M)
Arguments
M A weyl or spray object
val,d Value and dimension for weyl object
Details
Function weyl() is the formal creator method; is.weyl() tests for weyl objects and is.ok.weyl()
checks for well-formed sprays. Function as.weyl() tries (but not very hard) to infer what the user
intended and return the right thing.
To create a spray object to pass to weyl(), use function spray(), which is a synonym for spray::spray().
Value
Return a weyl or a Boolean
Author(s)
<NAME>
Examples
(W <- spray(matrix(1:36,6,6),1:6))
weyl(W)
as.weyl(15,d=3)
weyl-class Class “weyl”
Description
The formal S4 class for weyls.
Objects from the Class
Objects can be created by calls of the form new("weyl", ...) but this is not encouraged. Use
functions weyl() or as.weyl() instead.
Author(s)
<NAME>
x_and_d Generating elements for the first Weyl algebra
Description
Variables x and d correspond to operator x and ∂x ; they are provided for convenience. These
elements generate the one-dimensional Weyl algebra.
Note that a similar system for multivariate Weyl algebras is not desirable. We might want to
consider the Weyl algebra generated by {x, y, z, ∂x, ∂y, ∂z} and correspondingly define R variables
x, y, z, dx, dy, dz. But then variable x is ambiguous: is it a member of the first Weyl algebra or the
third?
Usage
data(x_and_d)
Author(s)
<NAME>
Examples
d
x
.[d,x] # dx-xd==1
d^3 * x^4
(1-d*x*d)*(x^2-d^3)
zero The zero operator
Description
The zero operator maps any function to the zero function (which maps anything to zero). To test for
being zero, use spray::is.zero(); package idiom would be is.zero().
Usage
zero(d)
Arguments
d Integer specifying the dimensionality of the weyl object (the underlying spray arity is twice this)
Value
A weyl object corresponding to the zero operator (or a Boolean for is.zero())
Examples
(a <- rweyl(d=5))
is.zero(a)
is.zero(a-a)
is.zero(a*0)
a == a + zero(dim(a))
Package ‘pampe’
October 14, 2022
Type Package
Title Implementation of the Panel Data Approach Method for Program
Evaluation
Version 1.1.2
Date 2015-11-06
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Implements the Panel Data Approach Method for program evaluation as developed
in Hsiao, Ching and Ki Wan (2012). pampe estimates the effect of an intervention by comparing
the evolution of the outcome for a unit affected by an intervention or treatment to the evolution of
the unit had it not been affected by the intervention.
Depends leaps
Enhances xtable
License GPL-2
NeedsCompilation no
Repository CRAN
Date/Publication 2015-11-07 07:24:16
R topics documented:
growth
pampe
pampe-internal
robustness
growth Example Data for pampe function from the pampe package
Description
Dataset for the example from the pampe function of the pampe package. The growth dataset,
obtained from the supplemental materials of Hsiao, Steve Ching and Ki Wan (2012), contains
information on the quarterly real GDP growth rate of 24 countries (the donor pool) and Hong Kong
from 1993 Q1 to 2008 Q1, computed as the change with respect to the same quarter in the previous
year.
The data is organized in standard cross-sectional data format, with the variables (the quarterly real
GDP growth rate of the countries in the donor pool act as explanatory variables) extending across
the columns and the quarters (time-periods) extending across rows. In this case the treated unit,
Hong Kong, is in the first column whereas the possible controls (countries in the original donor
pool) are in columns 2:25. It is important for the user to have his or her data in this standard format
to correctly apply the methodology. Naming the rows and especially the columns is also strongly
recommended. Be careful to avoid characters such as spacing when naming.
Usage
data(growth)
Format
A data frame with 61 observations on 25 variables (countries/units).
Source
<NAME>, <NAME>, <NAME> (2012). "A Panel Data Approach for Program Evaluation:
Measuring the Benefits of Political and Economic Integration of Hong Kong with Mainland China".
Journal of Applied Econometrics, 27(5), 705-740. ISSN 1099-1255. doi:10.1002/jae.1230. URL
http://dx.doi.org/10.1002/jae.1230.
See Also
pampe
pampe Implementation of the Panel Data Approach for Program Evaluation
Description
Implements the Panel Data Approach for program evaluation as developed in Hsiao, <NAME>
and <NAME> (2012). pampe estimates the effect of an intervention by comparing the evolution
of the outcome for a unit affected by an intervention or treatment to the evolution of the unit had it
not been affected by the intervention.
Usage
pampe(time.pretr, time.tr, treated, controls = "All", data,
nbest = 1, nvmax = length(controls), select = "AICc", placebos = FALSE)
data(growth)
Arguments
time.pretr The pre-treatment periods, up to the introduction of the treatment. For example,
if you have 10 pre-treatment periods and the treatment was introduced in the
11th period, this argument could be 1:10.
time.tr The treatment period plus the periods for which you want to check the effect.
For example, if the treatment was introduced in the 11th period and you want
results for 9 more periods, it would be 11:20.
treated The treated unit, the unit that receives the intervention. It can be a name or the
index of the column if the columns in the data matrix are named (recommended).
For example, in the case of the growth data included in the package, the name of
the treated unit is "HongKong", and it is in the first column of the data matrix,
so this argument can be either "HongKong" or 1.
controls The units used as controls to calculate the counterfactual, that have not received
the treatment. By default, all the remaining (after removing the treated) columns
in the data matrix are included as controls, but specific controls can be specified
using their column name, e.g. c("Australia", "Austria", "Canada"), or their
column index, e.g. 2:4.
data The name of the data matrix to be used, e.g. growth. The data matrix should be
in standard cross-sectional format: with the variables (the controls/units in the
donor pool act as explanatory variables) extending across the columns and
the quarters (time-periods) extending across rows. It is important for the user
to have his or her data in this standard format to correctly apply the function.
Naming the rows and especially the columns is also strongly recommended. Be
careful to avoid characters such as spacing when naming.
nbest The original method by Hsiao, <NAME> and <NAME> (2012) specifies to keep
the best model in terms of R squared for each order, hence the default is one.
However the user might choose to keep the best 2, 3... before moving on to the
second step of the method by switching this argument default.
nvmax How many subsets of controls the method should check. The original method
by Hsiao says to check all subsets up to the biggest size, hence the default; but
if the pre-treatment period is too short that might not be possible. If it is too big
but you attempt to apply the method regardless, it will throw an error telling you
to change this argument or reduce the number of controls.
select The model selection criteria for the second step of the method. Originally they
propose either AICc (default) or AIC. The user can choose between those two
or BIC as well.
placebos Options between "controls", "time", or "Both". The "controls" placebo method
reassigns the treatment to all controls and computes the treatment effect, the
"time" placebo reassigns treatment to a time period previous to the treatment
and computes the treatment effect as well. You can then plot the results.
Details
As proposed by Hsiao, <NAME> and <NAME> (2012), the panel data approach method for
program evaluation estimates the effect of a treatment or policy intervention in the aggregate
outcome of a single affected unit by exploiting the dependence among cross-sectional units to
construct a counterfactual of the treated unit(s), which is an estimate of how the affected unit
would have developed in the absence of an intervention. The estimated effect of the policy
intervention is therefore simply the difference between the actual observed outcome of the treated
unit and this estimated counterfactual.
The way they propose to estimate the outcome of the treated unit under no treatment, Y^0_1t, is to
use the following modeling strategy: use R^2 (or likelihood values) in order to select the best OLS
estimator for Y^0_1t using j out of the J units in the donor pool, denoted by M(j)* for j=1, ..., J;
then choose M(m)* from M(1)*, ..., M(J)* in terms of a model selection criterion, like AICc, AIC
or BIC. Note that the method calculates OLS models of up to J+1 parameters, so that if the length
of the pre-treatment period t=1, 2, ..., T’-1 is not of a much higher order than that, the regressions
M(J-1)*, M(J)* cannot be calculated because there are not enough degrees of freedom.
To avoid this problem, the pampe package proposes the following slight modification to the
previously outlined modeling strategy: use R^2 in order to select the best OLS estimator for
Y^0_1t using j out of the J units in the donor pool, denoted by M(j)* for j=1, ..., T_0-4; then
choose M(m)* from M(1)*, ..., M(T_0-4)* in terms of a model selection criterion (in our case
AICc). Note that the key difference is that while we allowed models up to M(J)*, this is now
modified to allow models up to M(T_0-4)*, with T_0-4 < J, which allows for at least 3 degrees of
freedom. This is implemented through the default value of nvmax, which is equal to J, or, if not
possible, to J-4. The user can of course override this default.
The results of the function include, among others, the optimal model chosen by the strategy (an
object of class lm) and the estimated treatment effects (difference between the actual outcome and
the counterfactual outcome) for the placebo tests if the user chooses to carry them out.
Value
A list of objects, including:
controls a named vector of the controls finally included in the model
model an object of class lm with the optimal model. Usual methods such as fitted(),
residuals(), ... can be used on it.
counterfactual the path of the estimated counterfactual for the time.pretr and time.tr periods.
placebo.ctrl if the user has chosen to perform the placebo test which reassigns the treatment
to the controls in the original donor pool, this element is a list which includes
another two elements: mspe and tr.effect. The first includes the mspe for the pre-
treatment period (time.pretr) and the second is the estimated treatment effect for
the treated unit in the first column and for the countries in the original donor
pool in the remaining columns.
placebo.time same as placebo.ctrl but with the reassignment of the treatment in time, to peri-
ods in the pre-treatment period.
data initial data, for further use.
Note
Check the references for more information on the placebo studies. The leaps package by Lumley is
required for pampe to run properly.
Author(s)
<NAME>
References
<NAME>, <NAME>, <NAME> (2010). "Control Methods for Comparative Case Studies:
Estimating the Effect of California’s Tobacco Control Program". American Statistical Association,
(105), 493-505.
<NAME>, <NAME>, <NAME> (2014). "Comparative Politics and the Synthetic Control
Method". American Journal of Political Science. URL http://dx.doi.org/10.2139/ssrn.1950298.
<NAME>, <NAME> (2003). "The Economic Costs of Conflict: A Case Study of the Basque
Country". American Economic Review, (1), 113-132. URL
http://ideas.repec.org/a/aea/aecrev/v93y2003i1p113-132.html.
<NAME>, <NAME>, <NAME> (2012). "A Panel Data Approach for Program Evaluation:
Measuring the Benefits of Political and Economic Integration of Hong Kong with Mainland China."
Journal of Applied Econometrics, 27(5), 705-740. ISSN 1099-1255. doi:10.1002/jae.1230. URL
http://dx.doi.org/10.1002/jae.1230.
<NAME> (2014). leaps: Regression Subset Selection. R package version 2.9, URL
http://CRAN.R-project.org/package=leaps.
See Also
leaps
Examples
##Load the sample dataset
data(growth)
##First we apply the method to the political integration of Hong Kong
##as was done by Hsiao et al. (2012) using AICc as selection criteria
treated <- "HongKong"
time.pretr <- 1:18
time.tr <- 19:44
possible.ctrls <- c('China','Indonesia','Japan','Korea','Malaysia','Philippines',
'Singapore','Taiwan','UnitedStates','Thailand')
##Call the function
pol.integ <- pampe(time.pretr=time.pretr, time.tr=time.tr, treated=treated,
controls=possible.ctrls, data=growth)
##The function automatically prints out a summary of the optimal model
##or we can call it ourselves
summary(pol.integ)
##The method plot() works on object of class pampe to produce a plot of the actual evolution
##of the treated unit together with the predicted counterfactual path.
##A simple plot call to our saved pampe object would give us the desired plot
plot(pol.integ)
##User-generated plot
##A plot of the estimated counterfactual together with the actual outcome
matplot(c(time.pretr, time.tr),cbind(growth[c(time.pretr, time.tr),1], pol.integ$counterfactual),
type="l", ylab="GDP growth", xlab="", ylim=c(-0.15,0.15), col=1, lwd=3, xaxt="n")
axis(1, at=c(time.pretr, time.tr)[c(seq(2, length(c(time.pretr, time.tr)), by=2))],
labels=c(rownames(growth)[c(time.pretr, time.tr)[c(seq(2, length(c(time.pretr, time.tr)),
by=2))]]), las=3)
title(xlab="Quarter",mgp=c(3.6,0.5,0))
legend("bottomright",c("Hong Kong", "predicted Hong Kong"), col=1, lty=c(1,2), lwd=3)
abline(v=time.pretr[length(time.pretr)],lty=3, lwd=3)
##Now we include placebo tests
pol.integ.placebos <- pampe(time.pretr=time.pretr, time.tr=time.tr, treated=treated,
controls=possible.ctrls, data=growth, placebos="Both")
##We can use the plot method again and check the results in the viewer
plot(pol.integ.placebos)
##Or create user-generated plots
##Plot of the placebos-controls
mspe <- pol.integ.placebos$placebo.ctrl$mspe
linewidth <- matrix(2, 1, ncol(mspe)-1)
linewidth <- append(linewidth, 5, after = 0)
matplot(c(time.pretr, time.tr), pol.integ.placebos$placebo.ctrl$tr.effect,
type="l", xlab="", ylab="GDP growth gap", col=c("red",matrix(1, 1, ncol(mspe)-1)),
lty=c(1,matrix(2, 1, ncol(mspe)-1)), lwd=linewidth, ylim=c(-0.35,0.2), xaxt="n")
axis(1, at=c(time.pretr, time.tr)[c(seq(4, length(c(time.pretr, time.tr)), by=4))],
labels=c(rownames(growth)[c(time.pretr, time.tr)[c(seq(4, length(c(time.pretr, time.tr)),
by=4))]]), las=3)
title(xlab="Quarter",mgp=c(3.6,0.5,0))
legend("topleft",c("Hong Kong", "Controls"),col=c("red", 1),lty=c(1,2),lwd=c(5,2))
abline(h=0,lty=3, lwd=3)
abline(v=time.pretr[length(time.pretr)],lty=3, lwd=3)
## The estimated effect for Hong Kong does not appear to be significant
##since it is not an outlier
##Plot of the placebos-in-time
##For example let's plot the first reassignment
placebo.in.time1 <- pol.integ.placebos$placebo.time$tr.effect[,2]+growth[c(time.pretr, time.tr),1]
matplot(c(time.pretr, time.tr),cbind(growth[c(time.pretr, time.tr),1],
pol.integ.placebos$counterfactual, placebo.in.time1), type="l", ylab="GDP growth",
xlab="", ylim=c(-0.25,0.2), col=1, lwd=3, xaxt="n")
axis(1, at=c(time.pretr, time.tr)[c(seq(4, length(c(time.pretr, time.tr)), by=4))],
labels=c(rownames(growth)[c(time.pretr, time.tr)[c(seq(4, length(c(time.pretr, time.tr)),
by=4))]]), las=3)
title(xlab="Quarter",mgp=c(3.6,0.5,0))
legend("bottomleft",c("actual", "predicted", paste("placebo",
colnames(pol.integ.placebos$placebo.time$tr.effect)[2], sep=" ")), col=1, lty=c(1,2,3), lwd=3)
abline(v=time.pretr[length(time.pretr)],lty=2, lwd=3)
abline(v=which(colnames(pol.integ.placebos$placebo.time$tr.effect)[2]==rownames(growth)),
lty=3, lwd=3)
##We can also plot the gaps all at the same time
mspe <- pol.integ.placebos$placebo.time$mspe
linewidth <- matrix(2, 1, ncol(mspe)-1)
linewidth <- append(linewidth, 5, after = 0)
matplot(c(time.pretr, time.tr), pol.integ.placebos$placebo.time$tr.effect,
type="l", xlab="", ylab="GDP growth gap", col=c("red",matrix(1, 1, ncol(mspe)-1)),
lty=c(1,matrix(2, 1, ncol(mspe)-1)), lwd=linewidth, ylim=c(-0.35,0.2), xaxt="n")
axis(1, at=c(time.pretr, time.tr)[c(seq(4, length(c(time.pretr, time.tr)), by=4))],
labels=c(rownames(growth)[c(time.pretr, time.tr)[c(seq(4, length(c(time.pretr, time.tr)),
by=4))]]), las=3)
title(xlab="Quarter",mgp=c(3.6,0.5,0))
legend("topleft",c("Hong Kong", "Controls"),col=c("red", 1),lty=c(1,2),lwd=c(5,2))
abline(h=0,lty=3, lwd=3)
##Not significant either
##Leave-one-out robustness check
robust <- robustness(pol.integ)
plot(robust)
pampe-internal Implementation of the Panel Data Approach for Program Evaluation
Description
Description of the internal function of the pampe package, regsubsets2aic. It takes the models estimated
using regsubsets and orders them according to a model selection criterion such as AIC, AICc, or BIC.
Usage
regsubsets2aic(x,y,z)
Arguments
x A matrix of predictors
y A response vector
z Output of regsubsets
See Also
pampe, leaps
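Although regsubsets2aic is internal, the ordering it performs can be sketched with leaps directly. The dataset (mtcars) and the least-squares AIC/AICc formulas below are illustrative assumptions, not necessarily the exact internals of pampe:

```r
## Illustrative sketch: order subsets from leaps::regsubsets by AICc,
## mimicking what the internal regsubsets2aic does (exact internals may differ).
library(leaps)
x <- as.matrix(mtcars[, -1])                  # predictors (illustrative data)
y <- mtcars$mpg                               # response
z <- regsubsets(x, y, nbest = 3)
s <- summary(z)
n <- length(y)
k <- rowSums(s$which) + 1                     # parameters, incl. error variance
aic <- n * log(s$rss / n) + 2 * k             # least-squares AIC
aicc <- aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
## Candidate models ordered from best to worst AICc
s$which[order(aicc), ]
```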
robustness Robustness check for the Implementation of the Panel Data Approach
for Program Evaluation
Description
Implements a leave-one-out robustness check for the Panel Data Approach for program evaluation
as developed in Hsiao, <NAME> and <NAME> (2012). robustness must be run after pampe;
it automatically and iteratively removes each of the controls selected by pampe to check
whether the results obtained in pampe are robust or are driven by a single control.
Usage
robustness(pampe.object)
Arguments
pampe.object The object resulting from previously running the pampe function.
Note
Check the references in pampe for more information on the placebo studies. The leaps package by
Lumley is required for pampe to run properly.
Author(s)
<NAME>
See Also
pampe, leaps
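A minimal usage sketch, re-fitting the growth example from pampe before running the check (object and argument names follow the pampe examples above):

```r
## Sketch: leave-one-out robustness check on a previously fitted pampe object.
library(pampe)
data(growth)
pol.integ <- pampe(time.pretr = 1:18, time.tr = 19:44, treated = "HongKong",
                   controls = c('China','Indonesia','Japan','Korea','Malaysia',
                                'Philippines','Singapore','Taiwan',
                                'UnitedStates','Thailand'),
                   data = growth)
## Re-estimate the treatment effect with each control dropped in turn
robust <- robustness(pol.integ)
plot(robust)
```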
@types/sha | npm | JavaScript | [Installation](#installation)
===
> `npm install --save @types/sha`
[Summary](#summary)
===
This package contains type definitions for sha (<https://github.com/ForbesLindesay/sha>).
[Details](#details)
===
Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/sha>.
[index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/sha/index.d.ts)
---
```
/// <reference types="node" />

import { Transform } from "stream";
export type CheckCallback<R> = (err: Error | null) => R;
export type GetCallback = (err: Error | null, actual: string) => void;
export interface ShaOptions {
/** defaults to `sha1` and can be any of the algorithms supported by `crypto.createHash` */
algorithm?: string | undefined;
}
/**
* Asynchronously check that `fileName` has a "hash" of `expected`. The callback will be called with either `null`
* or an error (indicating that they did not match).
*/
export function check<R>(fileName: string, expected: string, cb: CheckCallback<R>): void | R;
export function check<R>(fileName: string, expected: string, options: ShaOptions, cb: CheckCallback<R>): void | R;
/**
* Synchronously check that `fileName` has a "hash" of `expected`. Throws if they do not match.
*/
export function checkSync(fileName: string, expected: string, options?: ShaOptions): void;
/**
* Asynchronously get the "hash" of `fileName`. The callback will be called with an optional `error` object and the
* (lower cased) hex digest of the hash.
*/
export function get(fileName: string, cb: GetCallback): void;
export function get(fileName: string, options: ShaOptions, cb: GetCallback): void;
/** Synchronously get the "hash" of `fileName`. */
export function getSync(fileName: string, options?: ShaOptions): string;
/**
* Check the hash of a stream without ever buffering it. This is a pass through stream so you can do things like:
*
* fs.createReadStream('src')
* .pipe(sha.stream('expected'))
* .pipe(fs.createWriteStream('dest'))
*
* `dest` will be a complete copy of `src` and an error will be emitted if the hash did not match `'expected'`.
*/
export function stream(expected: string, options?: ShaOptions): Transform;
```
### [Additional Details](#additional-details)
* Last updated: Wed, 18 Oct 2023 11:45:06 GMT
* Dependencies: [@types/node](https://npmjs.com/package/@types/node)
[Credits](#credits)
===
These definitions were written by [<NAME>](https://github.com/oBusk).
Readme
---
### Keywords
none
KONPsurv | cran | R | Package ‘KONPsurv’
October 12, 2022
Type Package
Title KONP Tests: Powerful K-Sample Tests for Right-Censored Data
Version 1.0.4
Date 2021-12-27
Author <NAME> [cre, aut],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Description The K-sample omnibus non-proportional hazards (KONP) tests are powerful non-parametric
tests for comparing K (>=2) hazard functions based on right-censored
data (Gorfine, Schlesinger and Hsu, 2020, <doi:10.1177/0962280220907355>). These tests are
consistent against any differences between the hazard functions of the groups. The KONP tests are
often more powerful than other existing tests, especially under non-proportional hazard functions.
License GPL (>= 2)
Imports survival,Rcpp (>= 0.12.16)
LinkingTo Rcpp
RoxygenNote 7.0.0
Encoding UTF-8
Suggests testthat
NeedsCompilation yes
Repository CRAN
Date/Publication 2022-01-03 17:10:12 UTC
R topics documented:
KONPsurv-package
carcinoma
gastric
konp_test
KONPsurv-package KONP Tests for Testing the Equality of K Distributions for Right-
Censored Data
Description
An implementation of the K-sample omnibus non-proportional hazards (KONP) tests.
The KONP tests are powerful non-parametric tests for comparing K (>=2) hazard functions based
on right-censored data. These tests are consistent against any differences between the hazard
functions of the groups. The KONP tests are often more powerful than other existing tests,
especially under non-proportional hazard functions.
Details
The package contains one function:
konp_test: non-parametric tests for equality of K distributions using right-censored data.
Author(s)
Author and Maintainer: <NAME> <<EMAIL>>
Author: <NAME> <<EMAIL>>
References
Gorfine, M., Schlesinger, M., & <NAME>. (2020). K-sample omnibus non-proportional hazards
tests based on right-censored data. Statistical Methods in Medical Research, 29(10), 2830–2850.
doi: 10.1177/0962280220907355
Examples
# gastric cancer data
data(gastric)
konp_test(gastric$time, gastric$status, gastric$group, n_perm=10^3)
carcinoma Urothelial carcinoma.
Description
Survival data from a trial comparing chemotherapy versus atezolizumab in the treatment of
urothelial carcinoma.
Usage
data(carcinoma)
Format
A data frame with 625 observations (316 in the atezolizumab group and 309 in the chemotherapy
group) with the following 3 columns:
time the observed follow-up times in days.
status the event indicators, 0=right censored, 1= event.
group the group labels, 1 = atezolizumab, 2 = chemotherapy.
References
<NAME>, <NAME>, van der Heijden MS, et al. Atezolizumab versus chemotherapy in patients with
platinum-treated locally advanced or metastatic urothelial carcinoma (IMvigor211): a multicentre,
open-label, phase 3 randomised controlled trial. Lancet 2018; 391: 748-757.
gastric Gastric Cancer Data.
Description
Survival data from a trial comparing chemotherapy versus combined chemotherapy plus
radiotherapy in the treatment of gastric cancer.
Usage
data(gastric)
Format
A data frame with 90 observations (45 in each treatment group) with the following 3 columns:
time the observed follow-up times in days.
status the event indicators, 0=right censored, 1= event.
group the group labels, 1 = chemotherapy, 2 = chemotherapy plus radiotherapy.
Source
<NAME>. and <NAME>. (1985) A two-sample test sensitive to crossing hazards in
uncensored and singly censored data. Biometrics 41, 643–652. (Page 649).
References
Gastrointestinal Tumor Study Group: <NAME>., <NAME>., <NAME>., <NAME>.
O., <NAME>., et al. (1982). A comparison of combination chemotherapy and combined modality
therapy for locally advanced gastric carcinoma. Cancer 49, 1771-1777.
konp_test KONP tests are K-sample Omnibus Non-Proportional hazards tests
for right-censored data.
Description
KONP tests are K-sample Omnibus Non-Proportional hazards tests for right-censored data.
Usage
konp_test(time, status, group, n_perm, n_impu = 1)
Arguments
time A vector of the observed follow-up times.
status A vector of event indicators, 0=right censored, 1= event at time.
group A vector denoting the group labels, must contain at least two different values.
n_perm The number of permutations.
n_impu The number of imputations, for each imputation n_perm permutations will be
executed.
Details
The KONP tests are powerful non-parametric tests for comparing K (>=2) hazard functions based
on right-censored data. These tests are consistent against any differences between the hazard
functions of the groups. The KONP tests are often more powerful than other existing tests,
especially under non-proportional hazard functions.
Value
Three test statistics and their respective p-values are returned:
pv_chisq - returns the p-value based on the KONP test chi-square statistic.
pv_lr - returns the p-value based on the KONP test likelihood ratio statistic.
pv_cauchy - returns the p-value based on the KONP-based Cauchy-combination test statistic.
chisq_test_stat - returns the KONP test chi-squared test statistic.
lr_test_stat - returns the KONP test likelihood-ratio test statistic.
cauchy_test_stat - returns the KONP-based Cauchy-combination test statistic.
Examples
# gastric cancer data
data(gastric)
konp_test(gastric$time, gastric$status, gastric$group, n_perm=10^3)
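The returned list can be inspected directly; a short sketch using the field names from the Value section above:

```r
## Sketch: accessing the KONP test results (gastric data, as above).
library(KONPsurv)
data(gastric)
res <- konp_test(gastric$time, gastric$status, gastric$group, n_perm = 10^3)
## Permutation p-values for the three statistics
c(chisq = res$pv_chisq, lr = res$pv_lr, cauchy = res$pv_cauchy)
## The corresponding test statistics
c(res$chisq_test_stat, res$lr_test_stat, res$cauchy_test_stat)
```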
entropart | cran | R | Package ‘entropart’
September 26, 2023
Type Package
Title Entropy Partitioning to Measure Diversity
Version 1.6-13
Description Measurement and partitioning of diversity, based on Tsallis entropy, following
Marcon and Herault (2015) <doi:10.18637/jss.v067.i08>.
'entropart' provides functions to calculate alpha, beta and gamma diversity of communities,
including phylogenetic and functional diversity.
Estimation-bias corrections are available.
URL https://github.com/EricMarcon/entropart
BugReports https://github.com/EricMarcon/entropart/issues
License GNU General Public License
Imports ape, EntropyEstimation, ggplot2, ggpubr, graphics, grDevices,
parallel, reshape2, rlang, stats, tibble, utils, vegan
Suggests ade4, knitr, pkgdown, rmarkdown, testthat
LazyData true
VignetteBuilder knitr
SystemRequirements pandoc
Encoding UTF-8
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-5249-321X>),
<NAME> [aut] (<https://orcid.org/0000-0002-6950-7286>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-26 14:40:02 UTC
R topics documented:
entropart-package
AbdFreqCount
Accumulation
AllenH
AlphaDiversity
AlphaEntropy
BetaDiversity
BetaEntropy
ChaoPD
CommunityProfile
Coverage
Diversity
DivEst
DivPart
DivProfile
Dqz
EightSpAbundance
EightSpTree
Enq
EntropyCI
expq
GammaDiversity
GammaEntropy
GenSimpson
Hqz
HqzBeta
Hurlbert
KLq
lnq
MC Utilities
MCdiversity
MCentropy
MetaCommunity
Optimal.Similarity
Paracou618.dist
Paracou618.Functional
Paracou618.MC
Paracou618.Taxonomy
PDFD
PhyloApply
PhyloBetaEntropy
PhyloDiversity
PhyloEntropy
PhyloValue
PPtree
RAC
Rao
rCommunity
Richness
Shannon
ShannonBeta
Simpson
SimpsonBeta
SimTest
SpeciesDistribution
Tsallis
TsallisBeta
entropart-package Entropy Partitioning to Measure Diversity
Description
Functions to calculate alpha, beta and gamma diversity of communities, including phylogenetic and
functional diversity.
Estimation-bias corrections are available.
Details
In the entropart package, individuals of different "species" are counted in several "communities"
which may (or may not) be aggregated to define a "metacommunity". In the metacommunity, the
probability to find a species is the weighted average of the probabilities in communities. This is a
naming convention, which may correspond to plots in a forest inventory or any data organized the
same way.
Basic functions allow computing diversity of a community. Data is simply a vector of probabilities
(summing up to 1) or of abundances (integer values that are numbers of individuals). Calculate
entropy with functions such as Tsallis, Shannon, Simpson, Hurlbert or GenSimpson and explicit
diversity (i.e. effective number of species) with Diversity and others. By default, the best available
estimator of diversity will be used, according to the data.
Communities can be simulated by rCommunity, explicitly declared as a species distribution (as.AbdVector
or as.ProbaVector), and plotted.
Phylogenetic entropy and diversity can be calculated if a phylogenetic (or functional), ultrametric
tree is provided. See PhyloEntropy, Rao for examples of entropy and PhyloDiversity to calculate
phylodiversity, with the state-of-the-art estimation-bias correction. Similarity-based diversity is
calculated with Dqz, based on a similarity matrix.
The simplest way to import data is to organize it into two text files. The first file should contain
abundance data: the first column named Species for species names, and a column for each
community.
The second file should contain the community weights in two columns. The first one, named
Communities should contain their names and the second one, named Weights, their weights.
Files can be read and data imported by code such as:
Abundances <- read.csv(file="Abundances.csv", row.names = 1)
Weights <- read.csv(file="Weights.csv")
MC <- MetaCommunity(Abundances, Weights)
The last line of the code calls the MetaCommunity function to create an object that will be used by all
metacommunity functions, such as DivPart (to partition diversity), DivEst (to partition diversity
and calculate confidence interval of its estimation) or DivProfile (to compute diversity profiles).
A full documentation is available in the vignette. Type: vignette("entropart"). A quick
introduction is in vignette("introduction", "entropart").
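The import recipe above can also be run end-to-end on a toy dataset without intermediate files; the abundances and the two-community layout below are illustrative:

```r
## Sketch: build a MetaCommunity from in-memory data
## (toy abundances; two communities of equal weight).
library(entropart)
Abundances <- data.frame(Community1 = c(10, 5, 0, 2),
                         Community2 = c(3, 0, 8, 4),
                         row.names = paste0("Species", 1:4))
Weights <- c(0.5, 0.5)
MC <- MetaCommunity(Abundances, Weights)
summary(MC)
## Partition Shannon diversity (q = 1) of the toy metacommunity
summary(DivPart(q = 1, MC))
```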
Author(s)
<NAME>, <NAME>
References
<NAME>., <NAME>., <NAME>., and <NAME>. (2017). The Generalized Simpson’s Entropy is
a Measure of Biodiversity. Plos One, 12(3): e0173305.
<NAME>. (2015) Practical Estimation of Diversity from Abundance Data. HAL 01212435: 1-27.
<NAME>. and <NAME>. (2015). entropart: An R Package to Measure and Partition Diversity.
Journal of Statistical Software, 67(8): 1-26.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>., <NAME>. and <NAME>. (2012). The Decomposition of Shannon’s
Entropy and a Confidence Interval for Beta Diversity. Oikos 121(4): 516-522.
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
AbdFreqCount Abundance Frequency Count of a Community
Description
Counts the number of species observed the same number of times.
Usage
AbdFreqCount(Ns, Level = NULL, PCorrection="Chao2015", Unveiling="geom",
RCorrection="Rarefy", CheckArguments = TRUE)
Arguments
Ns A numeric vector containing species abundances.
Level The level of interpolation or extrapolation. It may be an arbitrary sample size
(an integer) or a sample coverage (a number between 0 and 1).
PCorrection A string containing one of the possible corrections to estimate a probability
distribution in as.ProbaVector: "Chao2015" is the default value. Used only for
extrapolation.
Unveiling A string containing one of the possible unveiling methods to estimate the
probabilities of the unobserved species in as.ProbaVector: "geom" (geometric: the
unobserved species distribution is geometric) is the default value. Used only for
extrapolation.
RCorrection A string containing a correction recognized by Richness to evaluate the total
number of species in as.ProbaVector. "Rarefy" is the default value to
estimate the number of species such that the richness of the asymptotic distribution
rarefied to the observed sample size equals the observed number of species in
the data. Used only for extrapolation.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The Abundance Frequency Count (Chao et al., 2015) is the number of species observed each
number of times.
It is a way to summarize the species distribution.
It can be estimated at a specified level of interpolation or extrapolation. Extrapolation relies on the
estimation of the asymptotic distribution of the community by as.ProbaVector
and eq. (5) of Chao et al. (2014).
Value
A two-column matrix. The first column contains the number of observations, the second one the
number of species observed this number of times.
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>
(2014). Rarefaction and extrapolation with Hill numbers: A framework for sampling and estimation
in species diversity studies. Ecological Monographs, 84(1): 45-67.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2015) Unveiling the Species-
Rank Abundance Distribution by Generalizing Good-Turing Sample Coverage Theory. Ecology
96(5): 1189-1201.
See Also
PhyloEntropy, ChaoPD
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest
# and their taxonomy)
data(Paracou618)
# Ns is the vector of abundances of the first plot
Ns <- Paracou618.MC$Nsi[, 1]
# Return the abundance frequency count
(AbdFreqCount(Ns) -> afc)
plot(afc, xlab="Number of observations", ylab="Number of species")
lines(afc)
Accumulation Diversity accumulation.
Description
Diversity and Entropy Accumulation Curves represent the accumulation of entropy with respect to
the sample size.
Usage
as.AccumCurve(x, y, low = NULL, high = NULL)
is.AccumCurve(x)
EntAC(Ns, q = 0, n.seq = seq_len(sum(Ns)), PCorrection="Chao2015", Unveiling="geom",
RCorrection="Rarefy", NumberOfSimulations = 0, Alpha = 0.05,
ShowProgressBar = TRUE, CheckArguments = TRUE)
DivAC(Ns, q = 0, n.seq = seq_len(sum(Ns)), PCorrection="Chao2015", Unveiling="geom",
RCorrection="Rarefy", NumberOfSimulations = 0, Alpha = 0.05,
ShowProgressBar = TRUE, CheckArguments = TRUE)
## S3 method for class 'AccumCurve'
plot(x, ..., main = NULL,
xlab = "Sample Size", ylab = NULL, ylim = NULL,
LineWidth = 2, ShadeColor = "grey75", BorderColor = "red")
## S3 method for class 'AccumCurve'
autoplot(object, ..., main = NULL,
xlab = "Sample Size", ylab = NULL,
ShadeColor = "grey75", alpha = 0.3, BorderColor = "red",
col = ggplot2::GeomLine$default_aes$colour,
lty = ggplot2::GeomLine$default_aes$linetype,
lwd = ggplot2::GeomLine$default_aes$size)
Arguments
x An object. A numeric vector in as.AccumCurve.
object An object.
y A numeric vector.
low A numeric vector.
high A numeric vector.
Ns A numeric vector containing species abundances.
q A number: the order of diversity. Default is 0.
n.seq A sequence of numbers. Accumulation will be calculated at each value.
PCorrection A string containing one of the possible corrections to estimate a probability
distribution in as.ProbaVector: "Chao2015" is the default value. Used only for
extrapolation and q different from 0, 1, 2.
Unveiling A string containing one of the possible unveiling methods to estimate the
probabilities of the unobserved species in as.ProbaVector: "geom" (geometric: the
unobserved species distribution is geometric) is the default value. Used only for
extrapolation and q different from 0, 1, 2.
RCorrection A string containing a correction recognized by Richness to evaluate the total
number of species in as.ProbaVector. "Rarefy" is the default value to
estimate the number of species such that the entropy of the asymptotic distribution
rarefied to the observed sample size equals the observed entropy of the data.
Used only for extrapolation and q different from 0, 1, 2. If q is 0 (extrapolation
of richness), "Rarefy" is replaced by "Jackknife".
NumberOfSimulations
The number of Simulations to build confidence intervals.
Alpha The risk level, 5% by default.
... Additional arguments to be passed to plot. Unused elsewhere.
main The main title of the plot. If NULL (by default), there is no title.
xlab The X axis label, "Sample Size" by default.
ylab The Y axis label. If NULL (by default), "Entropy" or "Diversity" is chosen
according to the object class.
ylim The interval of y values plotted.
LineWidth The width of the line that represents the actual profile.
ShadeColor The color of the shaded confidence envelope.
BorderColor The color of the bounds of the confidence envelope.
alpha Opacity of the confidence envelope, between 0 and 1.
col The color of the geom objects. See "Color Specification" in par.
lty The type of the lines. See lines.
lwd The width of the lines. See lines.
ShowProgressBar
If TRUE (default), a progress bar is shown.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
DivAC or EntAC estimate the diversity or entropy accumulation curve of a distribution. See Tsallis
for details about the computation of entropy at each level of interpolation and extrapolation. In
accumulation curves, extrapolation is done by estimating the asymptotic distribution of the community
and estimating entropy at different levels by interpolation. The asymptotic richness is adjusted so
that the extrapolated part of the accumulation joins the observed value at the sample size.
AccumCurve objects include EntAC and DivAC objects for entropy and diversity accumulation. They
generalize the classical Species Accumulation Curves (SAC), which are diversity accumulation
curves of order q=0.
as.AccumCurve transforms two vectors (where x is the sample size and y the accumulation) into
an object of class AccumCurve.
AccumCurve objects can be plotted with either plot or autoplot methods.
Value
A DivAC or an EntAC object. Both are AccumCurve objects, which are a list:
x The sample size.
y The value of entropy or diversity.
low The lower bound of the confidence envelope of the estimation.
high The upper bound of the confidence envelope of the estimation.
Attibutes "Size" and "Value" contain the actual sample size and the corresponding diversity or
entropy.
AccumCurve objects can be summarized and plotted.
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>
(2014). Rarefaction and extrapolation with Hill numbers: A framework for sampling and estimation
in species diversity studies. Ecological Monographs, 84(1): 45-67.
See Also
Tsallis, Diversity
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Accumulation curve of Simpson's diversity
autoplot(DivAC(Ns, q=2))
AllenH Phylogenetic Entropy of a Community
Description
Calculates the phylogenetic entropy of order q of a probability vector.
Usage
AllenH(Ps, q = 1, PhyloTree, Normalize = TRUE, Prune = FALSE, CheckArguments = TRUE)
Arguments
Ps A probability vector, summing to 1.
q A number: the order of entropy. Default is 1.
PhyloTree An object of class hclust, phylo, phylog or PPtree. The tree is not necessarily
ultrametric.
Normalize If TRUE (default), diversity is not affected by the height of the tree.
If FALSE, it is proportional to the height of the tree.
Prune What to do when some species are in the tree but not in Ps?
If TRUE, the tree is pruned to keep species of Ps only. The height of the tree may
be changed if a pruned branch is related to the root.
If FALSE (default), species with probability 0 are added in Ps.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The phylogenetic entropy is calculated following Allen et al. (2009) for order q = 1 and
Leinster and Cobbold (2011) for other orders. The result is identical to the total entropy calculated by
PhyloEntropy but it is much faster. A single value is returned instead of a PhyloEntropy object,
and no bias correction is available.
The Normalize argument allows normalizing entropy by the height of the tree, similarly to ChaoPD.
Diversity can be calculated for non-ultrametric trees following Leinster and Cobbold (2011) even
though the meaning of the result is not so clear.
Value
A named number equal to the entropy of the community. The name is "None" to recall that no bias
correction is available.
References
<NAME>., <NAME>. and <NAME>. (2009). A New Phylogenetic Diversity Measure Generalizing
the Shannon Index and Its Application to Phyllostomid Bats. American Naturalist 174(2): 236-243.
<NAME>. and <NAME>. (2011). Measuring diversity: the importance of species similarity.
Ecology 93(3): 477-489.
See Also
PhyloEntropy, ChaoPD
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest
# and their taxonomy)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ns)
# Calculate the phylogenetic Shannon diversity of the plot
AllenH(Ps, 1, Paracou618.Taxonomy, Normalize=TRUE)
# Calculate it using PhyloEntropy: more powerful but much slower if the tree has many periods
PhyloEntropy(Ps, 1, Paracou618.Taxonomy, Normalize=TRUE) -> phyE
summary(phyE)
AlphaDiversity Reduced-bias alpha diversity of a metacommunity
Description
Calculates the reduced-bias total alpha diversity of order q of communities.
Usage
AlphaDiversity(MC, q = 1, Correction = "Best", Tree = NULL, Normalize = TRUE,
Z = NULL, CheckArguments = TRUE)
Arguments
MC A MetaCommunity object.
q A number: the order of diversity. Default is 1 for Shannon diversity.
Correction A string containing one of the possible corrections accepted by AlphaEntropy
or "None" or "Best", the default value.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be
ultrametric.
Normalize If TRUE (default), diversity is not affected by the height of the tree.
If FALSE, diversity is proportional to the height of the tree.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Entropy is calculated by AlphaEntropy and transformed into diversity.
Value
An MCdiversity object containing diversity values of each community and of the metacommunity.
References
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
See Also
AlphaEntropy
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Calculate Simpson alpha diversity
summary(AlphaDiversity(Paracou618.MC, 2))
# Compare without correction
summary(AlphaDiversity(Paracou618.MC, 2, Correction = "None"))
# Estimate phylogenetic Simpson alpha diversity
summary(AlphaDiversity(Paracou618.MC, 2, Tree = Paracou618.Taxonomy) -> e)
plot(e)
AlphaEntropy Reduced-bias alpha entropy of a metacommunity
Description
Calculates the reduced-bias total alpha entropy of order q of communities.
Usage
AlphaEntropy(MC, q = 1, Correction = "Best", Tree = NULL, Normalize = TRUE,
Z = NULL, CheckArguments = TRUE)
Arguments
MC A MetaCommunity object.
q A number: the order of diversity. Default is 1 for Shannon entropy.
Correction A string containing one of the possible corrections accepted by the bias-corrected
entropy function (see details) or "None" or "Best", the default value.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
Normalize If TRUE (default), the entropy returned by the function is normalized by the
height of the tree (it is the weighted average value of the entropy in each slice).
If FALSE, it is the unnormalized weighted sum of the results.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
If Tree is not NULL, then phylogenetic entropy is calculated by bcPhyloEntropy; else, if Z is not
NULL, then similarity-based entropy is calculated by bcHqz; else, neutral entropy is calculated by
bcTsallis.
The alpha entropy of each community is calculated and summed according to community weights.
The possible corrections are detailed in Tsallis.
Value
An MCentropy object containing entropy values of each community and of the metacommunity.
References
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
See Also
bcTsallis
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Calculate Simpson alpha entropy
summary(AlphaEntropy(Paracou618.MC, 2))
# Compare without correction
summary(AlphaEntropy(Paracou618.MC, 2, Correction = "None"))
# Estimate phylogenetic Simpson alpha entropy
summary(AlphaEntropy(Paracou618.MC, 2, Tree = Paracou618.Taxonomy) -> e)
plot(e)
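As stated in *Details*, the metacommunity alpha entropy is the weighted sum of community entropies. A minimal sketch of that equivalence (assuming the entropart package and its Paracou618 data are loaded; the `$Wi` and `$Psi` components hold the community weights and the community probability matrix of the MetaCommunity object):

```r
# Alpha entropy computed by the package, without bias correction
e <- AlphaEntropy(Paracou618.MC, q = 2, Correction = "None")
# Manual weighted sum of the Tsallis entropy of each community
manual <- sum(Paracou618.MC$Wi *
  apply(Paracou618.MC$Psi, 2, Tsallis, q = 2, CheckArguments = FALSE))
# The two values should match
all.equal(as.numeric(e$Total), manual)
```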
BetaDiversity Reduced-bias beta diversity of a metacommunity
Description
Calculates the reduced-bias beta diversity of order q between communities.
Usage
BetaDiversity(MC, q = 1, Correction = "Best", Tree = NULL, Normalize = TRUE,
Z = NULL, CheckArguments = TRUE)
Arguments
MC A MetaCommunity object.
q A number: the order of diversity. Default is 1 for Shannon diversity.
Correction A string containing one of the possible corrections accepted by bcTsallisBeta
or "None" or "Best", the default value.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
Normalize If TRUE (default), diversity is not affected by the height of the tree.
If FALSE, diversity is proportional to the height of the tree.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Entropy is calculated by BetaEntropy and transformed into diversity.
Diversity values of communities are not defined: community entropies are averaged to obtain the
metacommunity entropy, which is transformed into diversity (Marcon et al., 2014).
Value
An MCdiversity object containing diversity value of the metacommunity.
References
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
See Also
BetaEntropy
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Estimate Shannon beta diversity
summary(BetaDiversity(Paracou618.MC, 1))
# Compare without correction
summary(BetaDiversity(Paracou618.MC, 1, Correction = "None"))
# Estimate phylogenetic Shannon beta diversity
summary(BetaDiversity(Paracou618.MC, 1, Tree = Paracou618.Taxonomy) -> e)
BetaEntropy Reduced-bias beta entropy of a metacommunity
Description
Calculates the reduced-bias beta entropy of order q between communities.
Usage
BetaEntropy(MC, q = 1, Correction = "Best", Tree = NULL, Normalize = TRUE,
Z = NULL, CheckArguments = TRUE)
Arguments
MC A MetaCommunity object.
q A number: the order of diversity. Default is 1 for Shannon entropy.
Correction A string containing one of the possible corrections accepted by the bias-corrected
entropy function (see details) or "None" or "Best", the default value.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
Normalize If TRUE (default), the entropy returned by the function is normalized by the
height of the tree (it is the weighted average value of the entropy in each slice).
If FALSE, it is the unnormalized weighted sum of the results.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
If Tree is not NULL, then phylogenetic entropy is calculated by bcPhyloBetaEntropy; else, if
Z is not NULL, then similarity-based entropy is calculated by bcHqzBeta; else, neutral entropy is
calculated by bcTsallisBeta.
The reduced-bias beta entropy of each community is calculated and summed according to community weights.
Note that beta entropy is related to alpha entropy (if q is not 1) and cannot be compared across
communities (Jost, 2007). Rather, calculate the BetaDiversity of the metacommunity.
Value
An MCentropy object containing entropy values of each community and of the metacommunity.
References
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
See Also
bcTsallisBeta, BetaDiversity
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Estimate Shannon beta entropy
summary(BetaEntropy(Paracou618.MC, 1))
# Compare without correction
summary(BetaEntropy(Paracou618.MC, 1, Correction = "None"))
# Estimate phylogenetic Shannon beta entropy
summary(BetaEntropy(Paracou618.MC, 1, Tree = Paracou618.Taxonomy) -> e)
plot(e)
ChaoPD Phylogenetic Diversity of a Community
Description
Calculates the phylogenetic diversity of order q of a probability vector.
Usage
ChaoPD(Ps, q = 1, PhyloTree, Normalize = TRUE, Prune = FALSE, CheckArguments = TRUE)
Arguments
Ps A probability vector, summing to 1.
q A number: the order of diversity. Default is 1.
PhyloTree An object of class hclust, phylo, phylog or PPtree. The tree is not necessarily
ultrametric.
Normalize If TRUE (default), diversity is not affected by the height of the tree.
If FALSE, it is proportional to the height of the tree.
Prune What to do when some species are in the tree but not in Ps?
If TRUE, the tree is pruned to keep species of Ps only. The height of the tree may
be changed if a pruned branch is related to the root.
If FALSE (default), species with probability 0 are added in Ps.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The phylogenetic diversity is calculated following Chao et al. (2010). The result is identical to
the total diversity calculated by PhyloDiversity but it is much faster. A single value is returned
instead of a PhyloDiversity object, and no bias correction is available.
The Normalize argument allows calculating either qD̄(T) (if TRUE) or qPD(T) (if FALSE).
Diversity can be calculated for non-ultrametric trees following Chao et al. (2010), even though the
meaning of the result is less clear (Leinster and Cobbold, 2012).
Value
A named number equal to the diversity of the community. The name is "None" to recall that no bias
correction is available.
References
<NAME>., <NAME>. and <NAME>. (2010). Phylogenetic diversity measures based on Hill numbers.
Philosophical Transactions of the Royal Society B 365(1558): 3599-609.
<NAME>. and <NAME>. (2012). Measuring diversity: the importance of species similarity.
Ecology 93(3): 477-489.
See Also
PhyloDiversity, AllenH
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest
# and their taxonomy)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- Paracou618.MC$Ps
# Calculate the phylogenetic Simpson diversity of the plot
(ChaoPD(Paracou618.MC$Ps, 2, Paracou618.Taxonomy, Normalize=TRUE))
# Calculate it using PhyloDiversity
# (more powerful but much slower if the tree has many periods)
PhyloDiversity(Paracou618.MC$Ps, 2, Paracou618.Taxonomy, Normalize=TRUE) -> phyD
summary(phyD)
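The Normalize and Prune arguments described above can be sketched as follows (assuming the package and data are loaded; results are estimates and are not shown here):

```r
# Unnormalized diversity, qPD(T): proportional to the height of the tree
ChaoPD(Paracou618.MC$Ps, q = 2, Paracou618.Taxonomy, Normalize = FALSE)
# Prune the tree to the species present in Ps instead of padding Ps with zeros
ChaoPD(Paracou618.MC$Ps, q = 2, Paracou618.Taxonomy, Prune = TRUE)
```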
CommunityProfile Diversity or Entropy Profile of a community
Description
Calculates the diversity or entropy profile of a community, applying a community function to a
vector of orders.
Usage
CommunityProfile(FUN, NorP, q.seq = seq(0, 2, 0.1),
NumberOfSimulations = 0, Alpha = 0.05, BootstrapMethod = "Chao2015",
size = 1, ..., ShowProgressBar = TRUE, CheckArguments = TRUE)
as.CommunityProfile(x, y, low = NULL, high = NULL, mid = NULL)
is.CommunityProfile(x)
## S3 method for class 'CommunityProfile'
plot(x, ..., main = NULL,
xlab = "Order of Diversity", ylab = "Diversity", ylim = NULL,
LineWidth = 2, ShadeColor = "grey75", BorderColor = "red")
## S3 method for class 'CommunityProfile'
autoplot(object, ..., main = NULL,
xlab = "Order of Diversity", ylab = "Diversity",
ShadeColor = "grey75", alpha = 0.3, BorderColor = "red",
col = ggplot2::GeomLine$default_aes$colour,
lty = ggplot2::GeomLine$default_aes$linetype,
lwd = ggplot2::GeomLine$default_aes$size)
CEnvelope(Profile, LineWidth = 2, ShadeColor = "grey75", BorderColor = "red", ...)
Arguments
FUN The function to be applied to each value of q.seq. Any function accepting a
numeric vector (or a two-column matrix) and a number as its first two arguments
and an argument named CheckArguments is acceptable (other arguments of the
function are passed by ...). See *Details* for useful entropy and diversity
functions and *Examples* for an ad-hoc one.
NorP A numeric vector. Contains either abundances or probabilities.
q.seq A numeric vector: the sequence of diversity orders to address. Default is from 0
to 2.
NumberOfSimulations
The number of simulations to run, 0 by default.
Alpha The risk level, 5% by default.
BootstrapMethod
The method used to obtain the probabilities to generate bootstrapped communities
from observed abundances. See rCommunity.
size The size of simulated communities used to compute the bootstrap confidence
envelope. 1 (default) means that the actual size must be used.
object An object.
x An object to be tested or plotted or the vector of orders of community profiles
in as.CommunityProfile.
y Entropy or diversity values of each order, corresponding to x values.
low Entropy or diversity lower bound of the confidence envelope, corresponding to
x values.
high Entropy or diversity higher bound of the confidence envelope, corresponding to
x values.
mid Entropy or diversity center value (usually the mean) of the confidence envelope,
corresponding to x values.
Profile A CommunityProfile object to be plotted.
... Additional arguments to be passed to FUN in CommunityProfile, to plot in
plot.CommunityProfile or to lines in CEnvelope.
main The main title of the plot.
xlab The x axis label of the plots.
ylab The y axis label of the plot.
ylim The interval of y values plotted.
LineWidth The width of the line that represents the actual profile.
ShadeColor The color of the shaded confidence envelope.
BorderColor The color of the bounds of the confidence envelope.
alpha Opacity of the confidence envelope, between 0 and 1.
col The color of the geom objects. See "Color Specification" in par.
lty The type of the lines. See lines.
lwd The width of the lines. See lines.
ShowProgressBar
If TRUE (default), a progress bar is shown.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The function CommunityProfile is used to calculate diversity or entropy profiles based on
community functions such as Tsallis or ChaoPD. The first two arguments of the function must be a
probability or abundance vector and a number (q). Additional arguments cannot be checked.
Unexpected results may be returned if FUN is not used properly.
If NumberOfSimulations is greater than 0, a bootstrap confidence interval is produced by
simulating communities with rCommunity and calculating their profiles. The size of those communities
may be that of the actual community or specified by size. Simulating communities implies a
downward bias in the estimation: rare species of the actual community may have abundance zero
in simulated communities. Simulated diversity values are recentered if size = 1 so that their mean
is that of the actual community. Otherwise, it is assumed that the bias is of interest and must not be
corrected.
CommunityProfile objects can be plotted. They can also be added to the current plot by CEnvelope.
Value
A CommunityProfile, which is a list:
x The order q values
y The entropy or diversity values returned by FUN
low The lower bound of the confidence interval
high The upper bound of the confidence interval
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Plot diversity estimated without bias correction
plot(CommunityProfile(Diversity, Paracou618.MC$Ps, seq(0, 2, 0.2)),
lty=3, ylim=c(50, 350))
# Estimate diversity, with a confidence envelope
# (only 10 simulations to save time, should be 1000)
Profile <- CommunityProfile(Diversity, as.AbdVector(Paracou618.MC$Ns),
seq(0, 2, 0.2), Correction="UnveilJ", NumberOfSimulations=10)
# Complete the plot, and add the legend
CEnvelope(Profile, main="Paracou Plots Diversity")
legend("topright", c("Bias Corrected", "Biased"), lty=c(1,3), inset=0.01)
# Advanced use with beta-diversity functions:
# Profile of the beta entropy of the first community of Paracou618.
# Observed and expected probabilities are bound into a 2-column matrix
# An intermediate function is necessary to separate them before calling TsallisBeta
# The CheckArguments argument is mandatory but does not need to be set: CommunityProfile() sets it to FALSE
CommunityProfile(function(PandPexp, q, CheckArguments)
{TsallisBeta(PandPexp[, 1], PandPexp[, 2], q)},
NorP=cbind(Paracou618.MC$Psi[, 1], Paracou618.MC$Ps), q.seq=seq(0, 2, 0.2))
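The as.CommunityProfile constructor listed in *Usage* can also assemble a profile from values computed elsewhere, so that plot() and CEnvelope() apply to them. A minimal sketch (assuming the package and data are loaded):

```r
# Compute diversity values by hand over a sequence of orders
q.seq <- seq(0, 2, 0.5)
div <- sapply(q.seq, function(q) Diversity(Paracou618.MC$Ps, q))
# Wrap them into a CommunityProfile object
Profile <- as.CommunityProfile(q.seq, div)
is.CommunityProfile(Profile) # TRUE
plot(Profile)
```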
Coverage Sample coverage of a community
Description
"Coverage" calculates an estimator of the sample coverage of a community described by its abun-
dance vector. "Coverage2Size" estimates the sample size corresponding to the chosen sample
coverage.
Usage
Coverage(Ns, Estimator = "Best", Level = NULL, CheckArguments = TRUE)
Coverage2Size(Ns, SampleCoverage, CheckArguments = TRUE)
Arguments
Ns A numeric vector containing species abundances.
Estimator A string containing one of the possible estimators: "ZhangHuang", "Chao",
"Turing", "Good". "Best" is for "ZhangHuang".
Level The level of interpolation or extrapolation, i.e. an abundance.
SampleCoverage The target sample coverage.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The sample coverage C of a community is the total probability of occurrence of the species observed
in the sample. 1 − C is the probability for an individual of the whole community to belong to a
species that has not been sampled.
The historical estimator is due to Turing (Good, 1953). It only relies on singletons (species observed
only once). Chao’s (Chao and Shen, 2010) estimator uses doubletons too and Zhang-Huang’s (Chao
et al., 1988; Zhang and Huang, 2007) uses the whole distribution.
If Level is not null, the sample coverage is interpolated or extrapolated. Interpolation by the Good
estimator relies on the equality between the sampling deficit and the generalized Simpson entropy
(Good, 1953). The Chao (2014) estimator allows extrapolation, reliable up to a level equal to
double the sample size.
Value
"Coverage" returns a named number equal to the calculated sample coverage. The name is that of
the estimator used. "Coverage2Size" returns a number equal to the sample size corresponding to
the chosen sample coverage.
References
<NAME>., <NAME>. and <NAME>. (1988). A generalized Good’s nonparametric coverage estimator.
Chinese Journal of Mathematics 16: 189-199.
<NAME>. and <NAME>. (2010). Program SPADE: Species Prediction And Diversity Estimation.
Program and user’s guide. CARE, Hsin-Chu, Taiwan.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>
(2014). Rarefaction and extrapolation with Hill numbers: A framework for sampling and estimation
in species diversity studies. Ecological Monographs, 84(1): 45-67.
<NAME>. (1953). On the Population Frequency of Species and the Estimation of Population
Parameters. Biometrika 40(3/4): 237-264.
<NAME>. and <NAME>. (2007). Turing’s formula revisited. Journal of Quantitative Linguistics
14(2-3): 222-241.
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Ns is the vector of abundances of the metacommunity
Ns <- Paracou618.MC$Ns
# Calculate the sample coverage of the metacommunity
Coverage(Ns) # Stored in Paracou618.SampleCoverage
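The Level argument and Coverage2Size described above can be sketched as follows (same Ns as in the example; results are estimates and are not shown here):

```r
# Interpolate the sample coverage at half the observed sample size
Coverage(Ns, Level = round(sum(Ns) / 2))
# Extrapolate it to twice the sample size (the reliability limit)
Coverage(Ns, Level = 2 * sum(Ns))
# Sample size needed to reach a 99% sample coverage
Coverage2Size(Ns, SampleCoverage = 0.99)
```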
Diversity Hill number of a community
Description
Calculates the HCDT (generalized) diversity of order q of a probability vector.
Usage
Diversity(NorP, q = 1, ...)
bcDiversity(Ns, q = 1, Correction = "Best", CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
Diversity(NorP, q = 1, ...,
CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
Diversity(NorP, q = 1, Correction = "Best", Level = NULL,
PCorrection="Chao2015", Unveiling="geom", RCorrection="Rarefy", ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
Diversity(NorP, q = 1, Correction = "Best", Level = NULL,
PCorrection="Chao2015", Unveiling="geom", RCorrection="Rarefy", ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
Diversity(NorP, q = 1, Correction = "Best", Level = NULL,
PCorrection="Chao2015", Unveiling="geom", RCorrection="Rarefy", ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a prob-
ability vector (ProbaVector). Contains either abundances or probabilities.
q A number: the order of diversity. Default is 1.
Correction A string containing one of the possible asymptotic estimators: "None" (no
correction), "ChaoShen", "GenCov", "Grassberger", "Holste", "Bonachela",
"ZhangGrabchak", or "ChaoJost", "Marcon", "UnveilC", "UnveiliC", "UnveilJ"
or "Best", the default value. Currently, "Best" is "UnveilJ".
Level The level of interpolation or extrapolation. It may be a chosen sample size
(an integer) or a sample coverage (a number between 0 and 1).
PCorrection A string containing one of the possible corrections to estimate a probability
distribution in as.ProbaVector: "Chao2015" is the default value. Used only for
extrapolation.
Unveiling A string containing one of the possible unveiling methods to estimate the
probabilities of the unobserved species in as.ProbaVector: "geom" (the unobserved
species distribution is geometric) is the default value. Used only for
extrapolation.
RCorrection A string containing a correction recognized by Richness to evaluate the total
number of species in as.ProbaVector. "Rarefy" is the default value to estimate
the number of species such that the diversity of the asymptotic distribution
rarefied to the observed sample size equals the observed diversity of the data.
Used only for extrapolation.
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Diversity calls Tsallis to calculate entropy and transforms it into diversity by calculating its
deformed exponential.
Bias correction requires the number of individuals to estimate sample Coverage. See Tsallis for
details.
The functions are designed to be used as simply as possible. Diversity is a generic method. If its
first argument is an abundance vector, an integer vector or a numeric vector which does not sum to
1, the bias corrected function bcDiversity is called.
Diversity can be estimated at a specified level of interpolation or extrapolation, either a chosen
sample size or sample coverage (Chao et al., 2014), rather than its asymptotic value. See Tsallis
for details.
Value
A named number equal to the calculated diversity. The name is that of the bias correction used.
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>
(2014). Rarefaction and extrapolation with Hill numbers: A framework for sampling and estimation
in species diversity studies. Ecological Monographs, 84(1): 45-67.
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
See Also
Tsallis, expq, AbdVector, ProbaVector
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Species probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ns)
# Whittaker plot
plot(Ns)
# Calculate diversity of order 1, i.e. Shannon's diversity
Diversity(Ps, q=1)
# Calculate it with estimation bias correction (asymptotic estimator)
Diversity(Ns, q=1)
# Extrapolate it up to 99.9% sample coverage (close to the asymptotic estimator)
Diversity(Ns, q=1, Level=0.999)
# Rarefy it to half the sample size
Diversity(Ns, q=1, Level=sum(Ns)/2)
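As *Details* states, Diversity is the deformed exponential of order q of Tsallis entropy. A sketch of that relation with the probabilities defined in the example above (expq and Tsallis are part of the same package):

```r
# Diversity of order 2 computed directly
q <- 2
Diversity(Ps, q = q)
# Same value, computed in two steps via entropy and its deformed exponential
expq(Tsallis(Ps, q = q), q = q)
```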
DivEst Diversity Estimation of a metacommunity
Description
Estimates diversity of a metacommunity.
Usage
DivEst(q = 0, MC, Biased = TRUE, Correction = "Best", Tree = NULL,
Normalize = TRUE, Z = NULL, Simulations = 100,
ShowProgressBar = TRUE, CheckArguments = TRUE)
is.DivEst(x)
## S3 method for class 'DivEst'
plot(x, ..., main = NULL, Which = "All",
Quantiles = c(0.025, 0.975), colValue = "red", lwdValue = 2, ltyValue = 2,
colQuantiles = "black", lwdQuantiles = 1, ltyQuantiles = 2)
## S3 method for class 'DivEst'
autoplot(object, ..., main = NULL, Which = "All",
labels = NULL, font.label = list(size=11, face="plain"),
Quantiles = c(0.025, 0.975), colValue = "red",
colQuantiles = "black", ltyQuantiles = 2)
## S3 method for class 'DivEst'
summary(object, ...)
Arguments
q A number: the order of diversity.
MC A MetaCommunity object.
Biased Logical; if FALSE, a bias correction is applied.
Correction A string containing one of the possible corrections. The correction must be
accepted by DivPart. "Best" is the default value.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
Normalize If TRUE (default), diversity is not affected by the height of the tree.
If FALSE, diversity is proportional to the height of the tree.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1.
Simulations The number of simulations to build confidence intervals.
ShowProgressBar
If TRUE (default), a progress bar is shown.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
x An object to be tested or plotted.
main The title of the plot.
Which May be "Alpha", "Beta" or "Gamma" to respectively plot the metacommunity’s
alpha, beta or gamma diversity. If "All" (default), all three plots are shown.
labels Vector of labels to be added to multiple plots. "auto" is the same as c("a",
"b", "c", "d").
font.label A list of arguments to customize labels. See ggarrange.
object A DivEst object to be summarized or plotted.
Quantiles A vector containing the quantiles of interest.
colValue The color of the line representing the real value on the plot.
lwdValue The width of the line representing the real value on the plot.
ltyValue The line type of the line representing the real value on the plot.
colQuantiles The color of the lines representing the quantiles on the plot.
lwdQuantiles The width of the lines representing the quantiles on the plot.
ltyQuantiles The line type of the lines representing the quantiles on the plot.
... Additional arguments to be passed to the generic methods.
Details
DivEst estimates the diversity of the metacommunity and partitions it into alpha and beta
components.
If Tree is provided, phylogenetic diversity is calculated; else, if Z is not NULL, similarity-based
entropy is calculated.
Bootstrap confidence intervals are calculated by drawing simulated communities from a multino-
mial distribution following the observed frequencies (Marcon et al, 2012; 2014).
Value
A DivEst object which is a DivPart object with an additional item in its list:
SimulatedDiversity
A matrix containing the simulated values of alpha, beta and gamma diversity.
DivEst objects can be summarized and plotted.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2012). The Decomposition of Shannon’s
Entropy and a Confidence Interval for Beta Diversity. Oikos 121(4): 516-522.
<NAME>., <NAME>., <NAME>., <NAME>. and Lang, G. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
See Also
DivPart
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Estimate Shannon diversity.
Estimation <- DivEst(q = 1, Paracou618.MC, Biased = FALSE, Correction = "UnveilJ",
Simulations = 20)
plot(Estimation)
summary(Estimation)
DivPart Diversity Partition of a metacommunity
Description
Partitions the diversity of a metacommunity into alpha and beta components.
Usage
DivPart(q = 1, MC, Biased = TRUE, Correction = "Best", Tree = NULL,
Normalize = TRUE, Z = NULL, CheckArguments = TRUE)
is.DivPart(x)
## S3 method for class 'DivPart'
plot(x, ...)
## S3 method for class 'DivPart'
autoplot(object, col = ggplot2::GeomRect$default_aes$fill,
border = ggplot2::GeomRect$default_aes$colour, ...)
## S3 method for class 'DivPart'
summary(object, ...)
Arguments
q A number: the order of diversity. Default is 1.
MC A MetaCommunity object.
Biased Logical; if FALSE, a bias correction is applied.
Correction A string containing one of the possible corrections.
The correction must be accepted by AlphaEntropy, BetaEntropy and GammaEntropy.
"Best" is the default value.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
Normalize If TRUE (default), diversity is not affected by the height of the tree.
If FALSE, diversity is proportional to the height of the tree.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
x An object to be tested or plotted.
object A DivPart object to be summarized or plotted.
col The color used to fill the bars. See "Color Specification" in par.
border The color of the borders around the bars. See rect.
... Additional arguments to be passed to the generic methods.
Details
DivPart partitions the diversity of the metacommunity into alpha and beta components. It supports
estimation-bias correction.
If Tree is provided, phylogenetic diversity is calculated; else, if Z is not NULL, similarity-based
entropy is calculated.
Beta diversity/entropy is calculated from Gamma and Alpha when bias correction is required, so
community values are not available.
Value
A DivPart object. It is a list:
MetaCommunity The name of the MetaCommunity object containing inventory data.
Order The value of q.
Biased Logical. If FALSE, bias corrected values of diversity have been computed.
Correction The estimation bias correction used to calculate diversity.
Method The method used to calculate entropy ("HCDT", "Similarity-based").
Tree The phylogenetic or functional tree used to calculate phylodiversity.
Normalized Logical. Indicates whether phylodiversity is normalized or proportional to the
height of the tree.
Z The matrix used to calculate similarity-based entropy.
TotalAlphaDiversity
The alpha diversity of communities.
TotalBetaDiversity
The beta diversity of communities.
GammaDiversity The gamma diversity of the metacommunity.
CommunityAlphaDiversities
A vector containing the alpha diversity of each community.
TotalAlphaEntropy
The alpha entropy of communities.
TotalBetaEntropy
The beta entropy of communities.
GammaEntropy The gamma entropy of the metacommunity.
CommunityAlphaEntropies
A vector containing the alpha entropy of each community.
DivPart objects can be summarized and plotted.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2012). The Decomposition of Shannon’s
Entropy and a Confidence Interval for Beta Diversity. Oikos 121(4): 516-522.
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
See Also
DivProfile
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Estimate Shannon diversity.
summary(DivPart(q = 1, Paracou618.MC, Biased = FALSE) -> dp)
plot(dp)
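The components listed in *Value* can be read directly from the returned list. A sketch with the object estimated in the example above:

```r
# Access each component of the diversity partition
dp <- DivPart(q = 1, Paracou618.MC, Biased = FALSE)
dp$GammaDiversity            # gamma diversity of the metacommunity
dp$TotalAlphaDiversity       # alpha diversity of communities
dp$TotalBetaDiversity        # beta diversity of communities
dp$CommunityAlphaDiversities # per-community alpha diversities
```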
DivProfile Diversity Profile of a metacommunity
Description
Calculate the diversity profiles (alpha, beta, gamma) of a metacommunity.
Usage
DivProfile(q.seq = seq(0, 2, 0.1), MC, Biased = TRUE, Correction = "Best",
Tree = NULL, Normalize = TRUE, Z = NULL,
NumberOfSimulations = 0, Alpha = 0.05,
ShowProgressBar = TRUE, CheckArguments = TRUE)
is.DivProfile(x)
## S3 method for class 'DivProfile'
plot(x, ..., main = NULL, xlab = "Order of Diversity",
ylab = NULL, Which = "All",
LineWidth = 2, ShadeColor = "grey75", BorderColor = "red")
## S3 method for class 'DivProfile'
autoplot(object, ..., main = NULL, xlab = "Order of Diversity",
ylab = NULL, Which = "All", ShadeColor = "grey75", alpha = 0.3, BorderColor = "red",
labels = NULL, font.label = list(size=11, face="plain"),
col = ggplot2::GeomLine$default_aes$colour,
lty = ggplot2::GeomLine$default_aes$linetype,
lwd = ggplot2::GeomLine$default_aes$size)
## S3 method for class 'DivProfile'
summary(object, ...)
Arguments
q.seq A numeric vector.
MC A MetaCommunity object.
Biased Logical; if FALSE, a bias correction is applied.
Correction A string containing one of the possible corrections.
The correction must be accepted by AlphaEntropy, BetaEntropy and GammaEntropy.
"Best" is the default value.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
Normalize If TRUE (default), diversity is not affected by the height of the tree.
If FALSE, diversity is proportional to the height of the tree.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1.
NumberOfSimulations
The number of simulations to run, 0 by default.
Alpha The risk level, 5% by default.
ShowProgressBar
If TRUE (default), a progress bar is shown.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
x An object to be tested or plotted.
main The main title of the plot. Ignored if Which = "All".
xlab The x axis label of the plots.
ylab The y axis label of the plot. Ignored if Which = "All".
Which May be "Communities", "Alpha", "Beta" or "Gamma" to respectively plot the
alpha diversity of communities or the metacommunity’s alpha, beta or gamma
diversity. If "All" (default), all four plots are shown.
LineWidth The width of the line that represents the actual profile.
ShadeColor The color of the shaded confidence envelope.
BorderColor The color of the bounds of the confidence envelope.
alpha Opacity of the confidence envelope, between 0 and 1.
labels Vector of labels to be added to multiple plots. "auto" is the same as c("a",
"b", "c", "d").
font.label A list of arguments to customize labels. See ggarrange.
col The color of the geom objects. See "Color Specification" in par.
lty The type of the lines. See lines.
lwd The width of the lines. See lines.
object A MCdiversity object to be summarized or plotted.
... Additional arguments to be passed to the generic methods.
Details
DivProfile partitions the diversity of the metacommunity into alpha and beta components along the vector of orders q.seq. It supports estimation-bias correction.
If Tree is provided, the phylogenetic diversity is calculated; else, if Z is not NULL, similarity-based entropy is calculated.
Beta diversity/entropy is calculated from Gamma and Alpha when bias correction is required, so community values are not available.
If NumberOfSimulations is greater than 0, a bootstrap confidence interval is produced by simulating communities from a multinomial distribution following the observed frequencies (Marcon et al., 2012; 2014) and calculating their profiles.
Value
A DivProfile object. It is a list:
MetaCommunity The name of the MetaCommunity object containing inventory data.
Order A vector containing the values of q.
Biased Logical. If FALSE, bias corrected values of diversity have been computed.
Correction The estimation bias correction used to calculate diversity. Usually a string, but
it may be a list if different corrections have been used in the estimation of phylodiversity.
Method The method used to calculate entropy ("HCDT", "Similarity-based").
Tree The phylogenetic or functional tree used to calculate phylodiversity.
Normalized Logical. Indicates whether phylodiversity is normalized or proportional to the
height of the tree.
Z The matrix used to calculate similarity-based entropy.
CommunityAlphaDiversities
A matrix containing the alpha diversity of each community.
TotalAlphaDiversity
A vector containing the alpha diversity of communities for each order.
BetaDiversity A vector containing the beta diversity of communities for each order.
GammaDiversity A vector containing the gamma diversity of the metacommunity for each order.
CommunityAlphaEntropies
A matrix containing the alpha entropy of each community.
TotalAlphaEntropy
A vector containing the alpha entropy of communities for each order.
BetaEntropy A vector containing the beta entropy of communities for each order.
GammaEntropy A vector containing the gamma entropy of the metacommunity for each order.
Confidence envelopes
Total Alpha, Beta and Gamma Entropy and Diversity may come with a confidence envelope whose bounds are stored in twelve more vectors whose names are suffixed Low or High, such as GammaEntropyLow.
DivProfile objects can be summarized and plotted.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2012). The Decomposition of Shannon’s
Entropy and a Confidence Interval for Beta Diversity. Oikos 121(4): 516-522.
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
See Also
DivPart
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Estimate diversity.
Profile <- DivProfile(q.seq = seq(0, 2, 0.1), Paracou618.MC, Biased = FALSE)
plot(Profile)
autoplot(Profile)
summary(Profile)
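The confidence envelope described in the Value section can be requested directly through NumberOfSimulations. A minimal sketch (the suffixed names GammaDiversityLow and GammaDiversityHigh are assumed from the naming rule in the Value section; only 10 simulations are run to keep the example fast):

```r
library(entropart)
# Paracou618.MC: metacommunity data shipped with the package
data(Paracou618)
# Coarse profile with a bootstrap confidence envelope
Profile <- DivProfile(q.seq = seq(0, 2, 0.5), Paracou618.MC,
                      NumberOfSimulations = 10, ShowProgressBar = FALSE)
# Envelope bounds are stored in vectors suffixed Low / High
# (names assumed from the Value section's naming rule)
Profile$GammaDiversityLow
Profile$GammaDiversityHigh
# The envelope is shaded around the gamma profile
plot(Profile, Which = "Gamma")
```

More simulations yield a smoother envelope at the cost of runtime.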
Dqz Similarity-based diversity of a community
Description
Calculates the diversity of order q of a probability vector according to a similarity matrix.
Usage
Dqz(NorP, q = 1, Z = diag(length(NorP)), ...)
bcDqz(Ns, q = 1, Z = diag(length(Ns)), Correction = "Best", CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
Dqz(NorP, q = 1, Z = diag(length(NorP)), ...,
CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
Dqz(NorP, q = 1, Z = diag(length(NorP)), Correction = "Best", ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
Dqz(NorP, q = 1, Z = diag(length(NorP)), Correction = "Best", ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
Dqz(NorP, q = 1, Z = diag(length(NorP)), Correction = "Best", ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities.
q A number: the order of diversity. Default is 1.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1. Default is the
identity matrix to calculate neutral diversity.
Correction A string containing one of the possible corrections: "None" (no correction),
"HorvitzThomson", "MarconZhang" or "Best", the default value. The "MarconZhang"
correction assumes a similarity matrix.
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Diversity is calculated following Leinster and Cobbold (2012): it is the reciprocal of the (generalized) average (of order q) of the community species ordinariness.
A similarity matrix is used (as for Hqz), not a distance matrix as in Ricotta and Szeidl (2006). See the example.
Bias correction requires the number of individuals. Use bcDqz and choose the Correction. Correction techniques are from Marcon et al. (2014).
Currently, the "Best" correction is the max value of "HorvitzThomson" and "MarconZhang".
The functions are designed to be used as simply as possible. Dqz is a generic method. If its first
argument is an abundance vector, an integer vector or a numeric vector which does not sum to
1, the bias corrected function bcDqz is called. Explicit calls to bcDqz (with bias correction) or
to Dqz.ProbaVector (without correction) are possible to avoid ambiguity. The .integer and
.numeric methods accept Ps or Ns arguments instead of NorP for backward compatibility.
Value
A named number equal to the calculated diversity. The name is that of the bias correction used.
References
<NAME>. and <NAME>. (2012). Measuring diversity: the importance of species similarity.
Ecology 93(3): 477-489.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
See Also
Hqz, PhyloDiversity
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Prepare the similarity matrix
DistanceMatrix <- as.matrix(Paracou618.dist)
# Similarity can be 1 minus normalized distances between species
Z <- 1 - DistanceMatrix/max(DistanceMatrix)
# Calculate diversity of order 2
Dqz(Paracou618.MC$Ns, 2, Z)
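As a sanity check (a sketch, not taken from the manual): with the default identity matrix, species ordinariness reduces to species frequency, so Dqz returns neutral diversity, i.e. the Hill number of order q obtained as the deformed exponential of Tsallis entropy:

```r
library(entropart)
data(Paracou618)
# Species probabilities of the metacommunity
Ps <- as.ProbaVector(Paracou618.MC$Ps)
# Default Z = identity: similarity-based diversity is neutral diversity
Dqz(Ps, q = 2)
# Same value as the Hill number of order 2
expq(Tsallis(Ps, q = 2), q = 2)
```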
EightSpAbundance Abundances of 8 species to run examples.
Description
This dataset is a light-weight example.
Usage
data(Paracou618)
Format
A named vector.
Examples
data(Paracou618)
EightSpAbundance
EightSpTree Functional tree with 8 species.
Description
This dataset is a light-weight example.
Usage
data(Paracou618)
Format
An object of class phylog containing a functional tree.
Examples
data(Paracou618)
# Preprocess the tree to be able to plot it
# without loading ade4 package
plot(Preprocess.Tree(EightSpTree), hang=-0.01)
Enq Grassberger’s expectation of n^q
Description
Expected value of n^q when n follows a Poisson law.
Usage
Enq(n, q)
Arguments
n A positive integer vector.
q A positive number.
Details
The expectation of n^q when n follows a Poisson distribution has been derived by Grassberger (1988).
Value
A vector of the same length as n containing the transformed values.
Note
The function is computed using the beta function.
Its value is 0 for n − q + 1 < 0.
References
Grassberger, P. (1988). Finite sample corrections to entropy and dimension estimates. Physics
Letters A 128(6-7): 369-373.
Examples
# Compare
n <- c(2,3)
Enq(n, q=2)
# with
n^2
# Result is 1
Enq(n, q=0)
# Result is 0
Enq(n, q=5)
EntropyCI Entropy of Monte-Carlo simulated communities
Description
Resamples a community by Monte-Carlo simulations of a multinomial distribution and returns a
vector of entropy values to calculate confidence intervals.
Usage
EntropyCI(FUN, Simulations = 100, Ns, BootstrapMethod = "Chao2015",
ShowProgressBar = TRUE, ..., CheckArguments = TRUE)
Arguments
FUN The entropy function to be applied to each simulated community. May be any
entropy function accepting a vector of species abundances, such as bcTsallis,
bcShannon, bcSimpson or bcPhyloEntropy.
Simulations The number of simulations to build confidence intervals.
Ns A numeric vector containing species abundances.
BootstrapMethod
The method used to obtain the probabilities to generate bootstrapped communities from observed abundances. See rCommunity.
... Additional arguments to be passed to FUN.
ShowProgressBar
If TRUE (default), a progress bar is shown.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
This function is used to obtain the distribution of entropy and eventually calculate confidence intervals. It draws simulated communities according to a multinomial distribution with the same number of individuals and probabilities as the actual community. It calculates the entropy of each simulated community. Last, it recenters the distribution of entropy values around the actual value of entropy, following Marcon et al. (2012): the estimation bias of the simulated communities' entropy cannot be corrected analytically, but it does not affect the distribution shape.
Diversity cannot be recentered this way, so diversity functions should not be used. Unexpected results will be obtained if inappropriate functions are used.
Value
A numeric vector containing the entropy value of each simulated community.
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2012). The Decomposition of Shannon’s
Entropy and a Confidence Interval for Beta Diversity. Oikos 121(4): 516-522.
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Abundance (all estimators will include bias correction)
Ns <- as.AbdVector(Paracou618.MC$Ns)
q <- 1
# Estimate entropy and transform it into diversity
RealEst <- expq(Tsallis(Ns, q), q)
# Transform the distribution of Tsallis entropy into diversity
SimulatedDiversity <- expq(EntropyCI(Tsallis, Simulations=50, Ns, q=q), q)
# Figure
plot(density(SimulatedDiversity), col="black", lwd=2, main="", xlab ="Diversity")
abline(v=RealEst, col="red", lwd=2, lty=2)
abline(v=quantile(SimulatedDiversity, probs = 0.025), col="black", lwd=1, lty=3)
abline(v=quantile(SimulatedDiversity, probs = 0.975), col="black", lwd=1, lty=3)
legend("topright", c("Real value", "Confidence interval"), lty=c(2,3),
col=c("red", "black"), inset=0.01)
# Print results
cat("Estimated Diversity:", RealEst)
quantile(SimulatedDiversity, probs = c(0.025, 0.975))
expq Exponential of order q
Description
Calculates the deformed exponential of order q.
Usage
expq(x, q)
expq.CommunityProfile(Profile)
Arguments
x A numeric vector.
Profile A CommunityProfile.
q A number.
Details
The deformed exponential is defined as exp_q(x) = (x(1 − q) + 1)^(1/(1−q)).
For q > 1, ln_q(+∞) = 1/(q − 1), so exp_q(x) is not defined for x > 1/(q − 1).
expq.CommunityProfile calculates the deformed exponential of a CommunityProfile. Its $x item
(the order of diversity) is kept unchanged whilst other items are set to their exponential of order $x.
Thus, an entropy profile is transformed into a diversity profile.
Value
A vector of the same length as x containing the transformed values or a CommunityProfile.
References
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>. (1994). What are the numbers that experiments provide? Quimica Nova 17(6): 468-471.
See Also
lnq
Examples
curve(exp(x), -5, 0, lty=3)
curve(expq(x, 2), -5, 0, lty=2, add=TRUE)
curve(expq(x, 3), -5, 0, lty=1, add=TRUE)
legend("topleft", legend = c("exp(x)", "exp2(x)", "exp3(x)"), lty = c(3, 2, 1), inset=0.02)
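Since expq inverts the deformed logarithm lnq, a quick round-trip check (a sketch; the values are chosen inside the domain of definition, x > 0 for lnq and x < 1/(q − 1) for expq):

```r
library(entropart)
x <- 0.5
q <- 2
# expq and lnq are inverse functions (within their domains)
expq(lnq(x, q), q)  # recovers 0.5
lnq(expq(x, q), q)  # recovers 0.5
```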
GammaDiversity Reduced-bias gamma diversity of a metacommunity
Description
Calculates the reduced-bias diversity of order q of a metacommunity.
Usage
GammaDiversity(MC, q = 1, Correction = "Best", Tree = NULL, Normalize = TRUE,
Z = NULL, CheckArguments = TRUE)
Arguments
MC A MetaCommunity object.
q A number: the order of diversity. Default is 1.
Correction A string containing one of the possible corrections accepted by AlphaEntropy
or "None" or "Best", the default value.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
Normalize If TRUE (default), diversity is not affected by the height of the tree.
If FALSE, diversity is proportional to the height of the tree.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Entropy is calculated by GammaEntropy and transformed into diversity.
Value
The metacommunity’s gamma diversity.
References
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
See Also
GammaEntropy
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Calculate Simpson gamma diversity
GammaDiversity(Paracou618.MC, 2)
# Compare without correction
GammaDiversity(Paracou618.MC, 2, Correction = "None")
# Estimate phylogenetic Simpson gamma diversity
GammaDiversity(Paracou618.MC, 2, Tree = Paracou618.Taxonomy)
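The Details section states that entropy is calculated by GammaEntropy and transformed into diversity; the transformation is the deformed exponential of order q. A sketch of that relation (assuming the same default correction on both sides):

```r
library(entropart)
data(Paracou618)
q <- 2
# Gamma diversity is the deformed exponential of gamma entropy
GammaDiversity(Paracou618.MC, q)
expq(GammaEntropy(Paracou618.MC, q), q)  # should match
```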
GammaEntropy Reduced-bias gamma entropy of a metacommunity
Description
Calculates the reduced-bias Tsallis entropy of order q of a metacommunity.
Usage
GammaEntropy(MC, q = 1, Correction = "Best", Tree = NULL, Normalize = TRUE,
Z = NULL, PhyloDetails = FALSE, CheckArguments = TRUE)
Arguments
MC A MetaCommunity object.
q A number: the order of entropy. Default is 1.
Correction A string containing one of the possible corrections accepted by the bias-corrected
entropy function (see details) or "None" or "Best", the default value.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
Normalize If TRUE (default), the entropy returned by the function is normalized by the
height of the tree (it is the weighted average value of the entropy in each slice).
If FALSE, it is the unnormalized weighted sum of the results.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1.
PhyloDetails If FALSE (default), the function always returns a number. If TRUE and Tree is
not NULL, a PhyloValue object is returned with all details. That is used internally
by DivPart to obtain the corrections used to estimate gamma entropy along the
tree and apply them to the estimation of alpha diversity.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
If Tree is not NULL, then phylogenetic entropy is calculated by bcPhyloEntropy.
Else, if Z is not NULL, then similarity-based entropy is calculated by bcHqz.
Else, neutral entropy is calculated by bcTsallis.
Value
A number equal to the calculated entropy.
References
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
See Also
bcTsallis, bcPhyloEntropy
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Calculate Simpson gamma entropy
GammaEntropy(Paracou618.MC, 2)
# Compare without correction
GammaEntropy(Paracou618.MC, 2, Correction = "None")
# Estimate phylogenetic Simpson gamma entropy
GammaEntropy(Paracou618.MC, 2, Tree = Paracou618.Taxonomy)
GenSimpson Generalized Simpson’s Entropy and Diversity
Description
Calculates the Generalized Simpson’s entropy of order r of a probability or abundance vector, and
its effective number of species.
Usage
GenSimpson(NorP, r = 1, ...)
bcGenSimpson(Ns, r = 1, CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
GenSimpson(NorP, r = 1, ...,
CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
GenSimpson(NorP, r = 1, ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
GenSimpson(NorP, r = 1, ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
GenSimpson(NorP, r = 1, ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL)
GenSimpsonD(NorP, r = 1, ...)
bcGenSimpsonD(Ns, r = 1, CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
GenSimpsonD(NorP, r = 1, ...,
CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
GenSimpsonD(NorP, r = 1, ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
GenSimpsonD(NorP, r = 1, ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
GenSimpsonD(NorP, r = 1, ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities.
r A number: the order of diversity. Default is 1 for Simpson’s diversity.
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The Generalized Simpson’s Entropy (Zhang and Zhou, 2010) of order r is, in the species accumulation curve, the probability for the individual sampled in rank r + 1 to belong to a new species. It is a measure of diversity so long as r is lower than the number of species (Grabchak et al., 2016).
Bias correction requires the number of individuals. Use bcGenSimpson. It is limited to orders r less than or equal to the number of individuals in the community.
The effective number of species GenSimpsonD (explicit diversity) has been derived by Grabchak et al. (2016).
The functions are designed to be used as simply as possible. GenSimpson is a generic method. If
its first argument is an abundance vector, an integer vector or a numeric vector which does not sum
to 1, the bias corrected function bcGenSimpson is called. Explicit calls to bcGenSimpson (with bias
correction) or to GenSimpson.ProbaVector (without correction) are possible to avoid ambiguity.
The .integer and .numeric methods accept Ps or Ns arguments instead of NorP for backward
compatibility.
Value
A named number equal to the calculated index or diversity. The name is either "Biased" or "Unbiased", depending on the estimator used.
Note
The unbiased estimator is calculated by the GenSimp.z function of the EntropyEstimation package.
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2017). The Generalized Simpson’s Entropy is a Measure of Biodiversity. PLOS One 12(3): e0173305.
<NAME>. and <NAME>. (2010). Re-parameterization of multinomial distributions and diversity indices. Journal of Statistical Planning and Inference 140(7): 1731-1738.
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Species probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ns)
# Whittaker plot
plot(Ns)
# Calculate GenSimpson entropy of order 1, equal to Simpson's index of diversity
GenSimpson(Ps, 1)
# Calculate an unbiased estimator of GenSimpson diversity of order 100
GenSimpsonD(Ns, 100)
Hqz Similarity-based entropy of a community
Description
Calculates the entropy of order q of a probability vector according to a similarity matrix.
Usage
Hqz(NorP, q = 1, Z = diag(length(NorP)), ...)
bcHqz(Ns, q = 1, Z = diag(length(Ns)), Correction = "Best", SampleCoverage = NULL,
CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
Hqz(NorP, q = 1, Z = diag(length(NorP)),
..., CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
Hqz(NorP, q = 1, Z = diag(length(NorP)), Correction = "Best",
..., CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
Hqz(NorP, q = 1, Z = diag(length(NorP)), Correction = "Best",
..., CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
Hqz(NorP, q = 1, Z = diag(length(NorP)), Correction = "Best",
..., CheckArguments = TRUE, Ps = NULL, Ns = NULL)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities.
q A number: the order of entropy. Default is 1.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1. Default is the
identity matrix to calculate neutral entropy.
Correction A string containing one of the possible corrections: "None" (no correction),
"ChaoShen", "MarconZhang" or "Best", the default value. The "MarconZhang"
correction assumes a similarity matrix.
SampleCoverage The sample coverage of Ns calculated elsewhere. Used to calculate the gamma
diversity of meta-communities, see details.
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Entropy is calculated following Leinster and Cobbold (2012) after Ricotta and Szeidl (2006): it is
the entropy of order q of the community, using species ordinariness as the information function.
A similarity matrix is used (as for Dqz), not a distance matrix as in Ricotta and Szeidl (2006). See
the example.
Bias correction requires the number of individuals. Use bcHqz and choose the Correction. Correction techniques are from Marcon et al. (2014).
Currently, the "Best" correction is the max value of "ChaoShen" and "MarconZhang".
The functions are designed to be used as simply as possible. Hqz is a generic method. If its first
argument is an abundance vector, an integer vector or a numeric vector which does not sum to
1, the bias corrected function bcHqz is called. Explicit calls to bcHqz (with bias correction) or
to Hqz.ProbaVector (without correction) are possible to avoid ambiguity. The .integer and
.numeric methods accept Ps or Ns arguments instead of NorP for backward compatibility.
The size of a metacommunity (see MetaCommunity) is unknown, so it has to be set according to a rule which does not ensure that its abundances are integer values. Then, classical bias-correction methods do not apply. Providing the SampleCoverage argument allows applying the "ChaoShen" correction to estimate the entropy quite well. The DivPart and GammaEntropy functions use this tweak.
Value
A named number equal to the calculated entropy. The name is that of the bias correction used.
References
<NAME>. and <NAME>. (2012). Measuring diversity: the importance of species similarity.
Ecology 93(3): 477-489.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
<NAME>. and <NAME>. (2006). Towards a unifying approach to diversity measures: Bridging
the gap between the Shannon entropy and Rao’s quadratic index. Theoretical Population Biology
70(3): 237-243.
See Also
Dqz, PhyloEntropy
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Prepare the similarity matrix
DistanceMatrix <- as.matrix(EightSpTree$Wdist^2/2)
# Similarity can be 1 minus normalized distances between species
Z <- 1 - DistanceMatrix/max(DistanceMatrix)
# Calculate diversity of order 2
Ps <- EightSpAbundance/sum(EightSpAbundance)
Hqz(Ps, 2, Z)
# Equal to normalized Rao quadratic entropy when q=2
Rao(Ps, EightSpTree)/max(DistanceMatrix)
# But different from PhyloEntropy for all other q, e.g. 1
Hqz(Ps, 1, Z)
summary(PhyloEntropy(Ps, 1, EightSpTree))
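Hqz and Dqz are linked by the deformed exponential of order q: diversity is exp_q of entropy. A sketch of that relation, reusing the similarity matrix built in the example above (the relation follows from the Leinster and Cobbold definitions; treat the equality as an assumption to verify):

```r
library(entropart)
data(Paracou618)
Ps <- EightSpAbundance / sum(EightSpAbundance)
# Similarity built from the functional tree, as in the example above
DistanceMatrix <- as.matrix(EightSpTree$Wdist^2 / 2)
Z <- 1 - DistanceMatrix / max(DistanceMatrix)
# Similarity-based diversity is the deformed exponential of entropy
Dqz(Ps, 2, Z)
expq(Hqz(Ps, 2, Z), 2)  # should match
```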
HqzBeta Similarity-based beta entropy of a community
Description
Calculates the similarity-based beta entropy of order q of a community belonging to a metacommunity.
Usage
HqzBeta(NorP, NorPexp = NULL, q = 1, Z = diag(length(NorP)), ...)
bcHqzBeta(Ns, Nexp = NULL, q = 1, Z = diag(length(Ns)), Correction = "Best",
CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
HqzBeta(NorP, NorPexp = NULL, q = 1, Z = diag(length(NorP)),
..., CheckArguments = TRUE, Ps = NULL, Pexp = NULL)
## S3 method for class 'AbdVector'
HqzBeta(NorP, NorPexp = NULL, q = 1, Z = diag(length(NorP)), Correction = "Best",
..., CheckArguments = TRUE, Ns = NULL, Nexp = NULL)
## S3 method for class 'integer'
HqzBeta(NorP, NorPexp = NULL, q = 1, Z = diag(length(NorP)), Correction = "Best",
..., CheckArguments = TRUE, Ns = NULL, Nexp = NULL)
## S3 method for class 'numeric'
HqzBeta(NorP, NorPexp = NULL, q = 1, Z = diag(length(NorP)), Correction = "Best",
..., CheckArguments = TRUE, Ps = NULL, Ns = NULL, Pexp = NULL, Nexp = NULL)
Arguments
Ps The probability vector of species of the community.
Pexp The probability vector of species of the metacommunity.
Ns A numeric vector containing species abundances of the community.
Nexp A numeric vector containing species abundances of the metacommunity.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities of the community.
NorPexp A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities of the metacommunity.
q A number, the order of diversity. Default is 1.
Z A relatedness matrix, i.e. a square matrix whose terms are all positive, strictly
positive on the diagonal. Generally, the matrix is a similarity matrix, i.e. the
diagonal terms equal 1 and other terms are between 0 and 1. Default is the
identity matrix to calculate neutral entropy.
Correction A string containing one of the possible corrections: currently, no correction is
available so "Best", the default value, is equivalent to "None".
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The derivation of similarity-based beta entropy can be found in Marcon et al. (2014).
Bias correction requires the number of individuals.
Note that the beta entropy value is related to alpha entropy (if q is not 1) and cannot be compared across communities (Jost, 2007). Beta entropy of a community is not meaningful in general; rather calculate the BetaDiversity of the metacommunity.
The functions are designed to be used as simply as possible. HqzBeta is a generic method. If its first
argument is an abundance vector, an integer vector or a numeric vector which does not sum to 1,
the bias corrected function bcHqzBeta is called. Explicit calls to bcHqzBeta (with bias correction)
or to HqzBeta.ProbaVector (without correction) are possible to avoid ambiguity. The .integer
and .numeric methods accept Ps or Ns arguments instead of NorP for backward compatibility.
Value
A named number equal to the calculated entropy. The name is that of the bias correction used.
References
Jost (2007), Partitioning diversity into independent alpha and beta components. Ecology 88(10):
2427-2439.
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ps)
# Probability distribution of the first plot
Ps1 <- as.ProbaVector(Paracou618.MC$Psi[, 1])
# Prepare the similarity matrix
DistanceMatrix <- as.matrix(Paracou618.dist)
# Similarity can be 1 minus normalized distances between species
Z <- 1 - DistanceMatrix/max(DistanceMatrix)
# Divergence of order 2 between plot 1 and the whole forest
HqzBeta(Ps1, Ps, q=2, Z)
Hurlbert Hurlbert’s Index and Explicit Diversity
Description
Calculates the Hurlbert entropy of order k of a probability or abundance vector, and its effective
number of species.
Usage
Hurlbert(NorP, k = 2, ...)
bcHurlbert(Ns, k = 2, CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
Hurlbert(NorP, k = 2, ...,
CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
Hurlbert(NorP, k = 2, ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
Hurlbert(NorP, k = 2, ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
Hurlbert(NorP, k = 2, ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL)
HurlbertD(NorP, k = 2, ...)
bcHurlbertD(Ns, k = 2, CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
HurlbertD(NorP, k = 2, ...,
CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
HurlbertD(NorP, k = 2, ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
HurlbertD(NorP, k = 2, ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
HurlbertD(NorP, k = 2, ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities.
k A number: the order of diversity. Default is 2 for Simpson’s diversity.
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Hurlbert’s index of diversity (1971) of order k is the expected number of species in a sample of size
k.
Bias correction requires the number of individuals. Use bcHurlbert. It is limited to orders k less
than or equal to the number of individuals in the community.
The effective number of species HurlbertD (explicit diversity) has been derived by Dauby & Hardy
(2012). It is calculated numerically. bcHurlbertD calculates it from the bias-corrected index
bcHurlbert.
The functions are designed to be used as simply as possible. Hurlbert is a generic method. If
its first argument is an abundance vector, an integer vector or a numeric vector which does not
sum to 1, the bias corrected function bcHurlbert is called. Explicit calls to bcHurlbert (with
bias correction) or to Hurlbert.ProbaVector (without correction) are possible to avoid ambiguity.
The .integer and .numeric methods accept Ps or Ns arguments instead of NorP for backward
compatibility.
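The index can be sketched in standard notation (symbols introduced here, not taken from this manual): the probability-based form and Hurlbert's bias-corrected estimator from abundances are

```latex
% Expected number of species in a sample of size k, from probabilities p_s
{}_{k}S = \sum_{s} \left[ 1 - (1 - p_s)^{k} \right]
% Hurlbert's (1971) unbiased estimator from abundances N_s, with N = \sum_s N_s and k \le N
{}_{k}\widehat{S} = \sum_{s} \left[ 1 - \binom{N - N_s}{k} \bigg/ \binom{N}{k} \right]
```

At k = 2, {}_2S = 2 − Σ_s p_s², so {}_2S − 1 equals Simpson's index of diversity 1 − Σ_s p_s².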
Value
A named number equal to the calculated index or diversity. The name is either "Biased" or "Unbiased", depending on the estimator used.
References
<NAME>. & <NAME>. (2012) Sampled-based estimation of diversity sensu stricto by transforming
Hurlbert diversities into effective number of species. Ecography 35(7): 661-672.
Hurlbert (1971) The Nonconcept of Species Diversity: A Critique and Alternative Parameters. Ecology 52(4): 577-586.
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Species probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ns)
# Whittaker plot
plot(Ns)
# Calculate Hurlbert entropy of order 2, equal to Simpson's index of diversity
Hurlbert(Ps, 2)
# Calculate an unbiased estimator of Hurlbert entropy of order 2
Hurlbert(Ns, 2)
KLq Generalized Kullback-Leibler divergence
Description
Calculates the generalized Kullback-Leibler divergence between an observed and an expected probability distribution.
Usage
KLq(Ps, Pexp, q = 1, CheckArguments = TRUE)
Arguments
Ps The observed probability vector.
Pexp The expected probability vector.
q A number: the order of entropy. Default is 1.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The generalized Kullback-Leibler divergence (Borland et al., 1998) converges to the Kullback-
Leibler divergence (Kullback and Leibler, 1951) when q tends to 1. It is used to calculate the
generalized beta entropy (Marcon et al., 2014).
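For reference, the q → 1 limit mentioned above is the classical Kullback-Leibler divergence (standard formulation, not quoted from this manual):

```latex
D_{KL}(P \,\|\, P_{exp}) = \sum_{s} p_s \, \ln \frac{p_s}{p_{exp,s}}
```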
Value
A number equal to the generalized Kullback-Leibler divergence between the probability distributions.
References
<NAME>., <NAME>. and <NAME>. (1998). Information gain within nonextensive thermostatistics. Journal of Mathematical Physics 39(12): 6490-6501.
<NAME>. and <NAME>. (1951). On Information and Sufficiency. The Annals of Mathematical Statistics 22(1): 79-86.
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
See Also
TsallisBeta
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- Paracou618.MC$Ps
# Probability distribution of the first plot
Ps1 <- Paracou618.MC$Psi[, 1]
# Divergence of order 2 between the first plot and the whole forest
KLq(Ps1, Ps, 2)
lnq Logarithm of order q
Description
Calculates the deformed logarithm of order q.
Usage
lnq(x, q)
lnq.CommunityProfile(Profile)
Arguments
x A numeric vector.
Profile A CommunityProfile.
q A number.
Details
The deformed logarithm is defined as lnq(x) = (x^(1-q) - 1)/(1 - q).
The shape of the deformed logarithm is similar to that of the regular one. ln1(x) = log(x).
For q > 1, lnq(+∞) = 1/(q - 1).
lnq.CommunityProfile calculates the deformed logarithm of a CommunityProfile. Its $x item
(the order of diversity) is kept unchanged whilst other items are set to their logarithm of order $x.
Thus, a diversity profile is transformed into an entropy profile.
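The profile transform described above rests on the standard relation between entropy and diversity of order q; as a sketch (notation introduced here):

```latex
% Entropy of order q is the deformed logarithm of the diversity of the same order
H_q = \ln_q\!\left( {}^{q}D \right), \qquad \ln_q x = \frac{x^{1-q} - 1}{1 - q}
```

Applying lnq.CommunityProfile to a diversity profile therefore yields the corresponding entropy profile, order by order.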
Value
A vector of the same length as x containing the transformed values or a CommunityProfile.
References
<NAME>. (1994). What are the numbers that experiments provide? Quimica Nova 17(6): 468-471.
See Also
expq
Examples
curve(log(x), 0, 1, lty=1)
curve(lnq(x, 2), 0, 1, lty=2, add=TRUE)
curve(lnq(x, 3), 0, 1, lty=3, add=TRUE)
legend("topleft", legend = c("log(x)", "ln2(x)", "ln3(x)"), lty = c(1, 2, 3), inset=0.02)
MC Utilities Manipulation of meta-communities
Description
Tools to manipulate meta-communities. From a list of meta-communities, MergeMC creates a meta-community whose communities are each original metacommunity. MergeC creates a metacommunity whose communities are each original community. ShuffleMC randomly assigns original communities to a metacommunity, keeping original weights, and returns a list of meta-communities.
Usage
MergeMC(MClist, Weights = rep(1, length(MClist)), CheckArguments = TRUE)
MergeC(MClist, Weights = rep(1, length(MClist)), CheckArguments = TRUE)
ShuffleMC(MClist, Weights = rep(1, length(MClist)), CheckArguments = TRUE)
Arguments
MClist A list of MetaCommunity objects.
Weights A vector of numbers containing the weight of each metacommunity of the list.
It does not have to be normalized to sum to 1.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
MergeMC is used for hierarchical partitioning of diversity. The gamma diversity of communities of
the list becomes alpha diversity of the merged meta-community.
MergeC creates a new meta-community by mixing original ones. Original communities are kept; their new weight is the product of their original weight and the weight of their original meta-community.
ShuffleMC is used for simulations of the null hypothesis that all metacommunities of the list are
identical.
Value
MergeMC and MergeC return a MetaCommunity.
ShuffleMC returns a list of MetaCommunity objects.
See Also
MetaCommunity
Examples
# First meta-community
(df <- data.frame(C1 = c(10, 10, 10, 10), C2 = c(0, 20, 35, 5),
C3 = c(25, 15, 0, 2), row.names = c("sp1", "sp2", "sp3", "sp4")))
w <- c(1, 2, 1)
MC1 <- MetaCommunity(Abundances = df, Weights = w)
# Second meta-community
(df <- data.frame(C1 = c(10, 4), C2 = c(3, 4), row.names = c("sp1", "sp5")))
w <- c(3, 2)
MC2 <- MetaCommunity(Abundances = df, Weights = w)
# Merge communities
plot(MergeC(list(MC1, MC2)), main="Merged communities")
# Merge metacommunities
plot(MergeMC(list(MC1, MC2)), main="Merged meta-communities")
smc <- ShuffleMC(list(MC1, MC2))
plot(MergeMC(smc), main="Shuffled, then Merged meta-communities")
MCdiversity Meta-Community diversity class.
Description
Methods for objects of type "MCdiversity".
Usage
is.MCdiversity(x)
## S3 method for class 'MCdiversity'
plot(x, ...)
## S3 method for class 'MCdiversity'
autoplot(object, col = ggplot2::GeomCol$default_aes$fill,
border = ggplot2::GeomCol$default_aes$colour, ...)
## S3 method for class 'MCdiversity'
summary(object, ...)
Arguments
x An object to be tested or plotted.
object A MCdiversity object to be summarized or plotted.
col The color used to fill the bars. See "Color Specification" in par.
border The color of the borders around the bars. See hist.
... Additional arguments to be passed to the generic methods.
Value
Meta-community diversity objects are lists containing:
MetaCommunity The name of the MetaCommunity object containing inventory data.
Type The type of diversity ("alpha", "beta" or "gamma").
Order The order of diversity q.
Correction The estimation bias correction used to calculate diversity.
Tree The phylogenetic or functional tree used to calculate phylodiversity.
Normalized Logical. Indicates whether phylodiversity is normalized or proportional to the
height of the tree.
Weights A vector containing the weights of communities.
Communities A vector containing the diversity of communities.
Total The total diversity.
is.MCdiversity returns TRUE if the object is of class MCdiversity.
summary.MCdiversity returns a summary of the object’s value.
MCentropy Meta-Community entropy class.
Description
Methods for objects of type "MCentropy".
Usage
is.MCentropy(x)
## S3 method for class 'MCentropy'
plot(x, ...)
## S3 method for class 'MCentropy'
autoplot(object, col = ggplot2::GeomCol$default_aes$fill,
border = ggplot2::GeomCol$default_aes$colour, ...)
## S3 method for class 'MCentropy'
summary(object, ...)
Arguments
x An object to be tested or plotted.
object A MCentropy object to be summarized or plotted.
col The color used to fill the bars. See "Color Specification" in par.
border The color of the borders around the bars. See hist.
... Additional arguments to be passed to the generic methods.
Value
Meta-community entropy objects are lists containing:
MetaCommunity The name of the MetaCommunity object containing inventory data.
Method The method used to calculate entropy ("HCDT", "Similarity-based").
Type The type of entropy ("alpha", "beta" or "gamma").
Order The order of entropy q.
Correction The estimation bias correction used to calculate entropy.
Tree The phylogenetic or functional tree used to calculate phyloentropy.
Normalized Logical. Indicates whether phyloentropy is normalized or proportional to the
height of the tree.
Z The matrix used to calculate similarity-based entropy.
Weights A vector containing the weights of communities.
Communities A vector containing the entropy of communities.
Total The total entropy.
is.MCentropy returns TRUE if the object is of class MCentropy.
summary.MCentropy returns a summary of the object’s value.
MetaCommunity Metacommunity class
Description
Methods for objects of type "MetaCommunity".
Usage
MetaCommunity(Abundances, Weights = rep(1, ncol(Abundances)))
is.MetaCommunity(x)
## S3 method for class 'MetaCommunity'
summary(object, ...)
## S3 method for class 'MetaCommunity'
plot(x, ...)
Arguments
Abundances A dataframe containing the number of observations (lines are species, columns
are communities). The first column of the dataframe may contain the species
names.
Weights A vector of positive numbers equal to community weights or a dataframe containing a vector named Weights. It does not have to be normalized. Weights are equal by default.
x An object to be tested or plotted.
object A MetaCommunity object to be summarized.
... Additional arguments to be passed to the generic methods.
Details
In the entropart package, individuals of different "species" are counted in several "communities"
which are aggregated to define a "metacommunity".
This is a naming convention, which may correspond to plots in a forest inventory or any data organized the same way.
Alpha and beta entropies of communities are summed according to Weights and the probability to
find a species in the metacommunity is the weighted average of probabilities in communities.
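The aggregation rule stated above can be sketched as (Wi the normalized community weights, p_{s,i} the probability of species s in community i; notation introduced here):

```latex
% Metacommunity probability of species s: weighted average over communities i
p_s = \sum_{i} W_i \, p_{s,i}, \qquad \sum_{i} W_i = 1
```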
The simplest way to import data is to organize it into two text files. The first file should contain abundance data: the first column, named Species, for species names, and a column for each community.
The second file should contain the community weights in two columns. The first one, named
Communities should contain their names and the second one, named Weights, their weights.
Files can be read and data imported by code such as:
Abundances <- read.csv(file="Abundances.csv", row.names = 1)
Weights <- read.csv(file="Weights.csv")
MC <- MetaCommunity(Abundances, Weights)
Value
An object of class MetaCommunity is a list:
Nsi A matrix containing abundance data, species in line, communities in column.
Ns A vector containing the number of individuals of each species.
Ni A vector containing the number of individuals of each community.
N The total number of individuals.
Psi A matrix whose columns are the probability vectors of communities (each of
them sums to 1).
Wi A vector containing the normalized community weights (sum to 1).
Ps A vector containing the probability vector of the metacommunity.
Nspecies The number of species.
Ncommunities The number of communities.
SampleCoverage The sample coverage of the metacommunity.
SampleCoverage.communities
A vector containing the sample coverages of each community.
is.MetaCommunity returns TRUE if the object is of class MetaCommunity.
summary.MetaCommunity returns a summary of the object’s value.
plot.MetaCommunity plots it.
Examples
# Use BCI data from vegan package
if (require(vegan, quietly = TRUE)) {
# Load BCI data (number of trees per species in each 1-ha plot of a tropical forest)
data(BCI)
# BCI dataframe must be transposed (its lines are plots, not species)
BCI.df <- as.data.frame(t(BCI))
# Create a metacommunity object from a matrix of abundances and a vector of weights
# (here, all plots have a weight equal to 1)
MC <- MetaCommunity(BCI.df)
}
Optimal.Similarity Optimal scale parameter to transform a distance matrix into a similarity matrix
Description
Calculates the scale parameter u that maximizes the variance of the similarity matrix exp(-u * DistanceMatrix).
Usage
Optimal.Similarity(Distance, CheckArguments = TRUE)
Arguments
Distance A distance matrix, i.e. a square matrix with zeros on its diagonal or a dist
object.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The similarity matrix used by Dqz can be optimized following Marcon et al. (2014) such that the variance of similarities between pairs of species is maximized. See the example.
Value
A list:
u The optimal scale u.
Matrix The optimal similarity matrix Z.
References
<NAME>., <NAME>. and <NAME>. (2014). The decomposition of similarity-based diversity and
its bias correction. HAL hal-00989454(version 3).
See Also
Dqz
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Prepare the similarity matrix. The square root of Paracou618.dist is euclidean.
optimal <- Optimal.Similarity(sqrt(Paracou618.dist))
# Optimal scale
optimal$u
# Calculate diversity of order 2
bcDqz(Paracou618.MC$Ns, 2, optimal$Matrix)
Paracou618.dist Functional distances between pairs of species of Paracou field station
plots 6 and 18, two 1-ha plots inventoried by the Bridge project.
Description
This dataset is from Paracou field station, French Guiana, managed by Cirad. Traits are detailed in
Marcon and Herault (2014), the distance matrix was built following Paine et al. (2011).
Usage
data(Paracou618)
Format
An object of class dist.
Source
Permanent data census of Paracou.
References
Gourlet-Fleury, S., <NAME>. and <NAME>., Eds. (2004). Ecology & management of
a neotropical rainforest. Lessons drawn from Paracou, a long-term experimental research site in
French Guiana. Paris, Elsevier.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>., <NAME>., and <NAME>. (2011). Functional traits of individual
trees reveal ecological constraints on community assembly in tropical rain forests. Oikos, 120(5),
720-727.
Examples
data(Paracou618)
plot(density(Paracou618.dist, from=0), main="Distances between species")
Paracou618.Functional Functional tree of species of Paracou field station plots 6 and 18, two
1-ha plots inventoried by the Bridge project.
Description
This dataset is from Paracou field station, French Guiana, managed by Cirad. Traits are detailed in Marcon and Herault (2014), the tree was built following Paine et al. (2011), based on Paracou618.dist.
Usage
data(Paracou618)
Format
An object of class hclust.
Source
Permanent data census of Paracou.
References
<NAME>., <NAME>. and <NAME>., Eds. (2004). Ecology & management of
a neotropical rainforest. Lessons drawn from Paracou, a long-term experimental research site in
French Guiana. Paris, Elsevier.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>., <NAME>., and <NAME>. (2011). Functional traits of individual
trees reveal ecological constraints on community assembly in tropical rain forests. Oikos, 120(5),
720-727.
Examples
data(Paracou618)
plot(Paracou618.Functional)
Paracou618.MC Paracou field station plots 6 and 18, two 1-ha plots inventoried by the
Bridge project.
Description
This dataset is from Paracou field station, French Guiana, managed by Cirad.
Usage
data(Paracou618)
Format
An object of class MetaCommunity made of two communities and 425 species.
Source
Permanent data census of Paracou and Marcon et al. (2012).
References
<NAME>., <NAME>. and <NAME>., Eds. (2004). Ecology & management of
a neotropical rainforest. Lessons drawn from Paracou, a long-term experimental research site in
French Guiana. Paris, Elsevier.
<NAME>., <NAME>, et al. (2012). Characterizing the relative spatial structure of point patterns.
International Journal of Ecology 2012(Article ID 619281): 11.
Examples
data(Paracou618)
summary(Paracou618.MC)
Paracou618.Taxonomy Taxonomy (Family - Genus - Species) of Paracou field station plots 6
and 18, two 1-ha plots inventoried by the Bridge project.
Description
This dataset is from Paracou field station, French Guiana, managed by Cirad.
Usage
data(Paracou618)
Format
An object of class phylo containing a taxonomy.
Source
Permanent data census of Paracou.
References
<NAME>., <NAME>. and <NAME>., Eds. (2004). Ecology & management of
a neotropical rainforest. Lessons drawn from Paracou, a long-term experimental research site in
French Guiana. Paris, Elsevier.
Examples
data(Paracou618)
plot(Paracou618.Taxonomy, type="fan", show.tip.label=FALSE)
PDFD Phylogenetic Diversity / Functional Diversity of a Community
Description
Calculates Faith’s PD / Petchey and Gaston’s FD of a community described by a probability vector
and a phylogenetic / functional tree.
Usage
PDFD(Ps, Tree, CheckArguments = TRUE)
Arguments
Ps A probability vector, summing to 1.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
PD and FD are defined as the total length of the branches of the tree.
The probability vector is used to select branches: branches with probability 0 are eliminated.
Bias correction requires the number of individuals to estimate sample Coverage.
Use bcPhyloDiversity(Ns, 0, Tree) and choose the Correction.
Value
A named number equal to the calculated diversity. The name is that of the bias correction used.
References
<NAME>. (1992). Conservation evaluation and phylogenetic diversity. Biological Conservation
61(1): 1-10.
<NAME>. and <NAME>. (2002). Functional diversity (FD), species richness and community
composition. Ecology Letters 5: 402-411.
See Also
bcPhyloDiversity
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest
# and their taxonomy)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- Paracou618.MC$Ps
# Calculate the phylogenetic Shannon diversity of the plot
PDFD(Ps, Paracou618.Taxonomy)
PhyloApply Apply a Function over a Phylogenetic Tree
Description
Cuts the tree into slices separated by nodes, applies the function to each slice and returns the
weighted (by slice lengths) sum of the results.
Usage
PhyloApply(Tree, FUN, NorP, Normalize = TRUE, dfArgs = NULL,
..., CheckArguments = TRUE)
Arguments
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
FUN The function to be applied to each interval of the tree.
NorP A numeric vector or a two-column matrix. Contains either abundances or probabilities. Two-column matrices should contain the observed abundances (or probabilities) in the first column and the expected ones in the second column, to allow using beta diversity functions.
Normalize If TRUE (default), the Total value returned by Function is normalized by the
height of the tree (it is the weighted average value of the result in each slice).
If FALSE, it is the unnormalized weighted sum of the results.
dfArgs A dataframe. Columns are arguments for FUN: their names are those of valid
arguments. Values will be passed to FUN in each slice of the tree, starting from
the tips. The number of lines must equal the number of slices.
... Further arguments to pass to Function.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
This function is generally not used directly. It is a tool to calculate PhyloEntropy and PhyloDiversity.
Intervals (slices) separate two cuts (nodes) in a tree: no node is found at heights contained in an
interval.
Objects of class PPtree are returned by Preprocess.Tree.
... allow passing arguments to the function, but they cannot change along the tree. If necessary, dfArgs allows passing a different value for each slice of the tree.
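The slice-wise computation described above can be sketched as (T_k the length of slice k, T the height of the tree, v_k the value of FUN in slice k; notation introduced here):

```latex
% Normalize = TRUE (default): weighted average over slices
\bar{v} = \frac{1}{T} \sum_{k} T_k \, v_k
% Normalize = FALSE: unnormalized weighted sum
v = \sum_{k} T_k \, v_k
```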
Value
An object of class PhyloValue. It is a list:
Distribution The distribution used to calculate the value
Function The function used to calculate the value
Tree The functional or phylogenetic tree used to calculate the value
Normalized Logical. Indicates whether phylovalue is normalized or proportional to the
height of the tree.
Cuts A named vector containing values along the tree. Names are cut ends, i.e. the
ends of intervals (the first interval starts at 0 for leaves, the max value is the
height of the tree).
Corrections A named vector containing the correction used by FUN to obtain each value of
Cuts. Names are those of Cuts.
Total The total value, multiplied by the tree height if Normalize is FALSE.
References
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
See Also
Preprocess.Tree
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest
# and their taxonomy)
data(Paracou618)
# Plot the taxonomy
plot(Paracou618.Taxonomy, type="fan", show.tip.label=FALSE)
# Calculate the mean number of trees (individuals) per species
# (Cuts are 1=species, 2=genus, 3=family)
summary(PhyloApply(Paracou618.Taxonomy, mean, Paracou618.MC$Ns, TRUE))
PhyloBetaEntropy Phylogenetic Beta Entropy of a community
Description
Calculates the phylogenetic beta entropy of order q of a community belonging to a metacommunity.
Usage
PhyloBetaEntropy(NorP, NorPexp = NULL, q = 1, Tree, Normalize = TRUE, ...)
bcPhyloBetaEntropy(Ns, Nexp, q = 1, Tree, Normalize = TRUE,
Correction = "Best", CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
PhyloBetaEntropy(NorP, NorPexp = NULL, q = 1, Tree, Normalize = TRUE,
..., CheckArguments = TRUE, Ps = NULL, Pexp = NULL)
## S3 method for class 'AbdVector'
PhyloBetaEntropy(NorP, NorPexp = NULL, q = 1, Tree, Normalize = TRUE,
Correction = "Best", ..., CheckArguments = TRUE, Ns = NULL, Nexp = NULL)
## S3 method for class 'integer'
PhyloBetaEntropy(NorP, NorPexp = NULL, q = 1, Tree, Normalize = TRUE,
Correction = "Best", ..., CheckArguments = TRUE, Ns = NULL, Nexp = NULL)
## S3 method for class 'numeric'
PhyloBetaEntropy(NorP, NorPexp = NULL, q = 1, Tree, Normalize = TRUE,
Correction = "Best", ..., CheckArguments = TRUE, Ps = NULL, Ns = NULL,
Pexp = NULL, Nexp = NULL)
Arguments
Ps The probability vector of species of the community.
Pexp The probability vector of species of the metacommunity.
Ns A numeric vector containing species abundances of the community.
Nexp A numeric vector containing species abundances of the metacommunity.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities of the community.
NorPexp A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities of the metacommunity.
q A number: the order of entropy. Default is 1.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
Normalize If TRUE (default), the entropy returned by the function is normalized by the
height of the tree (it is the weighted average value of the entropy in each slice).
If FALSE, it is the unnormalized weighted sum of the results.
Correction A string containing one of the possible corrections: currently, only "ChaoShen".
"Best" is the default value, it is equivalent to "ChaoShen".
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The phylogenetic entropy is the generalization of HCDT entropy to unequal species distances
(Pavoine et al., 2009).
Calculation relies on TsallisBeta and PhyloApply.
Bias correction requires the number of individuals to estimate sample Coverage. Use bcPhyloBetaEntropy
and choose the Correction.
Note that the beta entropy value is related to alpha entropy (if q is not 1) and cannot be compared across communities (Jost, 2007). Beta entropy of a community is not meaningful in general; rather calculate the PhyloDiversity of the metacommunity.
The functions are designed to be used as simply as possible. PhyloBetaEntropy is a generic method. If its first argument is an abundance vector, an integer vector or a numeric vector which does not sum to 1, the bias corrected function bcPhyloBetaEntropy is called. Explicit calls to bcPhyloBetaEntropy (with bias correction) or to PhyloBetaEntropy.ProbaVector (without correction) are possible to avoid ambiguity. The .integer and .numeric methods accept Ps or Ns arguments instead of NorP for backward compatibility.
Value
A PhyloEntropy object containing entropy values at each cut of the tree.
References
Jost (2007), Partitioning diversity into independent alpha and beta components. Ecology 88(10):
2427-2439.
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>. and <NAME>. (2009). Hierarchical partitioning of evolutionary and ecological patterns in the organization of phylogenetically-structured species assemblages: Application to rockfish (genus: Sebastes) in the Southern California Bight. Ecology Letters 12(9): 898-908.
See Also
TsallisBeta, bcPhyloBetaEntropy, PhyloDiversity
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest
# and their taxonomy)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ps)
# Probability distribution of the first plot
Ps1 <- as.ProbaVector(Paracou618.MC$Psi[, 1])
# Calculate the phylogenetic Shannon beta entropy of the plot
summary(PhyloBetaEntropy(Ps1, Ps, 1, Paracou618.Taxonomy) -> e)
plot(e)
# Ns is the vector of abundances of the metacommunity
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Abundances in the first plot
Ns1 <- as.AbdVector(Paracou618.MC$Nsi[, 1])
# Calculate the phylogenetic Shannon beta entropy of the plot
summary(bcPhyloBetaEntropy(Ns1, Ns, 1, Paracou618.Taxonomy, Correction = "Best") -> e)
plot(e)
PhyloDiversity Phylogenetic Diversity of a Community
Description
Calculates the phylogenetic diversity of order q of a probability vector.
Usage
PhyloDiversity(NorP, q = 1, Tree, Normalize = TRUE, ...)
bcPhyloDiversity(Ns, q = 1, Tree, Normalize = TRUE, Correction = "Best",
CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
PhyloDiversity(NorP, q = 1, Tree, Normalize = TRUE,
..., CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
PhyloDiversity(NorP, q = 1, Tree, Normalize = TRUE,
Correction = "Best", ..., CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
PhyloDiversity(NorP, q = 1, Tree, Normalize = TRUE,
Correction = "Best", ..., CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
PhyloDiversity(NorP, q = 1, Tree, Normalize = TRUE,
Correction = "Best", ..., CheckArguments = TRUE, Ps = NULL, Ns = NULL)
is.PhyloDiversity(x)
## S3 method for class 'PhyloDiversity'
summary(object, ...)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities.
q A number: the order of diversity. Default is 1.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultrametric.
Normalize If TRUE (default), the Total diversity is not affected by the height of the tree.
If FALSE, it is proportional to the height of the tree.
Correction A string containing one of the possible corrections: "None" (no correction),
"ChaoShen", "Grassberger", "Holste", "Bonachela" or "Best", the default
value.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
x An object to be tested or plotted
object A PhyloDiversity object to be summarized.
... Additional arguments to be passed to the generic methods.
Details
The phylogenetic entropy is the generalization of HCDT entropy to unequal species distances (Pavoine et al., 2009).
Diversity is obtained by transforming generalized entropy.
Bias correction requires the number of individuals to estimate sample Coverage. Use bcPhyloDiversity
and choose the Correction.
The functions are designed to be used as simply as possible. PhyloDiversity is a generic method.
If its first argument is an abundance vector, an integer vector or a numeric vector which does not sum
to 1, the bias corrected function bcPhyloDiversity is called. Explicit calls to bcPhyloDiversity
(with bias correction) or to PhyloDiversity.ProbaVector (without correction) are possible to
avoid ambiguity. The .integer and .numeric methods accept Ps or Ns arguments instead of NorP
for backward compatibility.
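The entropy-to-diversity transformation mentioned above is the deformed exponential of order q, the inverse of lnq; as a sketch (standard HCDT/Hill relation, notation introduced here):

```latex
{}^{q}D = \exp_q\!\left( H_q \right) = \left[ 1 + (1 - q) \, H_q \right]^{\frac{1}{1-q}}
```

At q = 1 this reduces to the usual exponential of Shannon entropy.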
Value
An object of class PhyloDiversity is a list:
Distribution The distribution used to calculate diversity
Function The function used to calculate diversity
Tree The functional or phylogenetic tree used to calculate diversity
Normalized Logical. Indicates whether phylodiversity is normalized or proportional to the
height of the tree.
Type The type of diversity ("alpha", "beta" or "gamma").
Order The order of diversity q.
Cuts A named vector containing values of neutral diversity along the tree. Names are
cut ends, i.e. the ends of intervals (the first interval starts at 0 for leaves, the max
value is the height of the tree).
Total A value equal to the total diversity (obtained by transforming the total normalized entropy), multiplied by the tree height if Normalize is FALSE.
is.PhyloDiversity returns TRUE if the object is of class PhyloDiversity.
summary.PhyloDiversity returns a summary of the object’s value.
PhyloDiversity objects can be plotted by plot.PhyloValue because PhyloDiversity objects
are also of class PhyloValue.
Note
The tree must contain all species of the probability vector. If it contains extra species, computation
time will just be increased.
References
<NAME>., <NAME>. and <NAME>. (2010). Phylogenetic diversity measures based on Hill numbers.
Philosophical Transactions of the Royal Society B 365(1558): 3599-609.
Marcon, E., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>. and <NAME>. (2009). Hierarchical partitioning of evolutionary and
ecological patterns in the organization of phylogenetically-structured species assemblages: Ap-
plication to rockfish (genus: Sebastes) in the Southern California Bight. Ecology Letters 12(9):
898-908.
See Also
PhyloEntropy, Diversity
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest
# and their taxonomy)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ps)
# Calculate the phylogenetic Shannon diversity of the plot
summary(PhyloDiversity(Ps, 1, Paracou618.Taxonomy) -> d)
plot(d)
# Ns is the vector of abundances of the metacommunity
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Calculate the phylogenetic Shannon diversity of the plot
summary(bcPhyloDiversity(Ns, 1, Paracou618.Taxonomy, Correction = "Best") -> d)
plot(d)
PhyloEntropy Phylogenetic Entropy of a community
Description
Calculates the phylogenetic entropy of order q of a probability vector.
Usage
PhyloEntropy(NorP, q = 1, Tree, Normalize = TRUE, ...)
bcPhyloEntropy(Ns, q = 1, Tree, Normalize = TRUE, Correction = "Best",
SampleCoverage = NULL, CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
PhyloEntropy(NorP, q = 1, Tree, Normalize = TRUE,
..., CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
PhyloEntropy(NorP, q = 1, Tree, Normalize = TRUE, Correction = "Best",
..., CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
PhyloEntropy(NorP, q = 1, Tree, Normalize = TRUE, Correction = "Best",
..., CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
PhyloEntropy(NorP, q = 1, Tree, Normalize = TRUE, Correction = "Best",
..., CheckArguments = TRUE, Ps = NULL, Ns = NULL)
is.PhyloEntropy(x)
## S3 method for class 'PhyloEntropy'
summary(object, ...)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a prob-
ability vector (ProbaVector). Contains either abundances or probabilities.
q A number: the order of entropy. Default is 1.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultra-
metric.
Normalize If TRUE (default), the Total entropy returned by the function is normalized by
the height of the tree (it is the weighted average value of the entropy in each
slice).
If FALSE, it is the unnormalized weighted sum of the results.
Correction A string containing one of the possible corrections supported by Tsallis.
SampleCoverage The sample coverage of Ns calculated elsewhere. Used to calculate the gamma
diversity of meta-communities, see details.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
x An object to be tested or plotted
object A PhyloEntropy object to be summarized.
... Additional arguments to be passed to the generic methods.
Details
The phylogenetic entropy is the generalization of HCDT entropy to unequal species distances (Pavoine
et al., 2009).
Calculation relies on Tsallis and PhyloApply.
Intervals separate two cuts in a tree: no node is found at heights contained in an interval.
Bias correction requires the number of individuals to estimate sample Coverage. Use bcPhyloEntropy
and choose the Correction.
The functions are designed to be used as simply as possible. PhyloEntropy is a generic method.
If its first argument is an abundance vector, an integer vector or a numeric vector which does not
sum to 1, the bias corrected function bcPhyloEntropy is called. Explicit calls to bcPhyloEntropy
(with bias correction) or to PhyloEntropy.ProbaVector (without correction) are possible to avoid
ambiguity. The .integer and .numeric methods accept Ps or Ns arguments instead of NorP for
backward compatibility.
The size of a metacommunity (see MetaCommunity) is unknown so it has to be set according to a
rule which does not ensure that its abundances are integer values. Then, classical bias-correction
methods do not apply. Providing the SampleCoverage argument allows applying the "ChaoShen"
and "Grassberger" corrections to estimate the entropy quite well. The DivPart and GammaEntropy
functions use this tweak.
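As a sketch of the SampleCoverage tweak described above (assuming the entropart package and its Paracou618 example data are available), the coverage can be computed separately and passed to bcPhyloEntropy:

```r
library(entropart)
data(Paracou618)
Ns <- Paracou618.MC$Ns
# Compute the sample coverage separately, then pass it to bcPhyloEntropy
# so that the "ChaoShen" correction can be applied even when abundances
# are not integer values (as in metacommunities).
SC <- Coverage(Ns)
bcPhyloEntropy(Ns, q = 1, Tree = Paracou618.Taxonomy,
               Correction = "ChaoShen", SampleCoverage = SC)
```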
Value
An object of class PhyloEntropy is a list:
Distribution The distribution used to calculate entropy
Function The function used to calculate entropy
Tree The functional or phylogenetic tree used to calculate entropy
Normalized Logical. Indicates whether phyloentropy is normalized or proportional to the
height of the tree.
Type The type of entropy ("alpha", "beta" or "gamma").
Order The order of entropy q.
Cuts A named vector containing values of neutral entropy along the tree. Names are
cut ends, i.e. the ends of intervals (the first interval starts at 0 for leaves, the max
value is the height of the tree).
Total A value equal to the total entropy, multiplied by the tree height if Normalize is
FALSE.
is.PhyloEntropy returns TRUE if the object is of class PhyloEntropy.
summary.PhyloEntropy returns a summary of the object’s value.
PhyloEntropy objects can be plotted by plot.PhyloValue because PhyloEntropy objects are also
of class PhyloValue.
Note
The tree must contain all species of the probability vector. If it contains extra species, computation
time will just be increased.
References
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>., <NAME>. and <NAME>. (2009). Hierarchical partitioning of evolutionary and
ecological patterns in the organization of phylogenetically-structured species assemblages: Ap-
plication to rockfish (genus: Sebastes) in the Southern California Bight. Ecology Letters 12(9):
898-908.
See Also
Tsallis, PhyloDiversity
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest
# and their taxonomy)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ps)
# Calculate the phylogenetic Shannon entropy of the plot
summary(PhyloEntropy(Ps, 1, Paracou618.Taxonomy) -> e)
plot(e)
# Ns is the vector of abundances of the metacommunity
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Calculate the phylogenetic Shannon entropy of the plot
summary(bcPhyloEntropy(Ns, 1, Paracou618.Taxonomy, Correction = "Best") -> e)
plot(e)
PhyloValue Phylogenetic entropy or diversity.
Description
Entropy or diversity against the height of the phylogenetic or functional tree.
Usage
is.PhyloValue(x)
## S3 method for class 'PhyloValue'
autoplot(object, xlab = expression(italic("T")), ylab = NULL, main = NULL,
col = ggplot2::GeomLine$default_aes$colour,
lty = ggplot2::GeomLine$default_aes$linetype,
lwd = ggplot2::GeomLine$default_aes$size,
...)
## S3 method for class 'PhyloValue'
plot(x, xlab = expression(italic("T")), ylab = NULL, main = NULL, ...)
## S3 method for class 'PhyloValue'
summary(object, ...)
Arguments
x An object of class PhyloValue, including PhyloDiversity and PhyloEntropy
objects.
xlab The X axis label, "T" by default for Time.
ylab The Y axis label. If NULL (by default), "Entropy", "Diversity" or nothing is
chosen according to the object class.
main The main title of the plot. If NULL (by default), a default value is used.
object A PhyloValue object to be summarized.
col The color of the geom objects. See "Color Specification" in par.
lty The type of the lines. See lines.
lwd The width of the lines. See lines.
... Additional arguments to be passed to plot.
Details
PhyloValue objects are the result of PhyloApply.
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest
# and their taxonomy)
data(Paracou618)
# Calculate richness along the tree
# (Cuts are 1=species, 2=genus, 3=family)
summary(r <- PhyloApply(Paracou618.Taxonomy, FUN=Richness,
NorP=Paracou618.MC$Ns, Normalize=TRUE))
autoplot(r)
PPtree Preprocessed Trees.
Description
Methods for objects of type "PPtree".
Usage
is.PPtree(x)
## S3 method for class 'PPtree'
plot(x, ...)
Arguments
x An object to be tested or plotted
... Additional arguments to be passed to the generic methods.
Value
An object of class PPtree is a list:
phyTree A phylo tree
hTree A hclust tree
Height The height of the tree, that is to say the distance between root and leaves
Cuts A vector. Cut times of the tree (the distance from nodes to leaves)
Intervals A vector. The lengths of intervals between cuts
is.PPtree returns TRUE if the object is of class PPtree.
plot.PPtree plots it.
Note
Versions up to 1.3 contained a phylog tree, now deprecated in ade4. A phylo tree is now used.
See the dedicated vignette (vignette("Phylogenies", package="entropart")) for more de-
tails.
Examples
data(Paracou618)
# Preprocess a phylog object
ppt <- Preprocess.Tree(EightSpTree)
# Is it a preprocessed tree?
is.PPtree(ppt)
# Plot it
plot(ppt, hang=-1)
# Alternative plot
ade4::radial.phylog(EightSpTree)
RAC Fit Distributions to Well-Known Rank Abundance Curves.
Description
Observed distributions are fitted to classical RAC’s.
Usage
RAClnorm(Ns, CheckArguments = TRUE)
RACgeom(Ns, CheckArguments = TRUE)
RAClseries(Ns, CheckArguments = TRUE)
RACbstick(Ns, CheckArguments = TRUE)
Arguments
Ns A numeric vector containing species abundances.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
SpeciesDistribution or integer vectors can be used to fit classical rank-abundance curves (RACs)
of classical distributions: "RAClnorm" for log-normal (Preston, 1948), "RAClseries" for log-series
(Fisher et al., 1943), "RACgeom" for geometric (Motomura, 1932) or "RACbstick" for broken stick
(MacArthur, 1957). Each function returns the estimated parameters of the fitted distribution. The broken
stick has no parameter, so the maximum abundance is returned.
Value
A list (the parameters of distributions are returned only if the distribution has been fit):
Rank A numeric vector. The ranks of species in the fitted RAC.
Abundance The abundance of species in the fitted RAC.
mu The expectation of the log-normal distribution
sigma The standard deviation of the log-normal distribution
alpha Fisher’s alpha in the log-series distribution
prob The proportion of resources taken by successive species in the geometric dis-
tribution
max The maximum abundance in the broken-stick distribution
Note
Fisher’s alpha is estimated to fit the log-series distribution. The estimation is done by the fisher.alpha
function of package vegan. It may differ substantially from the estimation returned by optimal.theta
from package untb.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. (1943) The Relation Between the Number of Species
and the Number of Individuals in a Random Sample of an Animal Population. Journal of Animal
Ecology 12: 42-58.
<NAME>. (1957) On the Relative Abundance of Bird Species. PNAS 43(3): 293-295.
Motomura I. (1932) On the statistical treatment of communities. Zoological Magazine 44: 379-383.
<NAME>. (1948). The commonness, and rarity, of species. Ecology 29(3): 254-283.
See Also
rgeom, rlnorm, rCommunity, plot.SpeciesDistribution
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Fitted parameters
RACln <- RAClnorm(Ns)
RACln$mu
RACln$sigma
RACgeom(Ns)$prob
RAClseries(Ns)$alpha
RACbstick(Ns)$max
Rao Rao Quadratic Entropy of a Community
Description
Calculates Rao’s quadratic entropy of a community described by a probability vector and a phylo-
genetic / functional tree.
Usage
Rao(NorP, Tree, ...)
bcRao(Ns, Tree, Correction="Lande", CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
Rao(NorP, Tree, ..., CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
Rao(NorP, Tree, Correction = "Lande", ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
Rao(NorP, Tree, Correction = "Lande", ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
Rao(NorP, Tree, Correction = "Lande", ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a prob-
ability vector (ProbaVector). Contains either abundances or probabilities.
Tree An object of class hclust, phylo, phylog or PPtree. The tree must be ultra-
metric.
Correction A string containing one of the possible corrections accepted by bcTsallis or
"Lande", the default value (equivalent to "Best").
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Bias correction requires the number of individuals. Use bcRao and choose the Correction.
The unbiased estimator of Rao’s entropy is identical to that of Simpson’s entropy because Rao’s
entropy is a linear sum of Simpson entropies, all of them calculated from the same number of indi-
viduals (Marcon and Herault, 2014). It equals the plug-in estimator multiplied by n/(n-1), where n is
the total number of individuals.
The functions are designed to be used as simply as possible. Tsallis is a generic method. If its first
argument is an abundance vector, an integer vector or a numeric vector which does not sum to 1,
the bias corrected function bcTsallis is called. Explicit calls to bcTsallis (with bias correction)
or to Tsallis.ProbaVector (without correction) are possible to avoid ambiguity. The .integer
and .numeric methods accept Ps or Ns arguments instead of NorP for backward compatibility.
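The n/(n-1) relation above can be checked with a minimal sketch (assuming the entropart package and the Paracou618 example data):

```r
library(entropart)
data(Paracou618)
Ns <- as.AbdVector(Paracou618.MC$Ns)
n <- sum(Ns)
# Plug-in estimate from the probability vector
PlugIn <- Rao(as.ProbaVector(Ns), Paracou618.Taxonomy)
# Bias-corrected estimate with the default "Lande" correction
Corrected <- bcRao(Ns, Paracou618.Taxonomy, Correction = "Lande")
# Per the relation stated above, the corrected value should equal
# the plug-in estimator multiplied by n/(n-1)
all.equal(as.numeric(Corrected), as.numeric(PlugIn) * n / (n - 1))
```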
Value
A named number equal to the calculated entropy. The name is that of the bias correction used.
References
<NAME>., <NAME>. (2015). Decomposing Phylodiversity. Methods in Ecology and Evolution
6(3): 333-339.
<NAME>. (1982). Diversity and dissimilarity coefficients: a unified approach. Theoretical Popula-
tion Biology 21: 24-43.
See Also
bcPhyloDiversity
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Species probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ns)
# Calculate Rao's quadratic entropy of the plot
Rao(Ps, Paracou618.Taxonomy)
rCommunity Random Communities
Description
Draws random communities according to a probability distribution.
Usage
rCommunity(n, size = sum(NorP), NorP = 1, BootstrapMethod = "Chao2015", S = 300,
Distribution = "lnorm", sd = 1, prob = 0.1, alpha = 40,
CheckArguments = TRUE)
Arguments
n The number of communities to draw.
size The number of individuals to draw in each community.
BootstrapMethod
The method used to obtain the probabilities to generate bootstrapped commu-
nities from observed abundances. If "Marcon", the probabilities are simply the
abundances divided by the total number of individuals (Marcon et al., 2012). If
"Chao2013" or "Chao2015" (by default), a more sophisticated approach is used
(see as.ProbaVector) following Chao et al. (2013) or Chao et al. (2015).
NorP A numeric vector. Contains either abundances or probabilities.
S The number of species.
Distribution The distribution of species frequencies. May be "lnorm" (log-normal), "lseries"
(log-series), "geom" (geometric) or "bstick" (broken stick).
sd The simulated distribution standard deviation. For the log-normal distribution,
this is the standard deviation on the log scale.
prob The proportion of resources taken by successive species.
alpha Fisher’s alpha.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Communities of fixed size are drawn in a multinomial distribution according to the distribution of
probabilities provided by NorP.
An abundance vector may be used instead of probabilities; then size is by default the total num-
ber of individuals in the vector. Random communities are built either by drawing from a multinomial
distribution following Marcon et al. (2012), or by estimating the distribution of the actual community
with as.ProbaVector. If BootstrapMethod = "Chao2013", the distribution is estimated by a sin-
gle-parameter model and unobserved species are given equal probabilities. If BootstrapMethod =
"Chao2015", a two-parameter model is used and unobserved species follow a geometric distribu-
tion.
Alternatively, the probabilities may be drawn following a classical distribution: either a log-normal
("lnorm") one (Preston, 1948) with given standard deviation (sd; note that the mean is actually
a normalizing constant: its value is set to 0 for the simulation of the normal distribution
of unnormalized log-abundances), a log-series ("lseries") one (Fisher et al., 1943) with param-
eter alpha, a geometric ("geom") one (Motomura, 1932) with parameter prob, or a broken-stick
("bstick") one (MacArthur, 1957). The number of simulated species is fixed by S, except for
"lseries" where it is obtained from alpha and size: S = alpha * ln(1 + size / alpha).
Log-normal, log-series and broken-stick distributions are stochastic. The geometric distribution is
completely determined by its parameters.
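The bootstrap use described above can be sketched as follows (assuming the entropart package and the Paracou618 example data):

```r
library(entropart)
data(Paracou618)
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Redraw 2 communities of the observed size from the probability
# distribution estimated by the default "Chao2015" method
# (unobserved species are given geometric probabilities)
Boot <- rCommunity(n = 2, NorP = Ns, BootstrapMethod = "Chao2015")
# With n > 1, the result is a MetaCommunity of simulated communities
summary(Boot)
```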
Value
A vector of species abundances (AbdVector) if a single community has been drawn, or a MetaCommunity
containing simulated communities.
References
<NAME>., <NAME>. and <NAME>. (2013). Entropy and the species accumulation curve: a novel
entropy estimator via discovery rates of new species. Methods in Ecology and Evolution 4(11):
1091-1100.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2015) Unveiling the Species-
Rank Abundance Distribution by Generalizing Good-Turing Sample Coverage Theory. Ecology
96(5): 1189-1201.
<NAME>., <NAME>., <NAME>. (1943) The Relation Between the Number of Species
and the Number of Individuals in a Random Sample of an Animal Population. Journal of Animal
Ecology 12: 42-58.
<NAME>. (1957) On the Relative Abundance of Bird Species. PNAS 43(3): 293-295.
<NAME>., <NAME>., <NAME>. and <NAME>. (2012). The Decomposition of Shannon’s
Entropy and a Confidence Interval for Beta Diversity. Oikos 121(4): 516-522.
Motomura I. (1932) On the statistical treatment of communities. Zoological Magazine 44: 379-383.
Preston, F.W. (1948). The commonness, and rarity, of species. Ecology 29(3): 254-283.
<NAME>., <NAME>., <NAME>. (2013) Program SimAssem: Software for simulating
species assemblages and estimating species richness. Methods in Ecology and Evolution 4: 891-
896.
See Also
SpeciesDistribution and the program SimAssem (Reese et al., 2013; not an R package) for more
distributions.
Examples
# Generate communities made of 100000 individuals among 300 species and fit them
par(mfrow = c(2,2))
for (d in c("lnorm", "lseries", "geom", "bstick")) {
rCommunity(n = 1, size = 1E5, S = 300, Distribution = d) -> AbdVec
plot(AbdVec, Distribution = d, main = d)
}
Richness Number of species of a community
Description
Calculates the number of species from a probability vector. The name of the result is that of the
estimator (the bias correction) used.
Usage
Richness(NorP, ...)
bcRichness(Ns, Correction = "Best", Alpha = 0.05, JackOver = FALSE, JackMax = 10,
CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
Richness(NorP, ..., CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
Richness(NorP, Correction = "Best", Alpha = 0.05,
JackOver = FALSE, JackMax = 10,
Level = NULL, PCorrection = "Chao2015", Unveiling = "geom", RCorrection = "Rarefy",
..., CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
Richness(NorP, Correction = "Best", Alpha = 0.05,
JackOver = FALSE, JackMax = 10,
Level = NULL, PCorrection = "Chao2015", Unveiling = "geom", RCorrection = "Rarefy",
..., CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
Richness(NorP, Correction = "Best", Alpha = 0.05,
JackOver = FALSE, JackMax = 10,
Level = NULL, PCorrection = "Chao2015", Unveiling = "geom", RCorrection = "Rarefy",
..., CheckArguments = TRUE, Ps = NULL, Ns = NULL)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a prob-
ability vector (ProbaVector). Contains either abundances or probabilities.
Correction A string containing one of the possible corrections: "None" (no correction),
"Jackknife", "iChao1", or "Chao1". "Best", the default value, is currently
"Jackknife". Ignored by richness interpolation, and by extrapolation if PCorrection
is not "None".
Alpha The risk level, 5% by default, used to optimize the jackknife order.
JackOver If TRUE, retain the jackknife order immediately superior to the optimal one, usu-
ally resulting in the overestimation of the number of species. Default is FALSE.
JackMax The highest jackknife order allowed. Default is 10. Allowed values are between
1 and 10.
Level The level of interpolation or extrapolation. It may be a chosen sample size (an
integer) or a sample coverage (a number between 0 and 1). Richness extrapola-
tion requires its asymptotic estimation, depending on the choice of Correction.
PCorrection A string containing one of the possible corrections to estimate a probability
distribution in as.ProbaVector: "Chao2015" is the default value. If "None",
the asymptotic distribution is not estimated and extrapolation relies only on the
asymptotic estimator of richness. Used only for extrapolation.
Unveiling A string containing one of the possible unveiling methods to estimate the proba-
bilities of the unobserved species in as.ProbaVector: "geom" (the unobserved
species distribution is geometric) is the default value. Used only for extrapola-
tion.
RCorrection A string containing a correction recognized by Richness to evaluate the total
number of species in as.ProbaVector. "Rarefy" is the default value to esti-
mate the number of species such that the entropy of the asymptotic distribution
rarefied to the observed sample size equals the observed entropy of the data.
Used only for extrapolation.
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Bias correction requires the number of individuals. Use bcRichness and choose the Correction.
Chao correction techniques are from Chao (1984) and Chiu et al. (2015). The Jackknife estimator
is calculated by a straight adaptation of the code by <NAME> (jackknife in CRAN-archived
package SPECIES). The optimal order is selected according to Burnham and Overton (1978; 1979).
The argument JackOver allows selecting one order over the optimal one. Many other estimators
are available elsewhere, the ones implemented here are necessary for other entropy estimations.
The functions are designed to be used as simply as possible. Richness is a generic method. If its
first argument is an abundance vector, an integer vector or a numeric vector which does not sum to
1, the bias corrected function bcRichness is called.
Richness can be estimated at a specified level of interpolation or extrapolation, either a chosen
sample size or sample coverage (Chao et al., 2014), rather than its asymptotic value. Extrapolation
relies on the estimation of the asymptotic richness. If PCorrection is "None", then the asymptotic
estimation of richness is made using the chosen Correction; otherwise the asymptotic distribution of
the community is derived and its estimated richness adjusted so that the entropy of a sample of this
distribution, of the size of the actual sample, equals the entropy of the actual sample.
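The interpolation and extrapolation described above can be sketched as follows (assuming the entropart package and the Paracou618 example data; the chosen levels are arbitrary):

```r
library(entropart)
data(Paracou618)
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Interpolate richness to half the observed sample size
Richness(Ns, Level = round(sum(Ns) / 2))
# Extrapolate to twice the observed sample size
Richness(Ns, Level = 2 * sum(Ns))
# A Level between 0 and 1 is read as a target sample coverage
Richness(Ns, Level = 0.99)
```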
Value
A named number equal to the estimated number of species. The name is the Correction, except
for "SAC" (Species Accumulation Curve) for interpolation.
References
<NAME>., and <NAME>. (1978), Estimation of the Size of a Closed Population When
Capture Probabilities Vary Among Animals. Biometrika, 65: 625-633.
<NAME>., and <NAME>. (1979), Robust Estimation of Population Size When Capture
Probabilities Vary Among Animals. Ecology 60:927-936.
<NAME>. (1984) Nonparametric estimation of the number of classes in a population. Scandinavian
Journal of Statistics 11: 265-270.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>
(2014). Rarefaction and extrapolation with Hill numbers: A framework for sampling and estimation
in species diversity studies. Ecological Monographs, 84(1): 45-67.
<NAME>., <NAME>., <NAME>., <NAME>. (2014) An Improved Nonparametric Lower
Bound of Species Richness via a Modified Good-Turing Frequency Formula. Biometrics 70(3):
671-682.
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Species probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ns)
# Whittaker plot
plot(Ns)
# Number of observed species
Richness(Ps)
# Estimate the actual number of species
bcRichness(Ns, Correction = "Chao1")
bcRichness(Ns, Correction = "iChao1")
bcRichness(Ns, Correction = "Jackknife")
bcRichness(Ns, Correction = "Jackknife", JackOver=TRUE)
Shannon Shannon entropy of a community
Description
Calculates the Shannon entropy of a probability vector.
Usage
Shannon(NorP, ...)
bcShannon(Ns, Correction = "Best", CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
Shannon(NorP, ..., CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
Shannon(NorP, Correction = "Best", Level = NULL,
PCorrection = "Chao2015", Unveiling = "geom", RCorrection = "Rarefy", ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
Shannon(NorP, Correction = "Best", Level = NULL,
PCorrection = "Chao2015", Unveiling = "geom", RCorrection = "Rarefy", ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
Shannon(NorP, Correction = "Best", Level = NULL,
PCorrection = "Chao2015", Unveiling = "geom", RCorrection = "Rarefy", ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a prob-
ability vector (ProbaVector). Contains either abundances or probabilities.
Correction A string containing one of the possible asymptotic estimators: "None" (no cor-
rection), "ChaoShen", "GenCov", "Grassberger", "Grassberger2003", "Schurmann",
"Holste", "Bonachela", "Miller", "ZhangHz", "ChaoJost", "Marcon", "UnveilC",
"UnveiliC", "UnveilJ" or "Best", the default value. Currently, "Best" is
"UnveilJ".
Level The level of interpolation or extrapolation. It may be a chosen sample size
(an integer) or a sample coverage (a number between 0 and 1). Entropy extrapo-
lation requires its asymptotic estimation, depending on the choice of Correction.
Entropy interpolation relies on the estimation of Abundance Frequency Counts:
then, Correction is passed to AbdFreqCount as its Estimator argument.
PCorrection A string containing one of the possible corrections to estimate a probability dis-
tribution in as.ProbaVector: "Chao2015" is the default value. Used only for
extrapolation.
Unveiling A string containing one of the possible unveiling methods to estimate the proba-
bilities of the unobserved species in as.ProbaVector: "geom" (the unobserved
species distribution is geometric) is the default value. If "None", the asymptotic
distribution is not unveiled and only the asymptotic estimator is used. Used only
for extrapolation.
RCorrection A string containing a correction recognized by Richness to evaluate the total
number of species in as.ProbaVector. "Rarefy" is the default value to esti-
mate the number of species such that the entropy of the asymptotic distribution
rarefied to the observed sample size equals the observed entropy of the data.
Used only for extrapolation.
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Bias correction requires the number of individuals to estimate sample Coverage.
Correction techniques are from Miller (1955), Chao and Shen (2003), Grassberger (1988), Grass-
berger (2003), Schurmann (2003), Holste et al. (1998), Bonachela et al. (2008), Zhang (2012),
Chao, Wang and Jost (2013). More estimators can be found in the entropy package.
Using MetaCommunity mutual information, Chao, Wang and Jost (2013) calculate reduced-bias
Shannon beta entropy (see the last example below) with better results than the Chao and Shen
estimator, but community weights cannot be arbitrary: they must be proportional to the number of
individuals.
The functions are designed to be used as simply as possible. Shannon is a generic method. If its
first argument is an abundance vector, an integer vector or a numeric vector which does not sum to
1, the bias corrected function bcShannon is called.
Entropy can be estimated at a specified level of interpolation or extrapolation, either a chosen sample
size or sample coverage (Chao et al., 2014), rather than its asymptotic value. Extrapolation relies on
the estimation of the asymptotic entropy. If Unveiling is "None", then the asymptotic estimation
of entropy is made using the chosen Correction; otherwise the asymptotic distribution of the community
is derived and its estimated richness adjusted so that the entropy of a sample of this distribution,
of the size of the actual sample, equals the entropy of the actual sample.
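Entropy estimation at a chosen level, as described above, can be sketched as follows (assuming the entropart package and the Paracou618 example data; the levels are arbitrary):

```r
library(entropart)
data(Paracou618)
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Asymptotic estimate with the default ("UnveilJ") correction
Shannon(Ns)
# Entropy interpolated to half the observed sample size
Shannon(Ns, Level = round(sum(Ns) / 2))
# Entropy extrapolated to a target sample coverage of 99%
Shannon(Ns, Level = 0.99)
```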
Value
A named number equal to the calculated entropy. The name is that of the bias correction used.
References
<NAME>., <NAME>. and <NAME>. (2008). Entropy estimates of small data sets.
Journal of Physics A: Mathematical and Theoretical 41(202001): 1-9.
<NAME>. and <NAME>. (2003). Nonparametric estimation of Shannon’s index of diversity when
there are unseen species in sample. Environmental and Ecological Statistics 10(4): 429-443.
<NAME>., <NAME>. and <NAME>. (2013). Entropy and the species accumulation curve: a novel en-
tropy estimator via discovery rates of new species. Methods in Ecology and Evolution 4(11):1091-
1100.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>
(2014). Rarefaction and extrapolation with Hill numbers: A framework for sampling and estimation
in species diversity studies. Ecological Monographs, 84(1): 45-67.
Grassberger, P. (1988). Finite sample corrections to entropy and dimension estimates. Physics
Letters A 128(6-7): 369-373.
<NAME>. (2003). Entropy Estimates from Insufficient Samplings. ArXiv Physics e-prints
0307138.
<NAME>., <NAME>. and <NAME>. (1998). Bayes’ estimators of generalized entropies. Journal of
Physics A: Mathematical and General 31(11): 2551-2566.
<NAME>. (1955) Note on the bias of information estimates. In: Quastler, H., editor. Information
Theory in Psychology: Problems and Methods: 95-100.
<NAME>. (1948). A Mathematical Theory of Communication. The Bell System Technical
Journal 27: 379-423, 623-656.
<NAME>. (2004). Bias analysis in entropy estimation. Journal of Physics A: Mathematical
and Theoretical 37(27): L295-L301.
<NAME>. (1988). Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical
Physics 52(1): 479-487.
<NAME>. (2012). Entropy Estimation in Turing’s Perspective. Neural Computation 24(5): 1368-
1389.
See Also
bcShannon, Tsallis
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Species probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ns)
# Whittaker plot
plot(Ns)
# Calculate Shannon entropy
Shannon(Ps)
# Calculate the best estimator of Shannon entropy
Shannon(Ns)
# Use metacommunity data to calculate reduced-bias Shannon beta as mutual information
(bcShannon(Paracou618.MC$Ns) + bcShannon(colSums(Paracou618.MC$Nsi))
- bcShannon(Paracou618.MC$Nsi))
ShannonBeta Shannon beta entropy of a community
Description
Calculates the Shannon beta entropy of a community belonging to a metacommunity.
Usage
ShannonBeta(NorP, NorPexp = NULL, ...)
bcShannonBeta(Ns, Nexp, Correction = "Best", CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
ShannonBeta(NorP, NorPexp = NULL, ...,
CheckArguments = TRUE, Ps = NULL, Pexp = NULL)
## S3 method for class 'AbdVector'
ShannonBeta(NorP, NorPexp = NULL, Correction = "Best", ...,
CheckArguments = TRUE, Ns = NULL, Nexp = NULL)
## S3 method for class 'integer'
ShannonBeta(NorP, NorPexp = NULL, Correction = "Best", ...,
CheckArguments = TRUE, Ns = NULL, Nexp = NULL)
## S3 method for class 'numeric'
ShannonBeta(NorP, NorPexp = NULL, Correction = "Best", ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL, Pexp = NULL, Nexp = NULL)
Arguments
Ps The probability vector of species of the community.
Pexp The probability vector of species of the metacommunity.
Ns A numeric vector containing species abundances of the community.
Nexp A numeric vector containing species abundances of the metacommunity.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities of the community.
NorPexp A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities of the metacommunity.
Correction A string containing one of the possible corrections: currently, "ChaoShen" (Marcon et al., 2012), equivalent to "Best", and "ZhangGrabchak" (Zhang and Grabchak, 2014).
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The derivation of Shannon beta entropy can be found in Marcon et al. (2012).
Bias correction requires the number of individuals to estimate sample Coverage. Use bcShannonBeta
and choose the Correction.
The functions are designed to be used as simply as possible. ShannonBeta is a generic method.
If its first argument is an abundance vector, an integer vector or a numeric vector which does not
sum to 1, the bias corrected function bcShannonBeta is called. Explicit calls to bcShannonBeta
(with bias correction) or to ShannonBeta.ProbaVector (without correction) are possible to avoid
ambiguity. The .integer and .numeric methods accept Ps or Ns arguments instead of NorP for
backward compatibility.
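The plug-in version of this beta entropy is the Kullback-Leibler divergence of the community distribution from the metacommunity distribution. A minimal base-R sketch (names illustrative; ShannonBeta/bcShannonBeta add the bias corrections described above):

```r
# Shannon beta entropy of a community (probabilities p) within a
# metacommunity (probabilities pexp): a Kullback-Leibler divergence.
shannon_beta_plugin <- function(p, pexp) {
  keep <- p > 0                       # ignore absent species
  sum(p[keep] * log(p[keep] / pexp[keep]))
}
p    <- c(0.6, 0.3, 0.1)
pexp <- c(0.4, 0.4, 0.2)
shannon_beta_plugin(p, pexp)          # positive divergence
shannon_beta_plugin(pexp, pexp)       # 0 when the community matches
```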
Value
A number equal to the calculated entropy.
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2012). The Decomposition of Shannon’s
Entropy and a Confidence Interval for Beta Diversity. Oikos 121(4): 516-522.
<NAME>. and <NAME>. (2014). Nonparametric Estimation of Kullback-Leibler Divergence.
Neural computation 26(11): 2570-2593.
See Also
bcShannonBeta
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ps)
# Probability distribution of the first plot
Ps1 <- as.ProbaVector(Paracou618.MC$Psi[, 1])
# Shannon beta entropy of the plot
ShannonBeta(Ps1, Ps)
# Ns is the vector of abundances of the metacommunity
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Abundances in the first plot
Ns1 <- as.AbdVector(Paracou618.MC$Nsi[, 1])
# Reduced-bias estimator of Shannon beta entropy of the plot
bcShannonBeta(Ns1, Ns)
Simpson Simpson entropy of a community
Description
Calculates the Simpson entropy of a probability vector.
Usage
Simpson(NorP, ...)
bcSimpson(Ns, Correction = "Best", CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
Simpson(NorP, ..., CheckArguments = TRUE,
Ps = NULL)
## S3 method for class 'AbdVector'
Simpson(NorP, Correction="Best", Level = NULL, ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
Simpson(NorP, Correction="Best", Level = NULL, ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
Simpson(NorP, Correction="Best", Level = NULL, ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities.
Correction A string containing one of the possible corrections accepted by bcTsallis or
"Lande". "Best", the default value, is currently "Jackknife". Ignored by
interpolation and extrapolation.
Level The level of interpolation or extrapolation. It may be a chosen sample size (an integer) or a sample coverage (a number between 0 and 1).
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Lande’s correction has been derived (Lande, 1996; Good, 1953) especially for Simpson entropy, while other corrections are for generalized Tsallis entropy. It is identical to the unbiased estimator proposed by Simpson, although the arguments were different. It equals the plug-in estimator multiplied by n/(n-1), where n is the total number of individuals.
Bias correction requires the number of individuals to estimate sample Coverage.
The functions are designed to be used as simply as possible. Simpson is a generic method. If its
first argument is an abundance vector, an integer vector or a numeric vector which does not sum to
1, the bias corrected function bcSimpson is called.
Entropy can be estimated at a specified level of interpolation or extrapolation, either a chosen sample
size or sample coverage (Chao et al., 2014), rather than its asymptotic value. Simpson’s extrapolated
entropy estimator does not rely on the estimation of the asymptotic distribution.
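The identity between Lande's estimator and the rescaled plug-in estimator can be checked directly in base R (function name illustrative):

```r
# Lande's unbiased Simpson entropy from abundances, illustrating the
# identity stated above: plug-in estimate times n/(n-1).
simpson_lande <- function(Ns) {
  n <- sum(Ns)
  1 - sum(Ns * (Ns - 1)) / (n * (n - 1))
}
Ns <- c(5, 3, 2)
plugin <- 1 - sum((Ns / sum(Ns))^2)   # 0.62
simpson_lande(Ns)                     # 0.62 * 10/9, about 0.6889
```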
Value
A named number equal to the calculated entropy. The name is that of the bias correction used.
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>
(2014). Rarefaction and extrapolation with Hill numbers: A framework for sampling and estimation
in species diversity studies. Ecological Monographs, 84(1): 45-67.
<NAME>. (1953). On the Population Frequency of Species and the Estimation of Population
Parameters. Biometrika 40(3/4): 237-264.
<NAME>. (1996). Statistics and partitioning of species diversity, and similarity among multiple
communities. Oikos 76: 5-13.
<NAME>. (1949). Measurement of diversity. Nature 163(4148): 688.
See Also
Tsallis, bcSimpson
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Whittaker plot
plot(Ns)
# Calculate an unbiased estimator of Simpson's index of diversity
Simpson(Ns)
SimpsonBeta Simpson beta entropy of a community
Description
Calculates the Simpson beta entropy of a community belonging to a metacommunity.
Usage
SimpsonBeta(NorP, NorPexp = NULL, ...)
bcSimpsonBeta(Ns, Nexp, Correction = "Best", CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
SimpsonBeta(NorP, NorPexp = NULL, ...,
CheckArguments = TRUE, Ps = NULL, Pexp = NULL)
## S3 method for class 'AbdVector'
SimpsonBeta(NorP, NorPexp = NULL, Correction = "Best", ...,
CheckArguments = TRUE, Ns = NULL, Nexp = NULL)
## S3 method for class 'integer'
SimpsonBeta(NorP, NorPexp = NULL, Correction = "Best", ...,
CheckArguments = TRUE, Ns = NULL, Nexp = NULL)
## S3 method for class 'numeric'
SimpsonBeta(NorP, NorPexp = NULL, Correction = "Best", ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL, Pexp = NULL, Nexp = NULL)
Arguments
Ps The probability vector of species of the community.
Pexp The probability vector of species of the metacommunity.
Ns A numeric vector containing species abundances of the community.
Nexp A numeric vector containing species abundances of the metacommunity.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities of the community.
NorPexp A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities of the metacommunity.
Correction A string containing one of the possible corrections: currently, only "ChaoShen",
identical to "Best".
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The derivation of Tsallis beta entropy (Simpson is Tsallis of order 2) can be found in Marcon et al.
(2014).
Bias correction requires the number of individuals to estimate sample Coverage. Use bcSimpsonBeta
and choose the Correction.
Note that the Simpson beta entropy value is related to the Simpson alpha entropy value and cannot be compared across communities (Jost, 2007). Beta entropy of a community is not meaningful in general; rather, calculate the BetaDiversity of order 2 of the metacommunity.
The functions are designed to be used as simply as possible. SimpsonBeta is a generic method.
If its first argument is an abundance vector, an integer vector or a numeric vector which does not
sum to 1, the bias corrected function bcSimpsonBeta is called. Explicit calls to bcSimpsonBeta
(with bias correction) or to SimpsonBeta.ProbaVector (without correction) are possible to avoid
ambiguity. The .integer and .numeric methods accept Ps or Ns arguments instead of NorP for
backward compatibility.
Value
A named number equal to the calculated entropy. The name is that of the bias correction used.
References
Jost (2007), Partitioning diversity into independent alpha and beta components. Ecology 88(10):
2427-2439.
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
See Also
Simpson, bcSimpsonBeta, BetaDiversity
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ps)
# Probability distribution of the first plot
Ps1 <- as.ProbaVector(Paracou618.MC$Psi[, 1])
# Simpson beta entropy of the plot
SimpsonBeta(Ps1, Ps)
# Transform into diversity
expq(SimpsonBeta(Ps1, Ps)/(1-Simpson(Ps1)), 2)
# Ns is the vector of abundances of the metacommunity
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Abundances in the first plot
Ns1 <- as.AbdVector(Paracou618.MC$Nsi[, 1])
# Reduced-bias estimator of Simpson beta entropy of the plot
bcSimpsonBeta(Ns1, Ns)
SimTest SimTest class
Description
Methods for objects of type "SimTest", used to test a value against its distribution under a simulated
null hypothesis.
Usage
as.SimTest(RealValue, SimulatedValues)
is.SimTest(x)
## S3 method for class 'SimTest'
autoplot(object, Quantiles = c(0.025, 0.975), ...,
colValue = "red", colQuantiles = "black", ltyQuantiles = 2,
main = NULL, xlab = "Simulated Values", ylab = "Density")
## S3 method for class 'SimTest'
plot(x, Quantiles = c(0.025, 0.975), ...,
colValue = "red", lwdValue = 2, ltyValue = 2,
colQuantiles = "black", lwdQuantiles = 1, ltyQuantiles = 2,
main = NULL, xlab = "Simulated Values", ylab = "Density")
## S3 method for class 'SimTest'
summary(object, Quantiles = c(0.025, 0.975), ...)
Arguments
x An object to be tested or plotted.
object An object.
RealValue A numeric Value (the actual one).
SimulatedValues
A numeric vector containing the simulated values.
Quantiles A vector containing the quantiles of interest.
colValue The color of the line representing the real value on the plot.
lwdValue The width of the line representing the real value on the plot.
ltyValue The line type of the line representing the real value on the plot.
colQuantiles The color of the lines representing the quantiles on the plot.
lwdQuantiles The width of the lines representing the quantiles on the plot.
ltyQuantiles The line type of the lines representing the quantiles on the plot.
main The main title of the plot. If NULL (the default), there is no title.
xlab The X axis label.
ylab The Y axis label.
... Additional arguments to be passed to the generic methods.
Details
Simulated values should be obtained by simulation. The actual value is compared to simulated
quantiles. SimTest objects can be plotted and summarized.
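The test logic can be sketched in base R: compare the real value to quantiles of the simulated null distribution, as summary.SimTest does (variable names illustrative):

```r
# A value is "significant" if it falls outside the chosen quantiles
# of the values simulated under the null hypothesis.
set.seed(1)
Real <- 0.8
Sims <- rnorm(1000)                          # simulated null distribution
q <- quantile(Sims, c(0.025, 0.975))
inside <- unname(Real > q[1] & Real < q[2])  # TRUE: not rejected at 5%
ecdf(Sims)(Real)                             # empirical quantile of Real
```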
Value
SimTest objects are lists containing:
RealValue The value to test.
SimulatedValues
A vector of simulated values, whose quantiles will be used for the test.
is.SimTest returns TRUE if the object is of class SimTest.
summary.SimTest returns a summary of the object, including the empirical quantile of the real value in the simulated distribution.
Examples
# Set the value to test
Real <- 0.8
# Is it a realization of a Gaussian distribution?
Sims <- rnorm(1000)
# Make a Simtest object
st <- as.SimTest(Real, Sims)
summary(st)
# Plot
plot(st)
# ggplot
autoplot(st)
SpeciesDistribution Species Distributions
Description
A Species Distribution is a (preferably named) vector containing species abundances or probabilities.
Usage
as.SpeciesDistribution(x, ...)
## S3 method for class 'data.frame'
as.SpeciesDistribution(x, ...)
## S3 method for class 'integer'
as.SpeciesDistribution(x, ...)
## S3 method for class 'numeric'
as.SpeciesDistribution(x, ...)
## S3 method for class 'SpeciesDistribution'
autoplot(object, ..., Distribution = NULL,
ylog = TRUE, main = NULL, xlab = "Rank", ylab = NULL,
pch = ggplot2::GeomPoint$default_aes$shape,
col = ggplot2::GeomPoint$default_aes$colour,
cex = ggplot2::GeomPoint$default_aes$size)
## S3 method for class 'SpeciesDistribution'
plot(x, ..., Distribution = NULL,
type = "b", log = "y", main = NULL, xlab = "Rank", ylab = NULL)
is.SpeciesDistribution(x)
as.ProbaVector(x, ...)
## S3 method for class 'data.frame'
as.ProbaVector(x, ...)
## S3 method for class 'integer'
as.ProbaVector(x, Correction = "None", Unveiling = "None",
RCorrection = "Jackknife", JackOver = FALSE, JackMax = 10,
CEstimator = "ZhangHuang", q = 0, ..., CheckArguments = TRUE)
## S3 method for class 'numeric'
as.ProbaVector(x, Correction = "None", Unveiling = "None",
RCorrection = "Jackknife", JackOver = FALSE, JackMax = 10,
CEstimator = "ZhangHuang", q = 0, ..., CheckArguments = TRUE)
is.ProbaVector(x)
as.AbdVector(x, ...)
## S3 method for class 'data.frame'
as.AbdVector(x, Round = TRUE, ...)
## S3 method for class 'integer'
as.AbdVector(x, ...)
## S3 method for class 'numeric'
as.AbdVector(x, Round = TRUE, ...)
is.AbdVector(x)
Arguments
x An object.
object An object.
Distribution The distribution to fit on the plot. May be "lnorm" (log-normal), "lseries" (log-series), "geom" (geometric) or "bstick" (broken stick). If NULL, no distribution is fitted. See rCommunity for the description of these distributions.
Round If TRUE (by default), values of x are set to integer to create an AbdVector. This is useful if original abundances are not integers (this is often the case for MetaCommunity abundances, which are the product of probabilities by the number of individuals) and integer values are required (for example to calculate the bootstrap confidence interval of a community profile).
Correction A string containing one of the possible corrections to estimate a probability distribution: "None" (no correction, the default value), or "Chao2013", "Chao2015", "ChaoShen" to estimate the probability of the observed species in the asymptotic distribution.
Unveiling A string containing one of the possible unveiling methods to estimate the probabilities of the unobserved species: "None" (default, no species is added), "unif" (uniform: all unobserved species have the same probability) or "geom" (geometric: the unobserved species distribution is geometric).
RCorrection A string containing a correction recognized by Richness to evaluate the total number of species. "Jackknife" is the default value. An alternative is "Rarefy" to estimate the number of species such that the entropy of order q of the asymptotic distribution rarefied to the observed sample size equals the actual entropy of the data.
JackOver If TRUE, retain the jackknife order immediately superior to the optimal one, usually resulting in the overestimation of the number of species. Default is FALSE. Ignored if RCorrection is not "Jackknife".
JackMax The highest jackknife order allowed. Default is 10. Allowed values are between
1 and 10.
CEstimator A string containing an estimator recognized by Coverage to evaluate the sample
coverage. "ZhangHuang" is the default value.
q A number: the order of entropy. Default is 0 for richness. Used only to estimate asymptotic probability distributions with RCorrection equal to "Rarefy". Then, the number of unobserved species is fitted so that the entropy of order q of the asymptotic probability distribution at the observed sample size equals the actual entropy of the data.
type The plot type, see plot.
log The axis to plot in log scale, e.g. "xy" for both axes. Default is "y".
main The main title of the plot. If NULL (the default), there is no title.
xlab The X axis label, "Rank" by default.
ylab The Y axis label. If NULL (the default), "Probability" or "Abundance" is chosen according to the object class.
ylog Logical; if TRUE (by default), the Y-axis of the plot is log scaled.
pch The plotting characters. See points.
col The color of the geom objects. See "Color Specification" in par.
cex The character expansion (size) of the points. See points.
... Additional arguments to be passed to plot. Unused elsewhere.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
SpeciesDistribution objects include AbdVector and ProbaVector objects.
as.AbdVector just sets the class of the numeric or integer x so that appropriate versions of community functions (generic methods such as Diversity) are applied. Abundance values are rounded (by default) to the nearest integer.
as.ProbaVector normalizes the vector so that it sums to 1. If Correction is not "None", the observed abundance distribution is used to estimate the actual species distribution. The list of species will be changed: zero-abundance species will be cleared, and some unobserved species will be added. First, observed species probabilities are estimated following Chao and Shen (2003), i.e. input probabilities are multiplied by the sample coverage, or according to more sophisticated models: Chao et al. (2013, single-parameter model) or Chao et al. (2015, two-parameter model). The total probability of observed species equals the sample coverage. Then, the distribution of unobserved species can be unveiled: their number is estimated according to RCorrection (if the jackknife estimator is chosen, the JackOver argument allows using the order immediately over the optimal one). The coverage deficit (1 minus the sample coverage) is shared by the unobserved species equally (Unveiling = "unif", Chao et al., 2013) or according to a geometric distribution (Unveiling = "geom", Chao et al., 2015).
These functions can be applied to data frames to calculate the joint diversity (Gregorius, 2010).
SpeciesDistribution objects can be plotted. The plot method returns the estimated parameters
of the fitted distribution. The broken stick has no parameter, so the maximum abundance is returned.
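The Chao and Shen (2003) step can be sketched in base R, here with the simple Turing coverage estimate 1 - singletons/n rather than the package's default "ZhangHuang" estimator (names illustrative):

```r
# Tune observed frequencies by the estimated sample coverage, as in the
# Chao-Shen step of as.ProbaVector. Simple Turing coverage estimate used
# here for illustration only.
Ns <- c(4, 2, 1, 1, 1, 1)            # abundances, 4 singletons
n  <- sum(Ns)                        # 10 individuals
C  <- 1 - sum(Ns == 1) / n           # estimated sample coverage: 0.6
Ptuned <- C * Ns / n                 # observed-species probabilities
sum(Ptuned)                          # equals C; the deficit 1 - C is
                                     # shared among unobserved species
```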
Note
Fisher’s alpha (Fisher et al., 1943) is estimated to fit the log-series distribution. The estimation
is done by the fisher.alpha function of package vegan. It may differ substantially from the
estimation returned by optimal.theta from package untb.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
References
Chao, A. and <NAME>. (2003). Nonparametric estimation of Shannon’s index of diversity when
there are unseen species in sample. Environmental and Ecological Statistics 10(4): 429-443.
Chao, A., <NAME>. and <NAME>. (2013). Entropy and the species accumulation curve: a novel entropy estimator via discovery rates of new species. Methods in Ecology and Evolution 4(11): 1091-1100.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2015) Unveiling the Species-
Rank Abundance Distribution by Generalizing Good-Turing Sample Coverage Theory. Ecology
96(5): 1189-1201.
<NAME>., <NAME>., <NAME>. (1943) The Relation Between the Number of Species
and the Number of Individuals in a Random Sample of an Animal Population. Journal of Animal
Ecology 12: 42-58.
<NAME>. (2010) Linking Diversity and Differentiation. Diversity 2(3): 370-394.
See Also
rgeom, rlnorm, rCommunity, RAClnorm
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Whittaker plot, poorly fitted by a log-normal distribution
plot(Ns, Distribution = "lnorm")
# ggplot version
autoplot(Ns, Distribution = "lnorm")
Tsallis Tsallis (HCDT) Entropy of a community
Description
Calculates the HCDT entropy, also known as Tsallis entropy, of order q of a probability vector.
Usage
Tsallis(NorP, q = 1, ...)
bcTsallis(Ns, q = 1, Correction = "Best", SampleCoverage = NULL,
CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
Tsallis(NorP, q = 1, ...,
CheckArguments = TRUE, Ps = NULL)
## S3 method for class 'AbdVector'
Tsallis(NorP, q = 1, Correction = "Best", Level = NULL,
PCorrection="Chao2015", Unveiling="geom", RCorrection="Rarefy", ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'integer'
Tsallis(NorP, q = 1, Correction = "Best", Level = NULL,
PCorrection="Chao2015", Unveiling="geom", RCorrection="Rarefy", ...,
CheckArguments = TRUE, Ns = NULL)
## S3 method for class 'numeric'
Tsallis(NorP, q = 1, Correction = "Best", Level = NULL,
PCorrection="Chao2015", Unveiling="geom", RCorrection="Rarefy", ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL)
Arguments
Ps A probability vector, summing to 1.
Ns A numeric vector containing species abundances.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities.
q A number: the order of entropy. Some corrections allow only a positive number.
Default is 1 for Shannon entropy.
Correction A string containing one of the possible asymptotic estimators: "None" (no correction), "ChaoShen", "GenCov", "Grassberger", "Holste", "Bonachela", "ZhangGrabchak", "ChaoJost", "Marcon", "UnveilC", "UnveiliC", "UnveilJ" or "Best", the default value. Currently, "Best" is "UnveilJ".
Level The level of interpolation or extrapolation. It may be a chosen sample size (an integer) or a sample coverage (a number between 0 and 1).
PCorrection A string containing one of the possible corrections to estimate a probability distribution in as.ProbaVector: "Chao2015" is the default value. Used only for extrapolation.
Unveiling A string containing one of the possible unveiling methods to estimate the probabilities of the unobserved species in as.ProbaVector: "geom" (the unobserved species distribution is geometric) is the default value. Used only for extrapolation.
RCorrection A string containing a correction recognized by Richness to evaluate the total number of species in as.ProbaVector. "Rarefy" is the default value, to estimate the number of species such that the entropy of the asymptotic distribution rarefied to the observed sample size equals the observed entropy of the data. Used only for extrapolation.
SampleCoverage The sample coverage of Ns calculated elsewhere. Used to calculate the gamma
diversity of meta-communities, see details.
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
Tsallis (Havrda and Charvat, 1967; Daroczy, 1970; Tsallis, 1988) generalized entropy is a generalized measure of diversity (Jost, 2006).
Bias correction requires the number of individuals to estimate sample Coverage.
Correction techniques are from Chao and Shen (2003), Grassberger (1988), Holste et al. (1998), Bonachela et al. (2008), Marcon et al. (2014) (actually the maximum of the "ChaoShen" and "Grassberger" corrections), Zhang and Grabchak (2014), Chao and Jost (2015) and Marcon (2015).
The "ChaoJost" (Chao, Wang and Jost, 2013 for q = 1; Chao and Jost, 2015) estimator contains
an unbiased part concerning observed species, equal to that of Zhang and Grabchak (2014), and a
(biased) estimator of the remaining bias based on the estimation of the species-accumulation curve.
It is very efficient but very slow if the number of individuals is more than a few hundreds. This
estimator was named "ChaoWangJost" in previous versions of the package; its old name is still
supported for backward compatibility.
The unveiled estimators rely on Chao et al. (2015), completed by Marcon (2015). The actual probabilities of observed species are estimated and completed by a geometric distribution of the probabilities of unobserved species. The number of unobserved species is estimated by the Chao1 estimator ("UnveilC"), following Chao et al. (2015), or by the iChao1 ("UnveiliC") or the jackknife ("UnveilJ"). The "UnveilJ" correction often has a lower bias but a greater variance (Marcon, 2015). It is a good first choice thanks to the versatility of the jackknife estimator of richness.
The functions are designed to be used as simply as possible. Tsallis is a generic method. If its
first argument is an abundance vector, an integer vector or a numeric vector which does not sum to
1, the bias corrected function bcTsallis is called.
The size of a metacommunity (see MetaCommunity) is unknown so it has to be set according to a
rule which does not ensure that its abundances are integer values. Then, classical bias-correction
methods do not apply. Providing the SampleCoverage argument allows applying the "ChaoShen"
and "Grassberger" corrections to estimate quite well the entropy. DivPart and GammaEntropy
functions use this tweak.
Entropy can be estimated at a specified level of interpolation or extrapolation, either a chosen sample size or sample coverage (Chao et al., 2014), rather than its asymptotic value. The special cases q = 0, 1 and 2 are treated by the Richness, Shannon and Simpson functions. For extrapolation of entropy at other values of q, the asymptotic distribution of the community must be estimated by as.ProbaVector. The default arguments allow joining smoothly the extrapolated entropy and the observed entropy, by estimating the number of unobserved species so that the entropy of the observed distribution equals the entropy of the asymptotic distribution rarefied to the actual sample size.
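For reference, the plug-in HCDT entropy of order q is (1 - sum(p^q)) / (q - 1), with Shannon entropy as the q -> 1 limit and Simpson entropy at q = 2. A minimal base-R sketch (function name illustrative, without the bias corrections above):

```r
# Plug-in Tsallis (HCDT) entropy of order q.
tsallis_plugin <- function(p, q) {
  p <- p[p > 0]                              # drop absent species
  if (q == 1) -sum(p * log(p))               # Shannon limit
  else (1 - sum(p^q)) / (q - 1)
}
p <- c(0.5, 0.25, 0.25)
tsallis_plugin(p, 2)   # Simpson entropy: 1 - sum(p^2) = 0.625
tsallis_plugin(p, 1)   # Shannon entropy: 1.5 * log(2)
```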
Value
A named number equal to the calculated entropy. The name is that of the bias correction used.
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>
(2014). Rarefaction and extrapolation with Hill numbers: A framework for sampling and estimation
in species diversity studies. Ecological Monographs, 84(1): 45-67.
<NAME>. and <NAME>. (2015) Estimating diversity and entropy profiles via discovery rates of new
species. Methods in Ecology and Evolution 6(8): 873-882.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2015) Unveiling the Species-
Rank Abundance Distribution by Generalizing Good-Turing Sample Coverage Theory. Ecology
96(5): 1189-1201.
<NAME>., <NAME>. and <NAME>. (2013). Entropy and the species accumulation curve: a novel entropy estimator via discovery rates of new species. Methods in Ecology and Evolution 4(11): 1091-1100.
<NAME>. and <NAME>. (1967). Quantification method of classification processes. Concept of
structural a-entropy. Kybernetika 3(1): 30-35.
<NAME>. (1970). Generalized information functions. Information and Control 16(1): 36-51.
<NAME>. (2006). Entropy and diversity. Oikos 113(2): 363-375.
<NAME>. (2015) Practical Estimation of Diversity from Abundance Data. HAL 01212435: 1-27.
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
Tsallis, C. (1988). Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical
Physics 52(1): 479-487.
<NAME>., and <NAME>. (2016). Entropic Representation and Estimation of Diversity Indices.
Journal of Nonparametric Statistics, 28(3): 563-575.
Examples
# Load Paracou data (number of trees per species in two 1-ha plots of a tropical forest)
data(Paracou618)
# Ns is the total number of trees per species
Ns <- as.AbdVector(Paracou618.MC$Ns)
# Species probabilities
Ps <- as.ProbaVector(Paracou618.MC$Ns)
# Whittaker plot
plot(Ns)
# Calculate entropy of order 1, i.e. Shannon's entropy
Tsallis(Ps, 1)
# Calculate it with estimation bias correction
Tsallis(Ns, 1)
TsallisBeta Tsallis beta entropy of a community
Description
Calculates the Tsallis beta entropy of order q of a community belonging to a metacommunity.
Usage
TsallisBeta(NorP, NorPexp = NULL, q = 1, ...)
bcTsallisBeta(Ns, Nexp = NULL, q, Correction = "Best", CheckArguments = TRUE)
## S3 method for class 'ProbaVector'
TsallisBeta(NorP, NorPexp = NULL, q = 1, ...,
CheckArguments = TRUE, Ps = NULL, Pexp = NULL)
## S3 method for class 'AbdVector'
TsallisBeta(NorP, NorPexp = NULL, q = 1, Correction = "Best", ...,
CheckArguments = TRUE, Ns = NULL, Nexp = NULL)
## S3 method for class 'integer'
TsallisBeta(NorP, NorPexp = NULL, q = 1, Correction = "Best", ...,
CheckArguments = TRUE, Ns = NULL, Nexp = NULL)
## S3 method for class 'numeric'
TsallisBeta(NorP, NorPexp = NULL, q = 1, Correction = "Best", ...,
CheckArguments = TRUE, Ps = NULL, Ns = NULL, Pexp = NULL, Nexp = NULL)
Arguments
Ps The probability vector of species of the community.
Pexp The probability vector of species of the metacommunity.
Ns A numeric vector containing species abundances of the community.
Nexp A numeric vector containing species abundances of the metacommunity.
NorP A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities of the community.
NorPexp A numeric vector, an integer vector, an abundance vector (AbdVector) or a probability vector (ProbaVector). Contains either abundances or probabilities of the metacommunity.
q A number: the order of entropy. Default is 1 for Shannon entropy.
Correction A string containing one of the possible corrections: currently, only "ChaoShen"
or "None". "Best" is the default value, it is equivalent to "ChaoShen".
... Additional arguments. Unused.
CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to
save time when the arguments have been checked elsewhere.
Details
The derivation of Tsallis beta entropy can be found in Marcon et al. (2014).
Bias correction requires the number of individuals to estimate sample Coverage. Use bcTsallisBeta
and choose the Correction.
Note that the beta entropy value is related to the alpha entropy value (if q is not 1) and cannot be compared across communities (Jost, 2007). Beta entropy of a community is not meaningful in general; rather, calculate the BetaDiversity of the metacommunity.
The functions are designed to be used as simply as possible. TsallisBeta is a generic method.
If its first argument is an abundance vector, an integer vector or a numeric vector which does not
sum to 1, the bias corrected function bcTsallisBeta is called. Explicit calls to bcTsallisBeta
(with bias correction) or to TsallisBeta.ProbaVector (without correction) are possible to avoid
ambiguity. The .integer and .numeric methods accept Ps or Ns arguments instead of NorP for
backward compatibility.
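As a point of reference for what the function computes, a minimal sketch of one common definition of the Tsallis relative (beta) entropy of order q is shown below. This is an illustration only, not the entropart implementation: it uses the textbook formula D_q = (sum(p^q * e^(1-q)) - 1)/(q - 1), which tends to the Kullback-Leibler divergence as q approaches 1, and it applies no bias correction.

```python
import numpy as np

def tsallis_beta(ps, pexp, q=1.0):
    """Tsallis relative entropy of order q between a community
    distribution ps and a metacommunity distribution pexp.

    Illustrative definition: D_q = (sum(p^q * e^(1-q)) - 1) / (q - 1),
    reducing to the Kullback-Leibler divergence in the limit q -> 1.
    """
    ps = np.asarray(ps, dtype=float)
    pexp = np.asarray(pexp, dtype=float)
    if np.isclose(q, 1.0):
        # q -> 1 limit: Kullback-Leibler divergence, skipping zero probabilities
        mask = ps > 0
        return float(np.sum(ps[mask] * np.log(ps[mask] / pexp[mask])))
    return float((np.sum(ps**q * pexp**(1.0 - q)) - 1.0) / (q - 1.0))
```

With identical distributions the divergence is zero at every order, as expected for a between-community measure.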
Value
A named number equal to the calculated entropy. The name is that of the bias correction used.
References
Jost (2007), Partitioning diversity into independent alpha and beta components. Ecology 88(10):
2427-2439.
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). Generalization of the partitioning
of Shannon diversity. PLOS One 9(3): e90289.
Examples
# Load Paracou data (number of trees per species in two 1-ha plot of a tropical forest)
data(Paracou618)
# Ps is the vector of probabilities
Ps <- Paracou618.MC$Ps
# Probability distribution of the first plot
Ps1 <- Paracou618.MC$Psi[, 1]
# Divergence of order 2 between plot 1 and the whole forest
TsallisBeta(Ps1, Ps, 2)
# Ns is the vector of abundances of the metacommunity
Ns <- Paracou618.MC$Ns
# Abundances in the first plot
Ns1 <- Paracou618.MC$Nsi[, 1]
# Divergence of order 2 between plot 1 and the whole forest, with bias correction
bcTsallisBeta(Ns1, Ns, 2)
Package ‘countgmifs’
October 12, 2022
Title Discrete Response Regression for High-Dimensional Data
Version 0.0.2
Description Provides a function for fitting Poisson and negative binomial regression models
when the number of parameters exceeds the sample size, using the generalized monotone
incremental forward stagewise method.
Depends R (>= 3.5.0), MASS
License GPL (>= 2)
Encoding UTF-8
LazyData true
RoxygenNote 6.0.1.9000
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-01-08 14:20:02 UTC
R topics documented:
countgmifs-package
coef.countgmifs
countgmifs
plot.countgmifs
predict.countgmifs
print.countgmifs
summary.countgmifs
countgmifs-package Discrete Response Regression for High-Dimensional Data: Discrete
Response Generalized Monotone Incremental Forward Stagewise Re-
gression
Description
This package provides a function that fits a Poisson or negative binomial model when the number
of parameters exceeds the sample size, using the generalized monotone incremental forward
stagewise method.
Details
The DESCRIPTION file:
Package: countgmifs
Title: Discrete Response Regression for High-Dimensional Data
Version: 0.0.2
Authors@R: person("Kellie", "Archer", email = "<EMAIL>", role = c("aut", "cre"))
Description: Provides a function for fitting Poisson and negative binomial regression models when the number of parameters exceeds the sample size, using the generalized monotone incremental forward stagewise method.
Depends: R (>= 3.5.0), MASS
License: GPL (>=2)
Encoding: UTF-8
LazyData: true
RoxygenNote: 6.0.1.9000
Author: <NAME> [aut, cre]
Maintainer: <NAME> <<EMAIL>>
Index of help topics:
coef.countgmifs Extract Model Coefficients.
countgmifs Discrete Response Generalized Monotone
Incremental Forward Stagewise Regression.
countgmifs-package Discrete Response Regression for
High-Dimensional Data: Discrete Response
Generalized Monotone Incremental Forward
Stagewise Regression
plot.countgmifs Plot Solution Path for a Count GMIFS Fitted
Model.
predict.countgmifs Predict Outcome for Count GMIFS Fitted Model.
print.countgmifs Print the Contents of a Count GMIFS Fitted
Object.
summary.countgmifs Summarize a Count GMIFS Object.
This package contains functions for fitting a penalized discrete response model (either negative
binomial or Poisson) and extracting estimated coefficients, predictions, and plots. The model and
methods can be used when the response to be predicted is discrete, and is particularly relevant when
there are more covariates than observations.
Author(s)
<NAME> <<EMAIL>>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>. (2015) Generalized monotone incremental forward stagewise method for
modeling count data: application predicting micronuclei frequency. Cancer Informatics, 14(Suppl
2), 97–105.
coef.countgmifs Extract Model Coefficients.
Description
A generic function which extracts the model coefficients from a fitted model object fit using countgmifs.
Usage
## S3 method for class 'countgmifs'
coef(object, model.select = "BIC", ...)
Arguments
object a countgmifs fitted object.
model.select when x is specified, any model along the solution path can be selected. The
default is model.select="BIC", which calculates the predicted values using the
coefficients from the model having the lowest BIC. Other options are model.select="AIC"
or any numeric value from the solution path.
... other arguments.
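The model.select mechanism amounts to scoring every step of the solution path with an information criterion and taking the minimizer. A minimal sketch of that selection rule (hypothetical code, not the package's implementation; it assumes the log-likelihood and the number of nonzero coefficients have been recorded at each step):

```python
import numpy as np

def select_step(loglik, df, n, criterion="BIC"):
    """Pick the solution-path step minimizing AIC or BIC.

    loglik: log-likelihood at each step; df: number of nonzero
    coefficients at each step; n: sample size.
    """
    loglik = np.asarray(loglik, dtype=float)
    df = np.asarray(df, dtype=float)
    # AIC penalizes each coefficient by 2, BIC by log(n)
    penalty = 2.0 if criterion == "AIC" else np.log(n)
    return int(np.argmin(-2.0 * loglik + penalty * df))
```

Passing a numeric value instead, as the argument description notes, simply bypasses this minimization and indexes the path directly.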
See Also
See Also countgmifs, predict.countgmifs, summary.countgmifs, plot.countgmifs
countgmifs Discrete Response Generalized Monotone Incremental Forward
Stagewise Regression.
Description
This function can fit a Poisson or negative binomial model when the number of parameters exceeds
the sample size, using the generalized monotone incremental forward stagewise method.
Usage
countgmifs(formula, data, x = NULL, offset, subset, epsilon = 0.001,
tol = 1e-05, scale = TRUE, verbose = FALSE, family = "nb", ...)
Arguments
formula an object of class "formula" (or one that can be coerced to that class): a sym-
bolic description of the model to be fitted. The left side of the formula is the
count outcome while the variables on the right side of the formula are the
covariates that are not included in the penalization process. Note that if all vari-
ables in the model are to be penalized, an intercept-only model formula should
be specified.
data an optional data frame, list or environment (or object coercible by as.data.frame
to a data frame) containing the variables in the model.
x an optional matrix of predictors that are to be penalized in the model fitting
process.
offset this can be used to specify an a priori known component to be included during
fitting (e.g., denominator term). This should be NULL or a numeric vector of
length equal to the number of cases.
subset an optional vector specifying a subset of observations to be used in the fitting
process.
epsilon small incremental amount used to update a coefficient at a given step.
tol the iterative process stops when the difference between successive log-likelihoods
is less than this specified level of tolerance.
scale logical, if TRUE (default) the penalized predictors are centered and scaled.
verbose logical, if TRUE the step number is printed to the console (default is FALSE).
family the type of count response model to be fit. Default is ’nb’ for negative binomial;
user can also specify ’poisson’.
... other arguments.
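The epsilon and tol arguments describe the core GMIFS loop: at each step the coefficient with the steepest log-likelihood ascent is incremented by epsilon, and iteration stops once successive log-likelihoods differ by less than tol. A minimal Poisson-only sketch of this idea (hypothetical code, not the package's implementation; it omits intercept updating, predictor scaling, and negative binomial dispersion estimation):

```python
import numpy as np

def gmifs_poisson(X, y, epsilon=0.01, tol=1e-5, max_steps=10000):
    """Monotone incremental forward stagewise fit of a Poisson model.

    Expands X to [X, -X] so that every update is a positive increment
    of size epsilon on one coefficient; stops when the log-likelihood
    gain falls below tol.
    """
    Xe = np.hstack([X, -X])
    beta = np.zeros(Xe.shape[1])
    intercept = np.log(y.mean())  # crude fixed intercept estimate

    def loglik(b):
        eta = intercept + Xe @ b
        return float(np.sum(y * eta - np.exp(eta)))

    ll_old = loglik(beta)
    for _ in range(max_steps):
        mu = np.exp(intercept + Xe @ beta)
        grad = Xe.T @ (y - mu)       # gradient of the log-likelihood
        j = int(np.argmax(grad))     # steepest ascent coordinate
        if grad[j] <= 0:
            break
        beta[j] += epsilon           # the single epsilon-sized update
        ll_new = loglik(beta)
        if ll_new - ll_old < tol:    # tolerance-based stopping rule
            break
        ll_old = ll_new
    p = X.shape[1]
    return beta[:p] - beta[p:]       # fold back to signed coefficients
```

Because each update is a fixed positive increment on the expanded design, the coefficient paths are monotone in each direction, which is what produces the stagewise solution path the plot and coef methods traverse.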
See Also
See Also coef.countgmifs, summary.countgmifs, predict.countgmifs, plot.countgmifs
Examples
set.seed(26)
n <- 50 # Sample size
p <- 500 # Number of covariates
intercept <- 0.5
# True parameter values for the 500 covariates
beta <- c(log(1.5), log(1.5), -log(1.5), -log(1.5), -log(1.5), rep(0, 495))
alpha <- 0.5 # Negative binomial dispersion parameter
x <- matrix(rnorm(n * p, 0, 1), nrow = n, ncol = p, byrow = TRUE) # Covariate values
colnames(x) <- paste("Var", 1:p, sep = "")
mu <- exp(intercept + crossprod(t(x), beta))
y <- rnbinom(n = n, size = 1 / alpha, mu = mu) # Discrete response
data <- data.frame(y, x)
nb <- countgmifs(y ~ 1, data = data, offset = NULL, x = x, epsilon = 0.01, tol = 0.001,
    scale = TRUE, verbose = FALSE)
coef.AIC <- coef(nb, model.select = "AIC")
coef.AIC[coef.AIC != 0]
predict(nb, model.select = "AIC")
plot(predict(nb, model.select = "AIC"), y)
plot(nb)
plot.countgmifs Plot Solution Path for a Count GMIFS Fitted Model.
Description
This function plots either the coefficient path, the AIC, or the log-likelihood for a fitted countgmifs
object.
Usage
## S3 method for class 'countgmifs'
plot(x, type = "trace", xlab = NULL, ylab = NULL,
main = NULL, ...)
Arguments
x a countgmifs object.
type default is "trace" which plots the coefficient path for the fitted object. Also
available are "AIC", "BIC", and "logLik".
xlab a default x-axis label will be used which can be changed by specifying a user-
defined x-axis label.
ylab a default y-axis label will be used which can be changed by specifying a user-
defined y-axis label.
main a default main title will be used which can be changed by specifying a user-
defined main title.
... other arguments.
See Also
See Also countgmifs, coef.countgmifs, summary.countgmifs, predict.countgmifs
predict.countgmifs Predict Outcome for Count GMIFS Fitted Model.
Description
This function returns a numeric vector that is the predicted response from the countgmifs fitted
object.
Usage
## S3 method for class 'countgmifs'
predict(object, neww = NULL, newdata, newx = NULL,
model.select = "BIC", newoffset=NULL, ...)
Arguments
object a countgmifs fitted object.
neww an optional formula that includes the unpenalized variables to use for predicting
the response. If omitted, the training data are used.
newdata an optional data.frame that minimally includes the unpenalized variables to use
for predicting the response. If omitted, the training data are used.
newx an optional matrix of penalized variables to use for predicting the response. If
omitted, the training data are used.
model.select when x is specified, any model along the solution path can be selected. The
default is model.select="BIC" which calculates the predicted values using the
coefficients from the model having the lowest BIC. Other options are model.select="AIC"
or any numeric value from the solution path.
newoffset If an offset is used in the fit, then one must be supplied for making predictions.
... other arguments.
See Also
See Also countgmifs, coef.countgmifs, summary.countgmifs, plot.countgmifs
print.countgmifs Print the Contents of a Count GMIFS Fitted Object.
Description
This function prints the names of the list objects from a countgmifs fitted model.
Usage
## S3 method for class 'countgmifs'
print(x, ...)
Arguments
x a countgmifs fitted object.
... other arguments.
See Also
See Also countgmifs, coef.countgmifs, summary.countgmifs, plot.countgmifs
summary.countgmifs Summarize a Count GMIFS Object.
Description
Prints the following items extracted from the fitted countgmifs object: the family used and model
parameter estimates. For models that include x, the parameter estimates, AIC, BIC, and log-
likelihood are printed for the indicated model.select step or, if model.select is not supplied,
for the step at which the minimum BIC was observed.
Usage
## S3 method for class 'countgmifs'
summary(object, model.select = "BIC", ...)
Arguments
object a countgmifs fitted object.
model.select when x is specified, any model along the solution path can be selected. The
default is model.select="BIC" which calculates the predicted values using the
coefficients from the model having the lowest BIC. Other options are model.select="AIC"
or any numeric value from the solution path.
... other arguments.
See Also
See Also countgmifs, coef.countgmifs, predict.countgmifs, plot.countgmifs
Crate aws_sdk_budgets
===
**Please Note: The SDK is currently in Developer Preview and is intended strictly for feedback purposes only. Do not use this SDK for production workloads.**
Use the Amazon Web Services Budgets API to plan your service usage, service costs, and instance reservations. This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for the Amazon Web Services Budgets feature.
Budgets provide you with a way to see the following information:
* How close your plan is to your budgeted amount or to the free tier limits
* Your usage-to-date, including how much you’ve used of your Reserved Instances (RIs)
* Your current estimated charges from Amazon Web Services, and how much your predicted usage will accrue in charges by the end of the month
* How much of your budget has been used
Amazon Web Services updates your budget status several times a day. Budgets track your unblended costs, subscriptions, refunds, and RIs. You can create the following types of budgets:
* **Cost budgets** - Plan how much you want to spend on a service.
* **Usage budgets** - Plan how much you want to use one or more services.
* **RI utilization budgets** - Define a utilization threshold, and receive alerts when your RI usage falls below that threshold. This lets you see if your RIs are unused or under-utilized.
* **RI coverage budgets** - Define a coverage threshold, and receive alerts when the number of your instance hours that are covered by RIs fall below that threshold. This lets you see how much of your instance usage is covered by a reservation.
Service Endpoint
The Amazon Web Services Budgets API provides the following endpoint:
* https://budgets.amazonaws.com
For information about costs that are associated with the Amazon Web Services Budgets API, see Amazon Web Services Cost Management Pricing.
### Getting Started
> Examples are available for many services and operations, check out the
> examples folder in GitHub.
The SDK provides one crate per AWS service. You must add Tokio as a dependency within your Rust project to execute asynchronous code. To add `aws-sdk-budgets` to your project, add the following to your **Cargo.toml** file:
```
[dependencies]
aws-config = "0.56.1"
aws-sdk-budgets = "0.33.0"
tokio = { version = "1", features = ["full"] }
```
Then in code, a client can be created with the following:
```
use aws_sdk_budgets as budgets;
#[::tokio::main]
async fn main() -> Result<(), budgets::Error> {
let config = aws_config::load_from_env().await;
let client = aws_sdk_budgets::Client::new(&config);
// ... make some calls with the client
Ok(())
}
```
See the client documentation for information on what calls can be made, and the inputs and outputs for each of those calls.
### Using the SDK
Until the SDK is released, we will be adding information about using the SDK to the Developer Guide. Feel free to suggest additional sections for the guide by opening an issue and describing what you are trying to do.
### Getting Help
* GitHub discussions - For ideas, RFCs & general questions
* GitHub issues - For bug reports & feature requests
* Generated Docs (latest version)
* Usage examples
Crate Organization
---
The entry point for most customers will be `Client`, which exposes one method for each API offered by AWS Budgets. The return value of each of these methods is a “fluent builder”,
where the different inputs for that API are added by builder-style function call chaining,
followed by calling `send()` to get a `Future` that will result in either a successful output or a `SdkError`.
Some of these API inputs may be structs or enums to provide more complex structured information.
These structs and enums live in `types`. There are some simpler types for representing data such as date times or binary blobs that live in `primitives`.
All types required to configure a client via the `Config` struct live in `config`.
The `operation` module has a submodule for every API, and in each submodule is the input, output, and error type for that API, as well as builders to construct each of those.
There is a top-level `Error` type that encompasses all the errors that the client can return. Any other error type can be converted to this `Error` type via the
`From` trait.
The other modules within this crate are not required for normal usage.
Modules
---
* client: Client for calling AWS Budgets.
* config: Configuration for AWS Budgets.
* error: Common errors and error handling utilities.
* meta: Information about this crate.
* operation: All operations that this crate can perform.
* primitives: Primitives such as `Blob` or `DateTime` used by other types.
* types: Data structures used by operation inputs/outputs.
Structs
---
* Client: Client for AWS Budgets.
* Config: Configuration for an aws_sdk_budgets service client.
Enums
---
* Error: All possible error types for this service.
Struct aws_sdk_budgets::client::Client
===
```
pub struct Client { /* private fields */ }
```
Client for AWS Budgets
Client for invoking operations on AWS Budgets. Each operation on AWS Budgets is a method on this struct. `.send()` MUST be invoked on the generated operations to dispatch the request to the service.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_budgets::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific settings that can be set on the `Config` that are absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_budgets::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `CreateBudget` operation has a `Client::create_budget` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.create_budget()
.account_id("example")
.send()
.await;
```
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
Implementations
---
### impl Client
#### pub fn create_budget(&self) -> CreateBudgetFluentBuilder
Constructs a fluent builder for the `CreateBudget` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget.
+ `budget(Budget)` / `set_budget(Option<Budget>)`: The budget object that you want to create.
+ `notifications_with_subscribers(NotificationWithSubscribers)` / `set_notifications_with_subscribers(Option<Vec<NotificationWithSubscribers>>)`: A notification that you want to associate with a budget. A budget can have up to five notifications, and each notification can have one SNS subscriber and up to 10 email subscribers. If you include notifications and subscribers in your `CreateBudget` call, Amazon Web Services creates the notifications and subscribers for you.
* On success, responds with `CreateBudgetOutput`
* On failure, responds with `SdkError<CreateBudgetError>`
### impl Client
#### pub fn create_budget_action(&self) -> CreateBudgetActionFluentBuilder
Constructs a fluent builder for the `CreateBudgetAction` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `notification_type(NotificationType)` / `set_notification_type(Option<NotificationType>)`: The type of a notification. It must be ACTUAL or FORECASTED.
+ `action_type(ActionType)` / `set_action_type(Option<ActionType>)`: The type of action. This defines the type of tasks that can be carried out by this action. This field also determines the format for definition.
+ `action_threshold(ActionThreshold)` / `set_action_threshold(Option<ActionThreshold>)`: The trigger threshold of the action.
+ `definition(Definition)` / `set_definition(Option<Definition>)`: Specifies all of the type-specific parameters.
+ `execution_role_arn(impl Into<String>)` / `set_execution_role_arn(Option<String>)`: The role passed for action execution and reversion. Roles and actions must be in the same account.
+ `approval_model(ApprovalModel)` / `set_approval_model(Option<ApprovalModel>)`: This specifies if the action needs manual or automatic approval.
+ `subscribers(Subscriber)` / `set_subscribers(Option<Vec<Subscriber>>)`: A list of subscribers.
* On success, responds with `CreateBudgetActionOutput` with field(s):
+ `account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
* On failure, responds with `SdkError<CreateBudgetActionError>`
### impl Client
#### pub fn create_notification(&self) -> CreateNotificationFluentBuilder
Constructs a fluent builder for the `CreateNotification` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget that you want to create a notification for.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget that you want Amazon Web Services to notify you about. Budget names must be unique within an account.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification that you want to create.
+ `subscribers(Subscriber)` / `set_subscribers(Option<Vec<Subscriber>>)`: A list of subscribers that you want to associate with the notification. Each notification can have one SNS subscriber and up to 10 email subscribers.
* On success, responds with `CreateNotificationOutput`
* On failure, responds with `SdkError<CreateNotificationError>`
### impl Client
#### pub fn create_subscriber(&self) -> CreateSubscriberFluentBuilder
Constructs a fluent builder for the `CreateSubscriber` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget that you want to create a subscriber for.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget that you want to subscribe to. Budget names must be unique within an account.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification that you want to create a subscriber for.
+ `subscriber(Subscriber)` / `set_subscriber(Option<Subscriber>)`: The subscriber that you want to associate with a budget notification.
* On success, responds with `CreateSubscriberOutput`
* On failure, responds with `SdkError<CreateSubscriberError>`
### impl Client
#### pub fn delete_budget(&self) -> DeleteBudgetFluentBuilder
Constructs a fluent builder for the `DeleteBudget` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget that you want to delete.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget that you want to delete.
* On success, responds with `DeleteBudgetOutput`
* On failure, responds with `SdkError<DeleteBudgetError>`
### impl Client
#### pub fn delete_budget_action(&self) -> DeleteBudgetActionFluentBuilder
Constructs a fluent builder for the `DeleteBudgetAction` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(impl Into<String>)` / `set_action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
* On success, responds with `DeleteBudgetActionOutput` with field(s):
+ `account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action(Option<Action>)`: A budget action resource.
* On failure, responds with `SdkError<DeleteBudgetActionError>`
### impl Client
#### pub fn delete_notification(&self) -> DeleteNotificationFluentBuilder
Constructs a fluent builder for the `DeleteNotification` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose notification you want to delete.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose notification you want to delete.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification that you want to delete.
* On success, responds with `DeleteNotificationOutput`
* On failure, responds with `SdkError<DeleteNotificationError>`
### impl Client
#### pub fn delete_subscriber(&self) -> DeleteSubscriberFluentBuilder
Constructs a fluent builder for the `DeleteSubscriber` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose subscriber you want to delete.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose subscriber you want to delete.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification whose subscriber you want to delete.
+ `subscriber(Subscriber)` / `set_subscriber(Option<Subscriber>)`: The subscriber that you want to delete.
* On success, responds with `DeleteSubscriberOutput`
* On failure, responds with `SdkError<DeleteSubscriberError>`
### impl Client
#### pub fn describe_budget(&self) -> DescribeBudgetFluentBuilder
Constructs a fluent builder for the `DescribeBudget` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget that you want a description of.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget that you want a description of.
* On success, responds with `DescribeBudgetOutput` with field(s):
+ `budget(Option<Budget>)`: The description of the budget.
* On failure, responds with `SdkError<DescribeBudgetError>`
### impl Client
#### pub fn describe_budget_action(&self) -> DescribeBudgetActionFluentBuilder
Constructs a fluent builder for the `DescribeBudgetAction` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(impl Into<String>)` / `set_action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
* On success, responds with `DescribeBudgetActionOutput` with field(s):
+ `account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action(Option<Action>)`: A budget action resource.
* On failure, responds with `SdkError<DescribeBudgetActionError>`
### impl Client
#### pub fn describe_budget_action_histories(
&self
) -> DescribeBudgetActionHistoriesFluentBuilder
Constructs a fluent builder for the `DescribeBudgetActionHistories` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(impl Into<String>)` / `set_action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
+ `time_period(TimePeriod)` / `set_time_period(Option<TimePeriod>)`: The period of time that’s covered by a budget. The period has a start date and an end date. The start date must come before the end date. There are no restrictions on the end date.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many entries a paginated response contains. The maximum is 100.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A generic string.
* On success, responds with `DescribeBudgetActionHistoriesOutput` with field(s):
+ `action_histories(Option<Vec<ActionHistory>>)`: The historical record of the budget action resource.
+ `next_token(Option<String>)`: A generic string.
* On failure, responds with `SdkError<DescribeBudgetActionHistoriesError>`
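The `into_paginator()` flow mentioned above can be sketched as follows; `next()` on the returned pagination stream yields one `Result` per page, so `next_token` never has to be threaded by hand. All identifiers are placeholders:

```rust
// Hedged sketch: walk every page of action history via the paginator.
// Requires valid AWS credentials at runtime; identifiers are placeholders.
async fn list_history(client: &aws_sdk_budgets::Client) -> Result<(), aws_sdk_budgets::Error> {
    let mut pages = client
        .describe_budget_action_histories()
        .account_id("123456789012")
        .budget_name("monthly-compute")
        .action_id("00000000-0000-0000-0000-000000000000")
        .into_paginator()
        .send();
    while let Some(page) = pages.next().await {
        let page = page?; // each page is a DescribeBudgetActionHistoriesOutput
        println!("histories: {:?}", page.action_histories());
    }
    Ok(())
}
```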
### impl Client
#### pub fn describe_budget_actions_for_account(
&self
) -> DescribeBudgetActionsForAccountFluentBuilder
Constructs a fluent builder for the `DescribeBudgetActionsForAccount` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many entries a paginated response contains. The maximum is 100.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A generic string.
* On success, responds with `DescribeBudgetActionsForAccountOutput` with field(s):
+ `actions(Option<Vec<Action>>)`: A list of the budget action resources information.
+ `next_token(Option<String>)`: A generic string.
* On failure, responds with `SdkError<DescribeBudgetActionsForAccountError>`
### impl Client
#### pub fn describe_budget_actions_for_budget(
&self
) -> DescribeBudgetActionsForBudgetFluentBuilder
Constructs a fluent builder for the `DescribeBudgetActionsForBudget` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many entries a paginated response contains. The maximum is 100.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A generic string.
* On success, responds with `DescribeBudgetActionsForBudgetOutput` with field(s):
+ `actions(Option<Vec<Action>>)`: A list of the budget action resources information.
+ `next_token(Option<String>)`: A generic string.
* On failure, responds with `SdkError<DescribeBudgetActionsForBudgetError>`
### impl Client
#### pub fn describe_budget_notifications_for_account(
&self
) -> DescribeBudgetNotificationsForAccountFluentBuilder
Constructs a fluent builder for the `DescribeBudgetNotificationsForAccount` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many budgets a paginated response contains. The default is 50.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A generic string.
* On success, responds with `DescribeBudgetNotificationsForAccountOutput` with field(s):
+ `budget_notifications_for_account(Option<Vec<BudgetNotificationsForAccount>>)`: A list of budget names and associated notifications for an account.
+ `next_token(Option<String>)`: A generic string.
* On failure, responds with `SdkError<DescribeBudgetNotificationsForAccountError>`
### impl Client
#### pub fn describe_budget_performance_history(
&self
) -> DescribeBudgetPerformanceHistoryFluentBuilder
Constructs a fluent builder for the `DescribeBudgetPerformanceHistory` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `time_period(TimePeriod)` / `set_time_period(Option<TimePeriod>)`: Retrieves how often the budget went into an `ALARM` state for the specified time period.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many entries a paginated response contains. The maximum is 100.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A generic string.
* On success, responds with `DescribeBudgetPerformanceHistoryOutput` with field(s):
+ `budget_performance_history(Option<BudgetPerformanceHistory>)`: The history of how often the budget has gone into an `ALARM` state.
For `DAILY` budgets, the history saves the state of the budget for the last 60 days. For `MONTHLY` budgets, the history saves the state of the budget for the current month plus the last 12 months. For `QUARTERLY` budgets, the history saves the state of the budget for the last four quarters.
+ `next_token(Option<String>)`: A generic string.
* On failure, responds with `SdkError<DescribeBudgetPerformanceHistoryError>`
### impl Client
#### pub fn describe_budgets(&self) -> DescribeBudgetsFluentBuilder
Constructs a fluent builder for the `DescribeBudgets` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budgets that you want to describe.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many budgets a paginated response contains. The default is 100.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The pagination token that you include in your request to indicate the next set of results that you want to retrieve.
* On success, responds with `DescribeBudgetsOutput` with field(s):
+ `budgets(Option<Vec<Budget>>)`: A list of budgets.
+ `next_token(Option<String>)`: The pagination token in the service response that indicates the next set of results that you can retrieve.
* On failure, responds with `SdkError<DescribeBudgetsError>`
### impl Client
#### pub fn describe_notifications_for_budget(
&self
) -> DescribeNotificationsForBudgetFluentBuilder
Constructs a fluent builder for the `DescribeNotificationsForBudget` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose notifications you want descriptions of.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose notifications you want descriptions of.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An optional integer that represents how many entries a paginated response contains.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The pagination token that you include in your request to indicate the next set of results that you want to retrieve.
* On success, responds with `DescribeNotificationsForBudgetOutput` with field(s):
+ `notifications(Option<Vec<Notification>>)`: A list of notifications that are associated with a budget.
+ `next_token(Option<String>)`: The pagination token in the service response that indicates the next set of results that you can retrieve.
* On failure, responds with `SdkError<DescribeNotificationsForBudgetError>`
### impl Client
#### pub fn describe_subscribers_for_notification(
&self
) -> DescribeSubscribersForNotificationFluentBuilder
Constructs a fluent builder for the `DescribeSubscribersForNotification` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose subscribers you want descriptions of.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose subscribers you want descriptions of.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification whose subscribers you want to list.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An optional integer that represents how many entries a paginated response contains.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The pagination token that you include in your request to indicate the next set of results that you want to retrieve.
* On success, responds with `DescribeSubscribersForNotificationOutput` with field(s):
+ `subscribers(Option<Vec<Subscriber>>)`: A list of subscribers that are associated with a notification.
+ `next_token(Option<String>)`: The pagination token in the service response that indicates the next set of results that you can retrieve.
* On failure, responds with `SdkError<DescribeSubscribersForNotificationError>`
### impl Client
#### pub fn execute_budget_action(&self) -> ExecuteBudgetActionFluentBuilder
Constructs a fluent builder for the `ExecuteBudgetAction` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(impl Into<String>)` / `set_action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
+ `execution_type(ExecutionType)` / `set_execution_type(Option<ExecutionType>)`: The type of execution.
* On success, responds with `ExecuteBudgetActionOutput` with field(s):
+ `account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
+ `execution_type(Option<ExecutionType>)`: The type of execution.
* On failure, responds with `SdkError<ExecuteBudgetActionError>`
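A hedged sketch of this operation, assuming the `ExecutionType::ApproveBudgetAction` variant; all identifiers are placeholders:

```rust
// Hedged sketch: approve a pending budget action.
// Requires valid AWS credentials at runtime; identifiers are placeholders.
use aws_sdk_budgets::types::ExecutionType;

async fn approve_action(client: &aws_sdk_budgets::Client) -> Result<(), aws_sdk_budgets::Error> {
    let out = client
        .execute_budget_action()
        .account_id("123456789012")
        .budget_name("monthly-compute")
        .action_id("00000000-0000-0000-0000-000000000000")
        .execution_type(ExecutionType::ApproveBudgetAction)
        .send()
        .await?;
    println!("executed: {:?}", out.execution_type());
    Ok(())
}
```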
### impl Client
#### pub fn update_budget(&self) -> UpdateBudgetFluentBuilder
Constructs a fluent builder for the `UpdateBudget` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget that you want to update.
+ `new_budget(Budget)` / `set_new_budget(Option<Budget>)`: The budget that you want to update your budget to.
* On success, responds with `UpdateBudgetOutput`
* On failure, responds with `SdkError<UpdateBudgetError>`
### impl Client
#### pub fn update_budget_action(&self) -> UpdateBudgetActionFluentBuilder
Constructs a fluent builder for the `UpdateBudgetAction` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(impl Into<String>)` / `set_action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
+ `notification_type(NotificationType)` / `set_notification_type(Option<NotificationType>)`: The type of a notification. It must be ACTUAL or FORECASTED.
+ `action_threshold(ActionThreshold)` / `set_action_threshold(Option<ActionThreshold>)`: The trigger threshold of the action.
+ `definition(Definition)` / `set_definition(Option<Definition>)`: Specifies all of the type-specific parameters.
+ `execution_role_arn(impl Into<String>)` / `set_execution_role_arn(Option<String>)`: The role passed for action execution and reversion. Roles and actions must be in the same account.
+ `approval_model(ApprovalModel)` / `set_approval_model(Option<ApprovalModel>)`: This specifies if the action needs manual or automatic approval.
+ `subscribers(Subscriber)` / `set_subscribers(Option<Vec<Subscriber>>)`: A list of subscribers.
* On success, responds with `UpdateBudgetActionOutput` with field(s):
+ `account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `old_action(Option<Action>)`: The previous action resource information.
+ `new_action(Option<Action>)`: The updated action resource information.
* On failure, responds with `SdkError<UpdateBudgetActionError>`
### impl Client
#### pub fn update_notification(&self) -> UpdateNotificationFluentBuilder
Constructs a fluent builder for the `UpdateNotification` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose notification you want to update.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose notification you want to update.
+ `old_notification(Notification)` / `set_old_notification(Option<Notification>)`: The previous notification that is associated with a budget.
+ `new_notification(Notification)` / `set_new_notification(Option<Notification>)`: The updated notification to be associated with a budget.
* On success, responds with `UpdateNotificationOutput`
* On failure, responds with `SdkError<UpdateNotificationError>`
### impl Client
#### pub fn update_subscriber(&self) -> UpdateSubscriberFluentBuilder
Constructs a fluent builder for the `UpdateSubscriber` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose subscriber you want to update.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose subscriber you want to update.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification whose subscriber you want to update.
+ `old_subscriber(Subscriber)` / `set_old_subscriber(Option<Subscriber>)`: The previous subscriber that is associated with a budget notification.
+ `new_subscriber(Subscriber)` / `set_new_subscriber(Option<Subscriber>)`: The updated subscriber that is associated with a budget notification.
* On success, responds with `UpdateSubscriberOutput`
* On failure, responds with `SdkError<UpdateSubscriberError>`
### impl Client
#### pub fn from_conf(conf: Config) -> Self
Creates a new client from the service `Config`.
##### Panics
This method will panic if the `conf` has retry or timeouts enabled without a `sleep_impl`.
If you experience this panic, it can be fixed by setting the `sleep_impl`, or by disabling retries and timeouts.
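A sketch of this constructor using the companion `aws-config` crate: resolve a shared `SdkConfig`, derive the service-specific `Config` from it, then build the client with `from_conf`. The resolved config supplies an async sleep implementation, which avoids the panic described above.

```rust
// Hedged sketch: build a Client via from_conf, assuming the aws-config crate.
async fn make_client() -> aws_sdk_budgets::Client {
    // Resolves region, credentials, and sleep_impl from the environment.
    let sdk_config = aws_config::load_from_env().await;
    let conf = aws_sdk_budgets::config::Builder::from(&sdk_config).build();
    aws_sdk_budgets::Client::from_conf(conf)
}
```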
#### pub fn config(&self) -> &Config
Returns the client’s configuration.
### impl Client
#### pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
##### Panics
* This method will panic if the `sdk_config` is missing an async sleep implementation. If you experience this panic, set the `sleep_impl` on the Config passed into this function to fix it.
* This method will panic if the `sdk_config` is missing an HTTP connector. If you experience this panic, set the `http_connector` on the Config passed into this function to fix it.
Trait Implementations
---
### impl Clone for Client
#### fn clone(&self) -> Client
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Client
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Client
### impl Send for Client
### impl Sync for Client
### impl Unpin for Client
### impl !UnwindSafe for Client
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct aws_sdk_budgets::Client
===
```
pub struct Client { /* private fields */ }
```
Client for AWS Budgets
Client for invoking operations on AWS Budgets. Each operation on AWS Budgets is a method on this this struct. `.send()` MUST be invoked on the generated operations to dispatch the request to the service.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_budgets::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific that can be set on the `Config` that is absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_budgets::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `CreateBudget` operation has a `Client::create_budget`, function which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.create_budget()
.account_id("example")
.send()
.await;
```
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
Implementations
---
### impl Client
#### pub fn create_budget(&self) -> CreateBudgetFluentBuilder
Constructs a fluent builder for the `CreateBudget` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget.
+ `budget(Budget)` / `set_budget(Option<Budget>)`: The budget object that you want to create.
+ `notifications_with_subscribers(NotificationWithSubscribers)` / `set_notifications_with_subscribers(Option<Vec<NotificationWithSubscribers>>)`: A notification that you want to associate with a budget. A budget can have up to five notifications, and each notification can have one SNS subscriber and up to 10 email subscribers. If you include notifications and subscribers in your `CreateBudget` call, Amazon Web Services creates the notifications and subscribers for you.
* On success, responds with `CreateBudgetOutput`
* On failure, responds with `SdkError<CreateBudgetError>`
### impl Client
#### pub fn create_budget_action(&self) -> CreateBudgetActionFluentBuilder
Constructs a fluent builder for the `CreateBudgetAction` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “" characters, and the “/action/” substring, aren’t allowed.
+ `notification_type(NotificationType)` / `set_notification_type(Option<NotificationType>)`: The type of a notification. It must be ACTUAL or FORECASTED.
+ `action_type(ActionType)` / `set_action_type(Option<ActionType>)`: The type of action. This defines the type of tasks that can be carried out by this action. This field also determines the format for definition.
+ `action_threshold(ActionThreshold)` / `set_action_threshold(Option<ActionThreshold>)`: The trigger threshold of the action.
+ `definition(Definition)` / `set_definition(Option<Definition>)`: Specifies all of the type-specific parameters.
+ `execution_role_arn(impl Into<String>)` / `set_execution_role_arn(Option<String>)`: The role passed for action execution and reversion. Roles and actions must be in the same account.
+ `approval_model(ApprovalModel)` / `set_approval_model(Option<ApprovalModel>)`: This specifies if the action needs manual or automatic approval.
+ `subscribers(Subscriber)` / `set_subscribers(Option<Vec<Subscriber>>)`: A list of subscribers.
* On success, responds with `CreateBudgetActionOutput` with field(s):
+ `account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(Option<String>)`: A string that represents the budget name. The “:” and “" characters, and the “/action/” substring, aren’t allowed.
+ `action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
* On failure, responds with `SdkError<CreateBudgetActionError>`
### impl Client
#### pub fn create_notification(&self) -> CreateNotificationFluentBuilder
Constructs a fluent builder for the `CreateNotification` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget that you want to create a notification for.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget that you want Amazon Web Services to notify you about. Budget names must be unique within an account.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification that you want to create.
+ `subscribers(Subscriber)` / `set_subscribers(Option<Vec<Subscriber>>)`: A list of subscribers that you want to associate with the notification. Each notification can have one SNS subscriber and up to 10 email subscribers.
* On success, responds with `CreateNotificationOutput`
* On failure, responds with `SdkError<CreateNotificationError>`
### impl Client
#### pub fn create_subscriber(&self) -> CreateSubscriberFluentBuilder
Constructs a fluent builder for the `CreateSubscriber` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget that you want to create a subscriber for.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget that you want to subscribe to. Budget names must be unique within an account.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification that you want to create a subscriber for.
+ `subscriber(Subscriber)` / `set_subscriber(Option<Subscriber>)`: The subscriber that you want to associate with a budget notification.
* On success, responds with `CreateSubscriberOutput`
* On failure, responds with `SdkError<CreateSubscriberError>`
### impl Client
#### pub fn delete_budget(&self) -> DeleteBudgetFluentBuilder
Constructs a fluent builder for the `DeleteBudget` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget that you want to delete.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget that you want to delete.
* On success, responds with `DeleteBudgetOutput`
* On failure, responds with `SdkError<DeleteBudgetError>`
### impl Client
#### pub fn delete_budget_action(&self) -> DeleteBudgetActionFluentBuilder
Constructs a fluent builder for the `DeleteBudgetAction` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “" characters, and the “/action/” substring, aren’t allowed.
+ `action_id(impl Into<String>)` / `set_action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
* On success, responds with `DeleteBudgetActionOutput` with field(s):
+ `account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(Option<String>)`: A string that represents the budget name. The “:” and “" characters, and the “/action/” substring, aren’t allowed.
+ `action(Option<Action>)`: A budget action resource.
* On failure, responds with `SdkError<DeleteBudgetActionError>`
### impl Client
#### pub fn delete_notification(&self) -> DeleteNotificationFluentBuilder
Constructs a fluent builder for the `DeleteNotification` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose notification you want to delete.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose notification you want to delete.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification that you want to delete.
* On success, responds with `DeleteNotificationOutput`
* On failure, responds with `SdkError<DeleteNotificationError>`
### impl Client
#### pub fn delete_subscriber(&self) -> DeleteSubscriberFluentBuilder
Constructs a fluent builder for the `DeleteSubscriber` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose subscriber you want to delete.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose subscriber you want to delete.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification whose subscriber you want to delete.
+ `subscriber(Subscriber)` / `set_subscriber(Option<Subscriber>)`: The subscriber that you want to delete.
* On success, responds with `DeleteSubscriberOutput`
* On failure, responds with `SdkError<DeleteSubscriberError>`
### impl Client
#### pub fn describe_budget(&self) -> DescribeBudgetFluentBuilder
Constructs a fluent builder for the `DescribeBudget` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget that you want a description of.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget that you want a description of.
* On success, responds with `DescribeBudgetOutput` with field(s):
+ `budget(Option<Budget>)`: The description of the budget.
* On failure, responds with `SdkError<DescribeBudgetError>`
### impl Client
#### pub fn describe_budget_action(&self) -> DescribeBudgetActionFluentBuilder
Constructs a fluent builder for the `DescribeBudgetAction` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “" characters, and the “/action/” substring, aren’t allowed.
+ `action_id(impl Into<String>)` / `set_action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
* On success, responds with `DescribeBudgetActionOutput` with field(s):
+ `account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(Option<String>)`: A string that represents the budget name. The “:” and “" characters, and the “/action/” substring, aren’t allowed.
+ `action(Option<Action>)`: A budget action resource.
* On failure, responds with `SdkError<DescribeBudgetActionError>`
### impl Client
#### pub fn describe_budget_action_histories(&self) -> DescribeBudgetActionHistoriesFluentBuilder
Constructs a fluent builder for the `DescribeBudgetActionHistories` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(impl Into<String>)` / `set_action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
+ `time_period(TimePeriod)` / `set_time_period(Option<TimePeriod>)`: The period of time that’s covered by a budget. The period has a start date and an end date. The start date must come before the end date. There are no restrictions on the end date.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many entries a paginated response contains. The maximum is 100.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A generic string.
* On success, responds with `DescribeBudgetActionHistoriesOutput` with field(s):
+ `action_histories(Option<Vec<ActionHistory>>)`: The historical record of the budget action resource.
+ `next_token(Option<String>)`: A generic string.
* On failure, responds with `SdkError<DescribeBudgetActionHistoriesError>`
### impl Client
#### pub fn describe_budget_actions_for_account(&self) -> DescribeBudgetActionsForAccountFluentBuilder
Constructs a fluent builder for the `DescribeBudgetActionsForAccount` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many entries a paginated response contains. The maximum is 100.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A generic string.
* On success, responds with `DescribeBudgetActionsForAccountOutput` with field(s):
+ `actions(Option<Vec<Action>>)`: A list of the budget action resources information.
+ `next_token(Option<String>)`: A generic string.
* On failure, responds with `SdkError<DescribeBudgetActionsForAccountError>`
### impl Client
#### pub fn describe_budget_actions_for_budget(&self) -> DescribeBudgetActionsForBudgetFluentBuilder
Constructs a fluent builder for the `DescribeBudgetActionsForBudget` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many entries a paginated response contains. The maximum is 100.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A generic string.
* On success, responds with `DescribeBudgetActionsForBudgetOutput` with field(s):
+ `actions(Option<Vec<Action>>)`: A list of the budget action resources information.
+ `next_token(Option<String>)`: A generic string.
* On failure, responds with `SdkError<DescribeBudgetActionsForBudgetError>`
### impl Client
#### pub fn describe_budget_notifications_for_account(&self) -> DescribeBudgetNotificationsForAccountFluentBuilder
Constructs a fluent builder for the `DescribeBudgetNotificationsForAccount` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many budgets a paginated response contains. The default is 50.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A generic string.
* On success, responds with `DescribeBudgetNotificationsForAccountOutput` with field(s):
+ `budget_notifications_for_account(Option<Vec<BudgetNotificationsForAccount>>)`: A list of budget names and associated notifications for an account.
+ `next_token(Option<String>)`: A generic string.
* On failure, responds with `SdkError<DescribeBudgetNotificationsForAccountError>`
### impl Client
#### pub fn describe_budget_performance_history(&self) -> DescribeBudgetPerformanceHistoryFluentBuilder
Constructs a fluent builder for the `DescribeBudgetPerformanceHistory` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `time_period(TimePeriod)` / `set_time_period(Option<TimePeriod>)`: Retrieves how often the budget went into an `ALARM` state for the specified time period.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many entries a paginated response contains. The maximum is 100.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: A generic string.
* On success, responds with `DescribeBudgetPerformanceHistoryOutput` with field(s):
+ `budget_performance_history(Option<BudgetPerformanceHistory>)`: The history of how often the budget has gone into an `ALARM` state.
For `DAILY` budgets, the history saves the state of the budget for the last 60 days. For `MONTHLY` budgets, the history saves the state of the budget for the current month plus the last 12 months. For `QUARTERLY` budgets, the history saves the state of the budget for the last four quarters.
+ `next_token(Option<String>)`: A generic string.
* On failure, responds with `SdkError<DescribeBudgetPerformanceHistoryError>`
### impl Client
#### pub fn describe_budgets(&self) -> DescribeBudgetsFluentBuilder
Constructs a fluent builder for the `DescribeBudgets` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budgets that you want to describe.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An integer that represents how many budgets a paginated response contains. The default is 100.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The pagination token that you include in your request to indicate the next set of results that you want to retrieve.
* On success, responds with `DescribeBudgetsOutput` with field(s):
+ `budgets(Option<Vec<Budget>>)`: A list of budgets.
+ `next_token(Option<String>)`: The pagination token in the service response that indicates the next set of results that you can retrieve.
* On failure, responds with `SdkError<DescribeBudgetsError>`
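The pagination support mentioned above can be sketched as follows; this assumes a configured `client`, a hypothetical account ID, and the `tokio-stream` crate for `StreamExt::next` (the exact stream API varies by SDK version):

```
// Page through all budgets in an account via `into_paginator()`.
use tokio_stream::StreamExt;

let mut pages = client
    .describe_budgets()
    .account_id("123456789012") // hypothetical account ID
    .max_results(50)
    .into_paginator()
    .send();
while let Some(page) = pages.next().await {
    for budget in page?.budgets().unwrap_or_default() {
        println!("{:?}", budget.budget_name());
    }
}
```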
### impl Client
#### pub fn describe_notifications_for_budget(&self) -> DescribeNotificationsForBudgetFluentBuilder
Constructs a fluent builder for the `DescribeNotificationsForBudget` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose notifications you want descriptions of.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose notifications you want descriptions of.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An optional integer that represents how many entries a paginated response contains.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The pagination token that you include in your request to indicate the next set of results that you want to retrieve.
* On success, responds with `DescribeNotificationsForBudgetOutput` with field(s):
+ `notifications(Option<Vec<Notification>>)`: A list of notifications that are associated with a budget.
+ `next_token(Option<String>)`: The pagination token in the service response that indicates the next set of results that you can retrieve.
* On failure, responds with `SdkError<DescribeNotificationsForBudgetError>`
### impl Client
#### pub fn describe_subscribers_for_notification(&self) -> DescribeSubscribersForNotificationFluentBuilder
Constructs a fluent builder for the `DescribeSubscribersForNotification` operation.
This operation supports pagination; see `into_paginator()`.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose subscribers you want descriptions of.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose subscribers you want descriptions of.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification whose subscribers you want to list.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: An optional integer that represents how many entries a paginated response contains.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The pagination token that you include in your request to indicate the next set of results that you want to retrieve.
* On success, responds with `DescribeSubscribersForNotificationOutput` with field(s):
+ `subscribers(Option<Vec<Subscriber>>)`: A list of subscribers that are associated with a notification.
+ `next_token(Option<String>)`: The pagination token in the service response that indicates the next set of results that you can retrieve.
* On failure, responds with `SdkError<DescribeSubscribersForNotificationError>`
### impl Client
#### pub fn execute_budget_action(&self) -> ExecuteBudgetActionFluentBuilder
Constructs a fluent builder for the `ExecuteBudgetAction` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(impl Into<String>)` / `set_action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
+ `execution_type(ExecutionType)` / `set_execution_type(Option<ExecutionType>)`: The type of execution.
* On success, responds with `ExecuteBudgetActionOutput` with field(s):
+ `account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
+ `execution_type(Option<ExecutionType>)`: The type of execution.
* On failure, responds with `SdkError<ExecuteBudgetActionError>`
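A minimal sketch of approving a pending budget action, assuming a configured `client`; the account ID, budget name, and action UUID below are placeholders:

```
// Approve a budget action that is awaiting manual approval.
use aws_sdk_budgets::types::ExecutionType;

let out = client
    .execute_budget_action()
    .account_id("123456789012")
    .budget_name("monthly-cost")
    .action_id("11111111-2222-3333-4444-555555555555")
    .execution_type(ExecutionType::ApproveBudgetAction)
    .send()
    .await?;
println!("executed: {:?}", out.execution_type());
```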
### impl Client
#### pub fn update_budget(&self) -> UpdateBudgetFluentBuilder
Constructs a fluent builder for the `UpdateBudget` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget that you want to update.
+ `new_budget(Budget)` / `set_new_budget(Option<Budget>)`: The budget that you want to update your budget to.
* On success, responds with `UpdateBudgetOutput`
* On failure, responds with `SdkError<UpdateBudgetError>`
### impl Client
#### pub fn update_budget_action(&self) -> UpdateBudgetActionFluentBuilder
Constructs a fluent builder for the `UpdateBudgetAction` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `action_id(impl Into<String>)` / `set_action_id(Option<String>)`: A system-generated universally unique identifier (UUID) for the action.
+ `notification_type(NotificationType)` / `set_notification_type(Option<NotificationType>)`: The type of a notification. It must be ACTUAL or FORECASTED.
+ `action_threshold(ActionThreshold)` / `set_action_threshold(Option<ActionThreshold>)`: The trigger threshold of the action.
+ `definition(Definition)` / `set_definition(Option<Definition>)`: Specifies all of the type-specific parameters.
+ `execution_role_arn(impl Into<String>)` / `set_execution_role_arn(Option<String>)`: The role passed for action execution and reversion. Roles and actions must be in the same account.
+ `approval_model(ApprovalModel)` / `set_approval_model(Option<ApprovalModel>)`: This specifies if the action needs manual or automatic approval.
+ `subscribers(Subscriber)` / `set_subscribers(Option<Vec<Subscriber>>)`: A list of subscribers.
* On success, responds with `UpdateBudgetActionOutput` with field(s):
+ `account_id(Option<String>)`: The account ID of the user. It’s a 12-digit number.
+ `budget_name(Option<String>)`: A string that represents the budget name. The “:” and “\” characters, and the “/action/” substring, aren’t allowed.
+ `old_action(Option<Action>)`: The previous action resource information.
+ `new_action(Option<Action>)`: The updated action resource information.
* On failure, responds with `SdkError<UpdateBudgetActionError>`
### impl Client
#### pub fn update_notification(&self) -> UpdateNotificationFluentBuilder
Constructs a fluent builder for the `UpdateNotification` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose notification you want to update.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose notification you want to update.
+ `old_notification(Notification)` / `set_old_notification(Option<Notification>)`: The previous notification that is associated with a budget.
+ `new_notification(Notification)` / `set_new_notification(Option<Notification>)`: The updated notification to be associated with a budget.
* On success, responds with `UpdateNotificationOutput`
* On failure, responds with `SdkError<UpdateNotificationError>`
### impl Client
#### pub fn update_subscriber(&self) -> UpdateSubscriberFluentBuilder
Constructs a fluent builder for the `UpdateSubscriber` operation.
* The fluent builder is configurable:
+ `account_id(impl Into<String>)` / `set_account_id(Option<String>)`: The `accountId` that is associated with the budget whose subscriber you want to update.
+ `budget_name(impl Into<String>)` / `set_budget_name(Option<String>)`: The name of the budget whose subscriber you want to update.
+ `notification(Notification)` / `set_notification(Option<Notification>)`: The notification whose subscriber you want to update.
+ `old_subscriber(Subscriber)` / `set_old_subscriber(Option<Subscriber>)`: The previous subscriber that is associated with a budget notification.
+ `new_subscriber(Subscriber)` / `set_new_subscriber(Option<Subscriber>)`: The updated subscriber that is associated with a budget notification.
* On success, responds with `UpdateSubscriberOutput`
* On failure, responds with `SdkError<UpdateSubscriberError>`
### impl Client
#### pub fn from_conf(conf: Config) -> Self
Creates a new client from the service `Config`.
##### Panics
This method will panic if the `conf` has retry or timeouts enabled without a `sleep_impl`.
If you experience this panic, it can be fixed by setting the `sleep_impl`, or by disabling retries and timeouts.
#### pub fn config(&self) -> &Config
Returns the client’s configuration.
### impl Client
#### pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
##### Panics
* This method will panic if the `sdk_config` is missing an async sleep implementation. If you experience this panic, set the `sleep_impl` on the Config passed into this function to fix it.
* This method will panic if the `sdk_config` is missing an HTTP connector. If you experience this panic, set the `http_connector` on the Config passed into this function to fix it.
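A typical construction path, sketched under the assumption that the `aws-config` and `tokio` crates are available; region and credentials come from the usual environment/profile chain:

```
// Build a client from the shared environment configuration.
#[tokio::main]
async fn main() {
    let sdk_config = aws_config::from_env().load().await;
    let client = aws_sdk_budgets::Client::new(&sdk_config);
    // The client is cheap to clone and can be shared across tasks.
    let _ = client;
}
```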
Trait Implementations
---
### impl Clone for Client
#### fn clone(&self) -> Client
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Client
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Client
### impl Send for Client
### impl Sync for Client
### impl Unpin for Client
### impl !UnwindSafe for Client
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Type Alias aws_sdk_budgets::error::SdkError
===
```
pub type SdkError<E, R = HttpResponse> = SdkError<E, R>;
```
Error type returned by the client.
Aliased Type
---
```
enum SdkError<E, R = HttpResponse> {
ConstructionFailure(ConstructionFailure),
TimeoutError(TimeoutError),
DispatchFailure(DispatchFailure),
ResponseError(ResponseError<R>),
ServiceError(ServiceError<E, R>),
}
```
Variants
---
### ConstructionFailure(ConstructionFailure)
The request failed during construction. It was not dispatched over the network.
### TimeoutError(TimeoutError)
The request failed due to a timeout. The request MAY have been sent and received.
### DispatchFailure(DispatchFailure)
The request failed during dispatch. An HTTP response was not received. The request MAY have been sent.
### ResponseError(ResponseError<R>)
A response was received, but it was not parseable according to the protocol (for example, the server hung up without sending a complete response).
### ServiceError(ServiceError<E, R>)
An error response was received from the service.
Trait Implementations
---
### impl<E, R> ProvideErrorMetadata for SdkError<E, R> where E: ProvideErrorMetadata
#### fn meta(&self) -> &ErrorMetadata
Returns error metadata, which includes the error code, message, request ID, and potentially additional information.
#### fn code(&self) -> Option<&str>
Returns the error code if it’s available.
#### fn message(&self) -> Option<&str>
Returns the error message, if there is one.
### impl<E, R> RequestId for SdkError<E, R> where R: HttpHeaders
#### fn request_id(&self) -> Option<&str>
Returns the request ID, or `None` if the service could not be reached.
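The variants above can be matched to separate modeled service errors from transport failures; a sketch, assuming `err` is an `SdkError<DescribeBudgetError>` returned by a failed call:

```
// Distinguish service-side errors from transport problems.
use aws_sdk_budgets::error::SdkError;

match err {
    SdkError::ServiceError(context) => {
        // The service replied with a modeled error; inspect it.
        eprintln!("service error: {:?}", context.err());
    }
    SdkError::TimeoutError(_) | SdkError::DispatchFailure(_) => {
        eprintln!("transport problem; the request may be retryable");
    }
    other => eprintln!("other failure: {:?}", other),
}
```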
Module aws_sdk_budgets::types
===
Data structures used by operation inputs/outputs.
Modules
---
* `builders`: Builders
* `error`: Error types that AWS Budgets can respond with.
Structs
---
* `Action`: A budget action resource.
* `ActionHistory`: The historical records for a budget action.
* `ActionHistoryDetails`: The description of the details for the event.
* `ActionThreshold`: The trigger threshold of the action.
* `AutoAdjustData`: The parameters that determine the budget amount for an auto-adjusting budget.
* `Budget`: Represents the output of the `CreateBudget` operation. The content consists of the detailed metadata and data file information, and the current status of the `budget` object.
* `BudgetNotificationsForAccount`: The budget name and associated notifications for an account.
* `BudgetPerformanceHistory`: A history of the state of a budget at the end of the budget's specified time period.
* `BudgetedAndActualAmounts`: The amount of cost or usage that you created the budget for, compared to your actual costs or usage.
* `CalculatedSpend`: The spend objects that are associated with this budget. The `actualSpend` tracks how much you've used (cost, usage, RI units, or Savings Plans units) and the `forecastedSpend` tracks how much you're predicted to spend based on your historical usage profile.
* `CostTypes`: The types of cost that are included in a `COST` budget, such as tax and subscriptions.
* `Definition`: Specifies all of the type-specific parameters.
* `HistoricalOptions`: The parameters that define or describe the historical data that your auto-adjusting budget is based on.
* `IamActionDefinition`: The Identity and Access Management (IAM) action definition details.
* `Notification`: A notification that's associated with a budget. A budget can have up to ten notifications.
* `NotificationWithSubscribers`: A notification with subscribers. A notification can have one SNS subscriber and up to 10 email subscribers, for a total of 11 subscribers.
* `ScpActionDefinition`: The service control policies (SCP) action definition details.
* `Spend`: The amount of cost or usage that's measured for a budget.
* `SsmActionDefinition`: The Amazon Web Services Systems Manager (SSM) action definition details.
* `Subscriber`: The subscriber to a budget notification. The subscriber consists of a subscription type and either an Amazon SNS topic or an email address.
* `TimePeriod`: The period of time that's covered by a budget. The period has a start date and an end date. The start date must come before the end date. There are no restrictions on the end date.
Enums
---
* `ActionStatus`, `ActionSubType`, `ActionType`, `ApprovalModel`, `AutoAdjustType`, `BudgetType`, `ComparisonOperator`, `EventType`, `ExecutionType`, `NotificationState`, `NotificationType`, `SubscriptionType`, `ThresholdType`, `TimeUnit`: When writing a match expression against any of these enums, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of the SDK, your code should continue to work when you upgrade to a future SDK version in which the enum does include a variant for that feature.
Module aws_sdk_budgets::primitives
===
Primitives such as `Blob` or `DateTime` used by other types.
Structs
---
* `DateTime`: DateTime in time.
* `UnknownVariantValue`: Opaque struct used as inner data for the `Unknown` variant defined in enums in the crate.
Enums
---
* `DateTimeFormat`: Formats for representing a `DateTime` in the Smithy protocols.
Struct aws_sdk_budgets::Config
===
```
pub struct Config { /* private fields */ }
```
Configuration for an `aws_sdk_budgets` service client.
Service configuration allows for customization of endpoints, region, credentials providers,
and retry configuration. Generally, it is constructed automatically for you from a shared configuration loaded by the `aws-config` crate. For example:
```
// Load a shared config from the environment
let shared_config = aws_config::from_env().load().await;
// The client constructor automatically converts the shared config into the service config
let client = Client::new(&shared_config);
```
The service config can also be constructed manually using its builder.
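A sketch of that manual path, assuming the `Region` re-export lives under `aws_sdk_budgets::config` (it may sit elsewhere in other SDK versions):

```
// Build a service config by hand and hand it to `from_conf`.
let config = aws_sdk_budgets::Config::builder()
    .region(aws_sdk_budgets::config::Region::new("us-east-1"))
    .build();
let client = aws_sdk_budgets::Client::from_conf(config);
```

Note that `from_conf` panics if retries or timeouts are enabled without a `sleep_impl`, as described under `Client::from_conf`.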
Implementations
---
### impl Config
#### pub fn builder() -> Builder
Constructs a config builder.
#### pub fn to_builder(&self) -> Builder
Converts this config back into a builder so that it can be tweaked.
#### pub fn http_connector(&self) -> Option<SharedHttpConnector>
Return the `SharedHttpConnector` to use when making requests, if any.
#### pub fn endpoint_resolver(&self) -> SharedEndpointResolver
Returns the endpoint resolver.
#### pub fn retry_config(&self) -> Option<&RetryConfig>
Return a reference to the retry configuration contained in this config, if any.
#### pub fn sleep_impl(&self) -> Option<SharedAsyncSleep>
Return a cloned shared async sleep implementation from this config, if any.
#### pub fn timeout_config(&self) -> Option<&TimeoutConfig>
Return a reference to the timeout configuration contained in this config, if any.
#### pub fn interceptors(&self) -> impl Iterator<Item = SharedInterceptor> + '_
Returns interceptors currently registered by the user.
#### pub fn time_source(&self) -> Option<SharedTimeSource>
Return the time source used for this service.
#### pub fn app_name(&self) -> Option<&AppName>
Returns the name of the app that is using the client, if it was provided.
This *optional* name is used to identify the application in the user agent that gets sent along with requests.
#### pub fn invocation_id_generator(&self) -> Option<SharedInvocationIdGenerator>
Returns the invocation ID generator if one was given in config.
The invocation ID generator generates ID values for the `amz-sdk-invocation-id` header. By default, this will be a random UUID. Overriding it may be useful in tests that examine the HTTP request and need to be deterministic.
#### pub fn new(config: &SdkConfig) -> Self
Creates a new service config from a shared `config`.
#### pub fn signing_service(&self) -> &'static str
The signature version 4 service signing name to use in the credential scope when signing requests.
The signing service may be overridden by the `Endpoint`, or by specifying a custom
`SigningService` during operation construction
#### pub fn region(&self) -> Option<&Region>
Returns the AWS region, if it was provided.
#### pub fn credentials_cache(&self) -> Option<SharedCredentialsCache>
Returns the credentials cache.
Trait Implementations
---
### impl Clone for Config
#### fn clone(&self) -> Config
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Config
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl From<&SdkConfig> for Config
#### fn from(sdk_config: &SdkConfig) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Config
### impl Send for Config
### impl Sync for Config
### impl Unpin for Config
### impl !UnwindSafe for Config
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module aws_sdk_budgets::config
===
Configuration for AWS Budgets.
Modules
---
* `endpoint`: Types needed to configure endpoint resolution.
* `interceptors`: Types needed to implement `Interceptor`.
* `retry`: Retry configuration.
* `timeout`: Timeout configuration.
Structs
---
* `AppName`: App name that can be configured with an AWS SDK client to become part of the user agent string.
* `Builder`: Builder for creating a `Config`.
* `Config`: Configuration for an aws_sdk_budgets service client.
* `ConfigBag`: Layered configuration structure.
* `Credentials`: AWS SDK Credentials.
* `Region`: The region to send requests to.
* `RuntimeComponents`: Components that can only be set in runtime plugins that the orchestrator uses directly to call an operation.
* `SharedAsyncSleep`: Wrapper type for sharable `AsyncSleep`.
* `SharedInterceptor`: Interceptor wrapper that may be shared.
* `Sleep`: Future returned by `AsyncSleep`.
Traits
---
* `AsyncSleep`: Async trait with a `sleep` function.
* `Interceptor`: An interceptor allows injecting code into the SDK's request execution pipeline.
Module aws_sdk_budgets::operation
===
All operations that this crate can perform.
Modules
---
* `create_budget`: Types for the `CreateBudget` operation.
* `create_budget_action`: Types for the `CreateBudgetAction` operation.
* `create_notification`: Types for the `CreateNotification` operation.
* `create_subscriber`: Types for the `CreateSubscriber` operation.
* `delete_budget`: Types for the `DeleteBudget` operation.
* `delete_budget_action`: Types for the `DeleteBudgetAction` operation.
* `delete_notification`: Types for the `DeleteNotification` operation.
* `delete_subscriber`: Types for the `DeleteSubscriber` operation.
* `describe_budget`: Types for the `DescribeBudget` operation.
* `describe_budget_action`: Types for the `DescribeBudgetAction` operation.
* `describe_budget_action_histories`: Types for the `DescribeBudgetActionHistories` operation.
* `describe_budget_actions_for_account`: Types for the `DescribeBudgetActionsForAccount` operation.
* `describe_budget_actions_for_budget`: Types for the `DescribeBudgetActionsForBudget` operation.
* `describe_budget_notifications_for_account`: Types for the `DescribeBudgetNotificationsForAccount` operation.
* `describe_budget_performance_history`: Types for the `DescribeBudgetPerformanceHistory` operation.
* `describe_budgets`: Types for the `DescribeBudgets` operation.
* `describe_notifications_for_budget`: Types for the `DescribeNotificationsForBudget` operation.
* `describe_subscribers_for_notification`: Types for the `DescribeSubscribersForNotification` operation.
* `execute_budget_action`: Types for the `ExecuteBudgetAction` operation.
* `update_budget`: Types for the `UpdateBudget` operation.
* `update_budget_action`: Types for the `UpdateBudgetAction` operation.
* `update_notification`: Types for the `UpdateNotification` operation.
* `update_subscriber`: Types for the `UpdateSubscriber` operation.
Traits
---
* `RequestId`: Implementers add a function to return an AWS request ID.
Enum aws_sdk_budgets::Error
===
```
#[non_exhaustive]
pub enum Error {
AccessDeniedException(AccessDeniedException),
CreationLimitExceededException(CreationLimitExceededException),
DuplicateRecordException(DuplicateRecordException),
ExpiredNextTokenException(ExpiredNextTokenException),
InternalErrorException(InternalErrorException),
InvalidNextTokenException(InvalidNextTokenException),
InvalidParameterException(InvalidParameterException),
NotFoundException(NotFoundException),
ResourceLockedException(ResourceLockedException),
ThrottlingException(ThrottlingException),
Unhandled(Unhandled),
}
```
All possible error types for this service.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### AccessDeniedException(AccessDeniedException)
You are not authorized to use this operation with the given parameters.
### CreationLimitExceededException(CreationLimitExceededException)
You've exceeded the notification or subscriber limit.
### DuplicateRecordException(DuplicateRecordException)
The budget name already exists. Budget names must be unique within an account.
### ExpiredNextTokenException(ExpiredNextTokenException)
The pagination token expired.
### InternalErrorException(InternalErrorException)
An error on the server occurred during the processing of your request. Try again later.
### InvalidNextTokenException(InvalidNextTokenException)
The pagination token is invalid.
### InvalidParameterException(InvalidParameterException)
An error on the client occurred. Typically, the cause is an invalid input value.
### NotFoundException(NotFoundException)
We can’t locate the resource that you specified.
### ResourceLockedException(ResourceLockedException)
The request was received and recognized by the server, but the server rejected that particular method for the requested resource.
### ThrottlingException(ThrottlingException)
The number of API requests has exceeded the maximum allowed API request throttling limit for the account.
### Unhandled(Unhandled)
An unexpected error occurred (e.g., invalid JSON returned by the service or an unknown error code).
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for Error
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, request: &mut Request<'a>)
🔬 This is a nightly-only experimental API (`error_generic_member_access`). Provides type-based access to context intended for error reports.
`Error` implements `From<E>` for each operation error type listed below, and `From<SdkError<E, R>>` for each of them as well, where `R: Send + Sync + Debug + 'static`. Each `from` implementation converts to this type from the input type.

* `CreateBudgetError`, `CreateBudgetActionError`, `CreateNotificationError`, `CreateSubscriberError`
* `DeleteBudgetError`, `DeleteBudgetActionError`, `DeleteNotificationError`, `DeleteSubscriberError`
* `DescribeBudgetError`, `DescribeBudgetActionError`, `DescribeBudgetActionHistoriesError`, `DescribeBudgetActionsForAccountError`, `DescribeBudgetActionsForBudgetError`, `DescribeBudgetNotificationsForAccountError`, `DescribeBudgetPerformanceHistoryError`, `DescribeBudgetsError`, `DescribeNotificationsForBudgetError`, `DescribeSubscribersForNotificationError`
* `ExecuteBudgetActionError`
* `UpdateBudgetError`, `UpdateBudgetActionError`, `UpdateNotificationError`, `UpdateSubscriberError`

### impl RequestId for Error
#### fn request_id(&self) -> Option<&str>
Returns the request ID, or `None` if the service could not be reached.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module aws_sdk_budgets::client
===
Client for calling AWS Budgets.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_budgets::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific configuration that can be set on the `Config` but is absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_budgets::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `CreateBudget` operation has a `Client::create_budget` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.create_budget()
.account_id("example")
.send()
.await;
```
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
Modules
---
* `customize`: Operation customization and supporting types.
Structs
---
* `Client`: Client for AWS Budgets.
Module aws_sdk_budgets::error
===
Common errors and error handling utilities.
Structs
---
* `DisplayErrorContext`: Provides a `Display` impl for an `Error` that outputs the full error context.
Traits
---
* `ProvideErrorMetadata`: Trait to retrieve error metadata from a result.
Type Aliases
---
* `BoxError`: A boxed error that is `Send` and `Sync`.
* `SdkError`: Error type returned by the client.
Module aws_sdk_budgets::meta
===
Information about this crate.
Statics
---
* `PKG_VERSION`: Crate version number.
Kataw
===
An insanely fast JavaScript toolchain.
**WIP**
Kataw is a JavaScript toolchain that aims to unify functionality that previously required separate tools. It covers everything from low-level CST manipulation to tools for linting, code analysis, transformation, and minification.
* [CST nodes](#CSTnodes)
* [CST keywords](#CSTkeywords)
* [ESNext](#ESNext)
* [Diagnostics](#Diagnostics)
+ [Diagnostic arguments](#DiagnosticArgs)
* [Types](#Types)
* [Comments](#Comments)
* [Transformation](#Transformation)
* [Printing](#CSTprinting)
+ [Ignore comment](#Ignorecomment)
* [CST parser features](#CSTparserfeatures)
* [Current state](#Currentstate)
* [Roadmap](#Roadmap)
+ [📌v0.1](#v0.1)
+ [v0.2](#v0.2)
+ [v0.3](#v0.3)
+ [v1.0](#v1.0)
* [Future](#Future)
The toolchain's core is based upon an ECMAScript-friendly CST that allows you to parse the `ECMAScript® 2022 (ECMA-262 12th Edition) language specification`.
If your only goal is to perform syntactic analysis (parsing) of a JavaScript program, you can do this with either `kataw.parseModule` or `kataw.parseScript`.
> Note that from `ES2015` onwards a JavaScript program can be either a [script or a module](https://tc39.es/ecma262/index.html#sec-ecmascript-language-scripts-and-modules).
Here is an example of how to set up `Kataw` to act like, for example, `Acorn`:
```
// Parse with module goal
kataw.parseModule('x = y', { next: true }, function(source, kind, msg, line, column) {
throw msg + '(' + line + ', ' + column + ')';
});
// Parse in script mode
kataw.parseScript('x = y', { next: true }, function(source, kind, msg, line, column) {
throw msg + '(' + line + ', ' + column + ')';
});
```
The returned CST tree can now be used as an AST.
> Note that the CST contains more information that can be extracted from the CST node's through
> [public API methods](https://github.com/kataw/kataw/tree/main/src/parser#public-api-methods-to-extract-info-from-cst-nodes).
Many of these APIs have the advantage that they allow you to retrieve info that is not otherwise available from a standard AST parser.
One example: you only need to use `kataw.isStatementNode` to find out whether the current CST node is a statement node, whereas with an AST parser you must use a `switch` statement with 60 `switch` cases.
```
// With Babel you are forced to do
switch(node.type) {
case 'SwitchStatement': ...
case 'ReturnStatement': ...
}
// With Kataw
kataw.isStatementNode(node); // return 'true'
```
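Because every `kind` is a number, a predicate like `isStatementNode` can be implemented as a cheap numeric range check instead of dozens of string comparisons. Below is a minimal sketch of the idea; the kind values and the contiguous-range layout are invented for illustration and are not Kataw's actual numbering:

```javascript
// Hypothetical kind numbering: statement kinds occupy one contiguous range.
const SyntaxKind = {
  Identifier: 1,
  NumericLiteral: 2,
  // ... expression kinds ...
  ReturnStatement: 60,
  SwitchStatement: 61,
  WhileStatement: 62,
  // ... more statement kinds ...
  LastStatement: 90,
};

// One pair of integer comparisons replaces a 60-case switch on strings.
function isStatementNode(node) {
  return node.kind >= SyntaxKind.ReturnStatement &&
         node.kind <= SyntaxKind.LastStatement;
}
```

This layout is a common trick in parsers that use numeric kinds: grouping related kinds into ranges makes category predicates branch-free and fast.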
A second benefit of this CST parser is that it runs in `recovery mode` by `default` and can be used in any editor. A built-in diagnostic system reports diagnostics if an `error handler` has been provided. The diagnostics are dynamic: they are informative and change based on the context you are parsing in.
Used together, these features give you more options to adjust, modify and customize the CST tree than a regular AST parser, while letting you write fewer lines of code and still get insane performance.
CST nodes
---
All CST nodes have a `kind`, which is a number that represents the node type. It is the counterpart of the `ESTree` `type`, except that Kataw doesn't do any string comparisons; everything in Kataw is a number.
Here is an example:
```
if (node.kind === Kataw.SyntaxKind.Identifier) {}
```
You need to use `kataw.visitEachChild` to traverse the CST tree and get access to each CST node. After that you can perform any kind of transformation.
Be aware that the `kind` itself also contains additional information that you can extract through the public API, not only the `NodeFlags`.
For example `Kataw.isKeyword`, `Kataw.isIdentifier`, and `Kataw.isFutureReserved`.
This is made possible because there are no `token`s in Kataw. Everything is a `SyntaxKind`: `token` and `kind` merged into one.
Kataw also exports all CST nodes so you can create your own nodes. This is handy if you want to try out new `ECMA` features that aren't part of the language yet, or write your own transformers, as in `Babel`.
Here is an example of how to create a CST node:
```
// creates an identifier
kataw.createIdentifier(/* text */ 'hello', /* rawText */ 'hello', /* start */ 1, /* end */ 5)
```
Some CST nodes need additional info. This can be set using `Kataw.NodeFlags`, and this bitwise mask can be set on every CST node and CST keyword node.
```
// creates an string literal
const str = kataw.createStringLiteral(
/* text */ 'hello', /* rawText */ 'hello', /* start */ 1, /* end */ 5
);
// set the flag and mark it as a single quote, e.g. 'string'
str.flag |= Kataw.NodeFlags.SingleQuote;
// Check if the flag is set
kataw.isSingleQuote(str); // true
```
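The flag check itself is plain bitwise arithmetic: each flag is a distinct power of two, so several can coexist in one integer. The following self-contained sketch shows how such a mask can work; the flag values and field name are invented for illustration, not Kataw's actual definitions:

```javascript
// Each flag is a distinct bit, so several flags fit in a single integer.
const NodeFlags = {
  None: 0,
  SingleQuote: 1 << 0,
  NoChildren: 1 << 1,
  ExpressionNode: 1 << 2,
};

// A predicate like isSingleQuote reduces to one AND and one comparison.
function isSingleQuote(node) {
  return (node.flags & NodeFlags.SingleQuote) !== 0;
}

const str = { kind: 'StringLiteral', flags: NodeFlags.None };
str.flags |= NodeFlags.SingleQuote; // set the flag without touching other bits
```

Checking a flag this way costs a single bitwise AND, which is why bitmask-based node metadata is a common performance technique in parsers.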
CST keywords
---
Every keyword in Kataw is its own CST node, and you create them in almost the same way as any other CST node.
```
kataw.createToken(kataw.SyntaxKind.ForKeyword, Kataw.NodeFlags.NoChildren, /* start */ 1, /* end */ 5);
```
Diagnostics
---
Diagnostics in Kataw can be an `error`, a `warning` or a `lint failure`.
The diagnostics have been designed this way so you can quickly understand what the problem is and correct it.
Adding an error handler as the 3rd argument enables diagnostics. The diagnostics are flexible: you can use them together with Kataw's own reporters, or create your own reporter to fit your use case.
Here is how it works:
```
import { parseScript } from 'kataw';
parseScript('[x', { next: true }, function(diagnosticSource, kind, message, start, end) {
});
```
###
Diagnostic arguments
| Param | Description |
| --- | --- |
| `diagnosticSource` | Is either `Lexer` or `Printer`. |
| `kind` | Is either `lint`, `error`, `warning` |
| `message` | The diagnostic message |
| `start` | The start position of the diagnostics |
| `end` | The end position of the diagnostics |
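Given those five arguments, a custom reporter is just a function. As an illustration, here is a reporter that collects formatted messages instead of throwing; the formatting and the helper name are our own choices, not a Kataw API:

```javascript
// Collects diagnostics into an array of human-readable strings.
function makeCollectingReporter() {
  const diagnostics = [];
  function report(source, kind, message, start, end) {
    diagnostics.push(`[${kind}] ${message} (${start}-${end})`);
  }
  return { report, diagnostics };
}

const reporter = makeCollectingReporter();
// Would be passed as the 3rd argument, e.g.:
//   parseScript('[x', { next: true }, reporter.report)
reporter.report('Parser', 'error', 'Expected a `]`', 1, 2);
```

Collecting rather than throwing is useful in editors, where you want every diagnostic for a file rather than stopping at the first one.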
ESNext
---
`Stage 3` proposals can be parsed if the `next` option is enabled.
`Stage 1` and `stage 2` proposals are not supported because their spec drafts change all the time.
Types
---
Kataw has its own type system that is an improvement over `Typescript` and `Flow`, and it conforms to the `ECMAScript® 2022 (ECMA-262 12th Edition) language specification`.
Like everything else, it is developed for high performance and consumes less memory.
It allows you to parse syntax like `function x(y: string, z: number): string | number {}` and other similar syntax.
The type system is still `WIP` and will be enabled by default in the `CLI` together with Kataw's own type checker.
You can enable it manually with the `allowTypes` option. It will then parse the types but will not do any type checking.
You can use `kataw.removeKatawTypes` to remove Kataw's types from the CST tree:
```
const source = kataw.parseModule('let: string', { allowTypes: true });
// Remove the types
kataw.removeKatawTypes(source);
```
Comments
---
Leading and trailing comments can be extracted at correct position with `kataw.getLeadingComments` and `kataw.getTrailingComments`.
```
Hello
/* I'm a comment */
there!
```
Getting the trailing comment of `Hello` can be done like this: `kataw.getTrailingComments(5, 24)`. It gets the comments from the end value of `Hello` until the start value of `there!`.
If you want a `1:1` copy of the actual source code, you can do a "*slice*" from the start value of `Hello` to the end value of `there!`.
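Since comment positions are plain character offsets, both operations boil down to `String.prototype.slice` on the original source text. A stand-alone sketch using the example above (the offsets are computed for this exact string and are not Kataw return values):

```javascript
const source = "Hello\n/* I'm a comment */\nthere!";

// The trailing comment of `Hello` lies between the end of `Hello`
// (offset 5) and the start of `there!` (offset 26); trim the newlines.
const trailingComment = source.slice(5, 26).trim();

// A 1:1 copy of the region from the start of `Hello` (offset 0)
// to the end of `there!` is just a slice over the same offsets.
const copy = source.slice(0, source.length);
```

Because slicing never re-tokenizes anything, recovering comments or exact source regions from offsets stays cheap even on large files.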
Transformation
---
`Kataw` can act the same way as `Babel` and be a tool that helps you write code in the latest version of JavaScript. This is done by developing transformers to handle situations where your supported environments don't support certain features natively.
The compiler then transforms those features down to a supported version.
You have to use `kataw.visitEachChild` to traverse the CST tree. `kataw.visitNode` can be used to traverse a single node, and
`kataw.visitNodes` to visit an array of CST nodes. The latter should only be used on *lists*, i.e. CST nodes that are *known*
to contain an array, so there is no need to call, for example, `Array.isArray` to verify that it is an array.
Performance is maintained that way.
All CST nodes are updated automatically if any changes have been detected.
Keywords can also be swapped around, and the same goes for `AssignmentExpression`, `BinaryExpression`, `UnaryExpression` and
`UpdateExpression` operators. For example, `!==` can be changed to `===`.
A `WithStatement` can be transformed into a `WhileStatement` simply by changing the value of the `TokenNode`.
The location of a CST node in the CST tree can also be changed by changing the values of `start` and `end` on the CST node.
Changing the `NodeFlags` allows you to change how the CST node should behave.
All these things give you better control over the transformation of each CST node compared to `Babel` and `Rome`.
Here is an example of a simple transformer that replaces all identifiers with a `NumericLiteral`:
```
export function swapIdentifierWithNumeric(transform) {
  return transformSourceFile;

  function transformSourceFile(root) {
    return kataw.visitEachChild(transform, root, visitor);
  }

  function visitor(node) {
    switch (node.kind) {
      case kataw.NodeKind.Identifier:
        // Replace every identifier with the numeric literal `123`
        return kataw.createNumericLiteral(
          123,
          "123",
          kataw.NodeFlags.ExpressionNode | kataw.NodeFlags.NoChildren,
          /* start */ 1,
          /* end */ 3
        );
      default:
        return kataw.visitEachChild(transform, node, visitor);
    }
  }
}
```
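The recursive shape of `visitEachChild` is easy to sketch without Kataw itself: walk every child, apply the visitor, and rebuild a parent only when one of its children was replaced. The following dependency-free version over plain objects is an illustration of the pattern, not Kataw's implementation (it treats any property holding an object with a numeric `kind` as a child node, and ignores arrays for brevity):

```javascript
// Visits every child node and rebuilds parents whose children changed,
// leaving untouched subtrees referentially identical (cheap for diffing).
function visitEachChild(node, visitor) {
  let changed = false;
  const next = {};
  for (const [key, value] of Object.entries(node)) {
    if (value && typeof value === 'object' && typeof value.kind === 'number') {
      const replaced = visitor(visitEachChild(value, visitor));
      if (replaced !== value) changed = true;
      next[key] = replaced;
    } else {
      next[key] = value;
    }
  }
  return changed ? next : node;
}

// Swap every identifier (kind 1 here) for a numeric literal (kind 2).
const tree = {
  kind: 0,
  left: { kind: 1, text: 'x' },
  right: { kind: 2, value: 7 },
};
const result = visitEachChild(tree, (n) =>
  n.kind === 1 ? { kind: 2, value: 123 } : n
);
```

Returning the original node when nothing changed is the key design choice: unchanged subtrees keep their identity, so later passes can detect modifications with a single reference comparison.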
Printing
---
Kataw is adjustable and offers three different ways to print your source code.
The returned source does not include any extra parentheses or unnecessary code.
The comments are 100% correct and will be printed in the places you expect.
| API | Description |
| --- | --- |
| `print` | Prints *a given* CST tree and lets you adjust the diagnostics and set your own parser options |
| `printModule` | Prints the source in module goal |
| `printScript` | Prints the source in script mode |
Here is an example:
```
// Print
kataw.print(kataw.parseModule('x = y', { next: true }, function(source, kind, msg, line, column) {
throw msg + '(' + line + ', ' + column + ')';
}));
// Print with module goal
kataw.printModule('x = y');
// Print in script mode
kataw.printScript('x = y');
```
###
Ignore comment
Statements, blocks and other code lines can be ignored in Kataw with a `// kataw-ignore` comment.
If set on a `WhileStatement`, it will ignore the entire statement and its `BlockStatement`.
```
// kataw-ignore
while (true) {}
```
You can use `kataw.shouldIgnoreNextNode(node)` to check whether the node should be ignored.
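A check of this kind can be approximated by inspecting the source text immediately before a node's `start` offset. The following stand-alone sketch is illustrative only and is not Kataw's actual logic:

```javascript
// True when the nearest non-blank line before `start` is `// kataw-ignore`.
function shouldIgnoreNextNode(source, start) {
  const lines = source.slice(0, start).split('\n');
  // Walk backwards past blank lines to the closest preceding line.
  for (let i = lines.length - 1; i >= 0; i--) {
    const line = lines[i].trim();
    if (line === '') continue;
    return line === '// kataw-ignore';
  }
  return false;
}

const src = '// kataw-ignore\nwhile (true) {}';
// The `while` statement starts at offset 16 in `src`.
```

A real implementation would consult the attached leading comments rather than re-scanning text, but the offset-based sketch shows why the check is cheap.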
CST parser features
---
* Error recovery by default (*like Acorn loose*), but it reconstructs the CST tree correctly
* Optional error reporting (*requires a callback as the parser's 3rd argument*)
* Dynamic error, hint and warning diagnostics (*depending on the context you are parsing in*)
* Public API methods to extract info from the CST nodes
* 100% correct comment extraction and attachment algorithm
* Can parse types and type annotations (*Kataw has its own type system*)
* Can be used in any editor
* Scalable
* Performance
Current state
---
* The CST parser can be used in production
Roadmap
---
### 📌 v0.1
* [x] Parsing ECMA-262 (aka JavaScript), and a stable CST spec
* [x] Test 262 passes
* [x] Printing API (like prettier API)
* [ ] `// kataw-ignore` (like `// prettier-ignore`)
* [ ] Command line interface (like prettier cli)
* [ ] Documentation & website
### v0.2
* [ ] plugin system, to make it possible to support jsx/ts/flow...
* [ ] jsx plugin
* [ ] ts plugin
### v0.3
* [ ] transformers: like babel
* [ ] minify: like uglify-js
* [ ] linter: like eslint
### v1.0
Future
---
* A "hook system" for adding additional rules for the linter and the grammar checker will be published.
* Hooks to support experimental syntax and ECMA proposals in a sandboxed environment
CONCATENATIVE PROGRAMMING
An Overlooked Paradigm in Functional Programming

Dominikus Herzberg
Department of Software Engineering, Heilbronn University
Max-Planck-Str. 39, 74081 Heilbronn, Germany
<EMAIL>

<NAME>
School of Computing, Engineering & Information Sciences, Northumbria University
Pandon Building, Camden Street, Newcastle Upon Tyne, United Kingdom
<EMAIL>

Keywords:
language-oriented programming, functional programming, concatenative languages

Abstract:
Based on the state of our ongoing research into Language-Driven Software Development (LDSD) and Language-Oriented Programming (LOP) we argue that the yet relatively unknown paradigm of concatenative programming is valuable for fundamental software engineering research and might prove to be a suitable foundation for future programming. To be sound, we formally introduce Concat, our research prototype of a purely functional concatenative language. The simplicity of Concat is contrasted by its expressiveness and a richness of inspiring approaches. Concatenative languages contribute a fresh and different sight on functional programming, which might help tackle challenges in LDSD/LOP from a new viewpoint.
1 INTRODUCTION
One of our main themes of research is LanguageDriven Software Development (LDSD) and Language-Oriented Programming (LOP). It is about how the creation and use of languages might help us in building and engineering complex software systems. As a matter of fact, LDSD/LOP is a growing field of interest as is manifested by the research on Domain Specific Languages (DSLs), Model-Driven Development (MDD), generative software development and software factories, to name just a few areas.
In order to experiment with language layers and domain specific specializations and to test our conceptions and hypotheses, we made use of languages which are regarded as flexible and easily adaptable.
Among these languages were Lisp/Scheme, Prolog and Smalltalk. Their interactive nature and their latebinding features due to dynamic typing turned out to be helpful in the setting of a laboratory situation for language experimentation. Still, some experiments turned out to fail e.g. applying extreme refactoring or attempting to uncover hidden design intentions in code. We felt having something in our way; a problem we could neither clearly pinpoint nor sketch a solution for. There was something wrong with the languages we used.
When we made contact with so-called concatenative programming languages things began to fall into place. As a result, we developed our own concatenative language called Concat. We benefited a lot from using the concatenative paradigm and still do;
our work on Concat is research in progress. Concat is a language that is "as simple as possible, but no simpler" (to paraphrase a quote attributed to <NAME>), but still useful and practical.
Our claim is that concatenative languages are (a) ideally suited for language experimentation and (b) worth applying in software engineering because of their unique features.
The features that characterize and distinguish Concat in particular and concatenative languages in general are:
Concat is a functional language (no explicit state) with static types and type inference. A concatenative language can also be dynamically typed and work without type inference; some variants are also functionally impure.
Concat is a language one can interactively work with on the console; we regard interactivity as essential for an experimental approach to LDSD and LOP
Accepted Paper Submission, 4th International Conference on Software and Data Technologies (ICSOFT),
26-29 July, 2009, Sofia, Bulgaria (http://www.icsoft.org)
Concat is homoiconic, i.e. code can be treated as data and data as code
Concat has a very simple syntax. Programs are created by concatenating words and so-called quotations, and there are just three tokens with special meaning: whitespace, [ and ]
Similarly, Concat has very simple semantics. We distinguish the level of words and quotations from the level of functions processing stacks.
Both levels of Concat maintain a relationship called a homomorphism; that means there is a structure-preserving mapping from the syntactic level of words and quotations to the semantic level of functions and stacks. There are immediate implications that follow from these characteristics: (1) Concat has a sound mathematical foundation, which enables formal treatment of and reasoning over programs. (2) There are no variable bindings in Concat; that means there are no structural ties beyond the homomorphism mentioned. And that has two other important consequences, especially for code engineering: (3) Concat supports macros out of the box without further ado. (4) One can cut out any fragment of code at whitespace. Provided that you leave the code within squared brackets intact, any such fragment still represents a valid program. This is something which is impossible in, say, Java, C#, Lisp or Haskell. Concat enables code reuse and refactoring to an extent unknown in other languages.
We think that Concat offers many interesting properties. We formally define Concat in Sec. 3 after we have briefly touched upon related work in Sec. 2.
We hold the view that concatenative languages deserve much more attention than is currently the case. They are inspiring, usable and practical despite, and because of, their simplicity (see Sec. 4), a position surely debatable. We draw some conclusions in Sec. 5.
2 RELATED WORK

Much of the foundational work on concatenative languages was done by <NAME> in conjunction with the development of the Joy language (http://www.latrobe.edu.au/philosophy/phimvt). Today, several implementations of concatenative languages exist. Cat is a purely functional language that, unlike Joy and like Concat, supports static type checking (http://www.cat-language.com). Factor is a programming language designed for use in practice. It has a concatenative core and supports object-oriented programming (http://factorcode.org).
Concatenative languages are closely related to stack-based languages (see http://concatenative.org). The former are characterized by the homomorphic relationship between words/
quotations and functions, the latter by the use of a stack as the central concept in the execution model.
A language may be both stack-based and concatenative, but this must not necessarily be the case. Forth
(Rather et al., 1996) and PostScript (Adobe Systems Inc., 1999) are popular high-level stack-based languages that are not concatenative. Several assembly and intermediate languages also use a stack-based model of execution.
In a concatenative language, even those words that may intuitively be perceived as data, for example numbers and strings, denote functions. Thus, concatenative languages are not only functional in the sense that functions have no side effects, but also in the sense that everything is a function. This form of purity and the non-existence of variables relates them closely to function-level programming as defined in
(Backus, 1978) and the point-free style of functional programming (Gibbons, 1999).
3 FORMAL FOUNDATIONS

In this section we will define the concatenative language Concat. Due to space limitations we restrict our presentation to a dynamically typed version of Concat. Actually, Concat is statically typed, enabling the programmer to define arbitrary types as encodings.
A specialty of concatenative languages is that there is the level of words and quotations (Sec. 3.1 and 3.2) and the level of functions and stacks (Sec. 3.3 and 3.4). Both levels have their own concepts and their own semantics. However, the levels are constructed in such a way that there is a close relationship between the two (Sec. 3.5).
3.1 Words and Quotations

On the syntactic level, Concat is defined by only a few concepts: vocabularies of words, quotations, stack pools, programs, concatenation and substitution.
Definition 3.1 (Vocabulary of Words) A vocabulary is a set of elements V = {w1 , w2 , . . . }; its elements are called words.
A quotation is recursively defined as:
Definition 3.2 (Quotation) Let [ ] be the empty quotation. Given a vocabulary V of words, a quotation
q using V is a finite sequence of elements written as q = [s1 s2 s3 . . . ] with each element being either a word of V or a quotation using V .
Definition 3.3 (Stack Pool) Given a vocabulary V ,
the stack pool SV is the set of all possible quotations using V .
Definition 3.4 (Program) A program in a concatenative language is an element p of the program space PV, with p ∈ PV ⊆ SV.
Any program is a quotation but not every quotation is a (valid) program.
Definition 3.5 (Concatenation) The binary operation of concatenation ∘ : SV × SV → SV corresponds to list concatenation.
The concatenation of two programs results in a new program. The following properties hold with p, p′, p″ ∈ SV:

    p ∘ [ ] ≡ [ ] ∘ p ≡ p
    (p ∘ p′) ∘ p″ ≡ p ∘ (p′ ∘ p″) ≡ p ∘ p′ ∘ p″

The first property declares the empty quotation as the neutral element of concatenation, the second property is the law of associativity. Programs and their concatenation constitute a monoid.
For the sake of a simpler notation, we replace the concatenation operator ∘ by a whitespace character, and do not use the outer squared brackets for programs.
Definition 3.6 (Substitution Rule) Given a vocabulary V, a substitution rule r is a unique mapping from one program to another program: r : SVⁿ → SVᵐ with m, n ∈ ℕ \ {0}.
Now we have everything together to define a generic substitution system that rewrites concatenations. The execution semantics are fairly simple.
Definition 3.7 (Substitution Evaluation) Given a sequence of substitution rules and a program,
substitution evaluation is defined as follows: walk through the sequence of substitution rules, rewrite the program if there is a match (probing from right to left!) and repeat this process until no more substitution rules apply.
Before we advance to the level of functions, we would like to provide some simple examples of substitution rules. The attentive reader might notice that the rules look very much like operators written in postfix position. This is for a good reason, which will become clear when we talk about the connection to the function level. The rightmost position on the left-hand side of a substitution rule almost always is a word. Substitutions essentially dispatch from right to left.
3.2 Examples of Substitution Rules

Substitution rules have a left-hand side (LHS) and a right-hand side (RHS). Inside substitution rules, capital letters prefixed by a $, # or @ denote variables used for matching words and quotations on the LHS and for value replacement on the RHS. If prefixed by $,
the variable matches a single word only. If prefixed by
#, a single word or quotation is matched. If prefixed by @, any number of words or quotations is matched.
The following rule defines a swap operation. Remember that the concatenation operator is replaced by whitespace for improved readability and that the outer squared brackets are implicit:
#X #Y swap ==> #Y #X On the LHS #X and #Y match the two words or quotations preceding swap in a given concatenation.
On the RHS, the recognized words or quotations are inserted in their corresponding places. Take for example the concatenation 2 [ 3 4 ] swap, which is resolved by applying the above rule to [ 3 4 ] 2.
The following substitution rules might help get an idea how simple but powerful the substitution system is.
[ @REST #TOP ] call ==> @REST #TOP
[ @X ] [ @Y ] append ==> [ @X @Y ]
true [ @TRUE ] [ @FALSE ] if ==> [ @TRUE ] call false [ @TRUE ] [ @FALSE ] if ==> [ @FALSE ] call
#X dup ==> #X #X The first rule, call, calls a quotation by dequoting it i.e. by releasing the content of the quotation. Syntactically, this is achieved by removing the squared brackets. The second rule, append, takes two quotations (including empty quotations) and appends their content in a new quotation. The third and fourth rule define the behavior of if. If there is a true followed by two quotations, the quotation for truth is called; if false matches, the failure quotation is called. Apparently, quotations can be used to defer execution. The last rule simply duplicates a word or quotation.
Due to space limitations we cannot show nor prove that some few substitution rules suffice to have a Turing complete rewriting system. Substitution rules play the role macros have in other languages such as Lisp/Scheme.
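The substitution evaluation of Definition 3.7, together with a few of the rules above, can be sketched as a tiny term-rewriting evaluator. The following Python model is purely illustrative (it is not part of Concat): a program is a list of tokens, quotations are nested lists, and the scan probes from right to left until no rule applies.

```python
def evaluate(program):
    """Rewrite a token list until no substitution rule applies (Definition 3.7).

    Quotations are nested Python lists; only a handful of the rules from
    Sec. 3.2 (swap, dup, call, if) are hard-coded here.
    """
    changed = True
    while changed:
        changed = False
        for i in range(len(program) - 1, -1, -1):  # probe from right to left
            word = program[i]
            if word == "swap" and i >= 2:
                program[i - 2:i + 1] = [program[i - 1], program[i - 2]]
            elif word == "dup" and i >= 1:
                program[i:i + 1] = [program[i - 1]]
            elif word == "call" and i >= 1 and isinstance(program[i - 1], list):
                program[i - 1:i + 1] = program[i - 1]
            elif word == "if" and i >= 3 and program[i - 3] in ("true", "false"):
                branch = program[i - 2] if program[i - 3] == "true" else program[i - 1]
                program[i - 3:i + 1] = [branch, "call"]
            else:
                continue
            changed = True
            break  # restart the scan after each successful substitution
    return program

print(evaluate([2, [3, 4], "swap"]))       # [[3, 4], 2]
print(evaluate(["true", [1], [2], "if"]))  # [1]
```

Restarting the scan after each rewrite corresponds to the "starting over" strategy discussed in Sec. 3.5; continuing to the left would mirror function composition.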
3.3 Functions and Stacks

The concepts on the semantic level of functions and stacks parallel the concepts on the syntactic level. On the one hand there are words, quotations and concatenations; on the other hand there are functions, quotation functions and function compositions. The empty quotation is mapped onto the identity function. We will discuss this structural similarity in Sec. 3.5.
Definition 3.8 (Pool of Stack Functions) Given a stack pool SV, a pool of stack functions F(SV, SV) is the set of all stack functions f : SV → SV.
Definition 3.9 (Quotation Function) Given a quotation q ∈ SV, the corresponding quotation function fq ∈ F(SV, SV) is defined to be fq(s) ≡ s q append for all s ∈ SV. A function that throws its representation as a word onto a stack is called constructor function, or constructor for short.
Definition 3.10 (Function Composition) Given a stack pool SV, the composition of two functions f, g ∈ F(SV, SV) with f : A → B, g : B → C and A, B, C ⊆ SV is defined by the composite function g ∘ f : A → C. The following properties hold with f, g, h ∈ F(SV, SV):

    f ∘ Id ≡ Id ∘ f ≡ f
    (h ∘ g) ∘ f ≡ h ∘ (g ∘ f) ≡ h ∘ g ∘ f

That means that Id is the neutral element of function composition and that function composition is associative. Function composition constitutes a monoid as well.
From the above definition of function composition we can deduce the definition of the identity function:
Definition 3.11 (Identity Function) For any vocabulary V, the identity function Id ∈ F(SV, SV) is defined as Id(s) ≡ s for all s ∈ SV.
We can now compute results with a given composition of stack functions and quotation functions.
Definition 3.12 (Function Evaluation) Given a stack pool SV, the evaluation of a function f ∈ F(SV, SV) with f : A → B and A, B ⊆ SV is defined to be the application of f to some s ∈ A: f(s).
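By way of illustration (again a Python model, not Concat itself), constructor functions and composition can be realized directly with closures; the monoid laws of Definitions 3.10 and 3.11 then hold by construction:

```python
def constructor(word):
    """A constructor function pushes its own word onto the stack (a list)."""
    return lambda stack: stack + [word]

def compose(f, g):
    """Return g . f: apply f first, then g, mirroring left-to-right reading."""
    return lambda stack: g(f(stack))

identity = lambda stack: stack  # Id, the neutral element of composition

# The concatenation "3 0" corresponds to the composite of two constructors.
program = compose(constructor(3), constructor(0))
print(program([]))                     # [3, 0]
print(compose(identity, program)([]))  # [3, 0] -- Id is neutral
```

Evaluating `program` on the empty stack leaves the words 3 and 0 on it, exactly the effect the concatenation `3 0` has on the word/quotation level.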
3.4 Examples of Function Definitions

In the context of functions, we refer to quotations as stacks. A function in Concat expects a stack and returns a stack. It will take some values from the input stack, do some computation and possibly leave some results on the input stack to be returned.
Function definitions look like substitution rules.
The rightmost position on the LHS is a word denoting the name of a function. Everything else that follows to the left are pattern matchers picking up words and quotations from the stack. The position next to the function name stands for the top of the stack, then comes the position underneath etc.
$X $Y + ==> #<(+ $X $Y)>#
In the example, $Y expects a word on top of the stack and picks it up; $X picks up the word underneath. If there are more words or quotations on the stack, they are left untouched. The RHS of a function definition says that the top two values on the input stack are replaced by a single new word or quotation,
which is the result of some computation enclosed in
#< and >#.
Within these delimiters a computation can be specified in any suitable language; we use Scheme in this example. Before the computation is executed, the items picked up by $X and $Y are filled into the template at the corresponding places.
A function definition can also be read as having a so-called stack effect (and so can substitution rules):
The function + takes two words from the stack and pushes a single word onto the stack. Looking at stack effects helps in selecting functions that fit for function composition. A function or composite function cannot consume more items from a stack than there are.
If a suitable language for specifying computations is used, function composition can be directly implemented by combining and rewriting the templates.
That is one reason why we have chosen Scheme as the specification language for functions. To provide an example, take the definition to compute the inverse of a number:
$X inverse ==> #<(/ 1 $X)>#
The concatenation + inverse can on the functional level be automatically derived as a composite function:
$X1 $X2 + inverse ==> #<(/ 1 (+ $X1 $X2))>#
Here, $X1 and $X2 are automatically generated by Concat. If template rewriting is too complicated to achieve in another target language, the behavioral effect of function composition can be simulated by passing a stack step by step from one function to another.
Any of the above substitution rules (Sec. 3.2) can also be defined as function definitions. One example,
although rather trivial, demonstrates this for dup. The two values pushed onto the stack are computed on the function level.
#X dup ==> #< #X ># #< #X >#
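The composite stack effect of `+ inverse` can be imitated as ordinary function composition in Python (illustrative only; the real Concat composition rewrites Scheme templates instead of composing closures):

```python
from fractions import Fraction

def plus(stack):
    """Pop the two topmost words and push their sum."""
    y, x = stack.pop(), stack.pop()
    stack.append(x + y)
    return stack

def inverse(stack):
    """Replace the top of the stack with its reciprocal."""
    stack.append(Fraction(1, stack.pop()))
    return stack

# The concatenation "+ inverse" behaves as the composite inverse(plus(s)),
# leaving 1 / (x1 + x2) on top of the stack.
print(inverse(plus([3, 1])))  # [Fraction(1, 4)]
```

Any items below the two consumed operands are left untouched, matching the stack-effect reading given above.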
3.5 Connecting the Levels

The previous section already indicates that the level of words and quotations (the syntactic level of Concat) and the level of functions and stacks (the semantic
level) are connected. As a matter of fact, we establish a mapping from programs to functions and from concatenation to function composition. Mathematically speaking, this is called a homomorphism.
There is a subtle detail. The homomorphism implies that the search strategy looking for substitution matches must scan a concatenation from right to left.
On each word or quotation we look through the list of substitution rules top down for a match. After a successful substitution has occurred, the search might continue to the left or start over again at the right. The first way (continuing) has the same effect, function composition has. The second way (starting over) is equivalent to passing a stack from function to function i.e. without really making use of function composition. Either way, the lookup for matching substitutions has to restart top down again.
When writing programs in Concat, we have the choice to either define substitutions that work on a purely syntactical level by rewriting concatenations of words and quotations. Or we define functions whose operational behavior is outside the reach of Concat
it is done in another computational world Concat has only an interface with but no more. For Concat,
functions and composite functions are black boxes.
Interestingly, we can seamlessly combine substitution and function evaluation.
The two approaches to interpret a given concatenation lead to two different readings. Take the following example, a simple addition:
3 0 +
Notationally, all there is are words and quotations. Assuming that there is the substitution rule
$X 0 + ==> $X, the result on the word/quotation level is mechanically retrieved as 3. On the level of functions, all there is are functions and stacks. So 3 is a constructor function that takes a stack, pushes a unique representation of itself, the word(!) 3, onto the stack, and returns the changed stack. So does 0.
Taken together with the function +, function composition results in a function accepting some stack and leaving 3 on top. On this level, we experience a stack being passed from function to function. This is also called trace mode.
A slightly more complicated example is the following concatenation:
6 5 dup [ 3 > ] call [ + ] [ * ] if The word dup duplicates 5 and call unquotes
[ 3 > ], leading to 6 5 5 3 > [ + ] [ * ] if.
Assumed that a function definition for > (greater than)
is given, 5 3 > results in true. Now if rewrites the concatenation to 6 5 [ + ] call. The result of 6 5 + is 11.
4 THINKING CONCATENATIVE

The following subsections aim to give the reader a sense of the richness that lurks behind the concatenative paradigm. We barely scratch the surface of a subject worth further investigation.
4.1 Pattern Recognition Agents

The input to Concat can be viewed as a static but possibly very long sequence of words and quotations.
Substitution rules and function definitions could be viewed as agents working on the input. Each agent has some sensors that allow the agent to recognize a set of specific subsequences of words/quotations somewhere in a program. If a pattern is recognized,
the agent takes the input sensed, transforms it into a new sequence of words and quotations and replaces the input by the transformation.
Essentially, there is a pool of agents ready to process any subsequence they find a match for. This model has some similarities with biochemical processes. Let us take protein biosynthesis in a cell of a living organism as an example. After a copy of the DNA has been created (transcription), complex organic molecules called ribosomes scan the code of the DNA copy in a way comparable to pattern matching.
The ribosomes read a series of codons as an instruction for how to make a protein out of a sequence of amino acids (translation). These processes could be understood as numerous computing agents working together.
It is an interesting observation that Concat can be used in a way that is close to how nature works in creating complex living systems. This might inspire a lot of interesting and interdisciplinary research questions. Also cognitive processes rely very much on pattern recognition.
4.2 Stream Processing

Another, dynamic view is to regard the input to Concat as being continuously filled with new words and quotations at the outmost right and added to the top of the stack, respectively. A continuous stream of words and quotations flows in. With a certain lookahead Concat applies substitution rules and function definitions as usual. Any context information needed for processing the stream must either be left on top of the stack, or we introduce a meta-stack, with the stream-stack on top, so that context information can be left somewhere else on the meta-stack.
We have built a system called channel/filter/rule
(CFR) for advanced protocol analysis in computer
networks (Reichert et al., 2008). The incoming stream of data stems from a recording or a live trace of a monitoring device intercepting the communication of two or more interacting parties. Stateless selection (filters) and stateful processing (rules) help in abstracting and extracting information that represent the information flow on the next protocol layer (channel). We suspect that such a stream processing system can be easily realized with Concat. We have not done a prototype, yet. This is research in progress.
4.3 Refinement & Process Descriptions

Refinement is a very important notion in computer science, especially in the formal and theoretical branch. As a matter of fact, refinement is a well-understood concept. The formalization of refinement dates back to the 1970s. Refinement is a means to reduce underspecification. A specification S2 is said to refine the behavior of a specification S1 if for each input the output of S2 is also an output of S1. Semantically, this notion is captured by logical implication: the denotation [[S2]] implies [[S1]].
Programming languages typically do not support refinement. However, it is trivial to provide refinement in Concat, because it is built in: the notion of refinement is captured by unidirectional substitution.
Some researchers bring the notion of refinement and software development processes explicitly together; one prominent example is FOCUS (Broy and Stølen, 2001). In a yet unpublished paper we show that it is straightforward to formally describe development processes in Concat using refinement. We believe Concat to be well-suited for process modeling.
4.4 Flexibility & Expressiveness

Another area, for which we cannot take credit, is a demonstration of the extreme flexibility of concatenative languages; this is best shown by pointing to Factor, a modern concatenative language implementation. Factor is dynamically typed and functionally impure (for practical reasons), though functional programming is a natural style in Factor. Its dominating programming model is a stack being passed from one function to the next.
Factor is powered by a kernel written in C++ of about 13,000 lines of code. Everything else is written in Factor itself. Factor's syntax is extensible; it has macros, continuations and a powerful collection library.
Factor comes with an object system with inheritance, generic functions, predicate dispatch and mixins; all this is implemented in Factor. Lexical variables and closures are implemented as a loadable library in Factor. An optimizing compiler outputs efficient machine code; the compiler is written in Factor. Bootstrapping the system helps all libraries in Factor benefit from such optimizations. Right now,
Factor supports a number of OS/CPU combinations among which are Windows, MacOS and Linux.
Programs in Factor are extremely short and compact. Refactoring programs in Factor is easy, as in any concatenative language: any fragment of words can be factored out, hence the name Factor. These features have helped the developers continuously improve the code base and its libraries. Factor outperforms other scripting languages like Ruby, Groovy or Python not only in runtime but also in the number of features supported by the language.
5 CONCLUSIONS
Remarkably, the definition of Concat fits on a single page of paper (Sec. 3.1/3.3). Yet, the concatenative paradigm shows a lot of interesting features and inspiring approaches. We barely scratched the surface of a subject worth further investigation and research.
We think that concatenative programming is a much overlooked paradigm that deserves wider recognition.
ACKNOWLEDGEMENTS

Part of this work was supported by the <NAME>.
REFERENCES

Adobe Systems Inc. (1999). PostScript language reference
(3rd ed.). Addison-Wesley.
<NAME>. (1978). Can programming be liberated from the von Neumann style?: A functional style and its algebra of programs. Commun. ACM, 21(8):613–641.
<NAME>. and <NAME>. (2001). Specification and Development of Interactive Systems: F OCUS on Streams,
Interfaces, and Refinement. Springer.
Gibbons, J. (1999). A pointless derivation of radix sort. J.
Funct. Program., 9(3):339–346.
<NAME>., <NAME>., and <NAME>. (1996).
The evolution of Forth. History of programming languages, II:625–670.
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2008). A language for advanced protocol analysis in automotive networks. In Proceedings of the 30th International Conference on Software Engineering (ICSE), pages 593–602. ACM.
Ack With Ease
===
What?
---
[`Ack`](Ack.html) is designed to be a lightweight easy-to-use implementation of *interprocess acknowledgement*, whatever it means. According to [*Wikipedia*](https://en.wikipedia.org/wiki/Acknowledgement_(data_networks)):
> In data networking, telecommunications, and computer buses, an acknowledgement (*ACK*) is a signal passed between communicating processes, computers, or devices to signify acknowledgement, or receipt of message, as part of a communications protocol. The negative-acknowledgement (*NAK* or *NACK*) signal is sent to reject a previously received message, or to indicate some kind of error. Acknowledgements and negative acknowledgements inform a sender of the receiver's state so that it can adjust its own state accordingly.
That said, when an independently running process has to pass a message to another process via some complicated pipe, it can be hard to ensure that all parts of this pipe delivered the message successfully. Instead of propagating the *success*/*failure* back through each link in the chain, the sender might simply require an *ACK* back once the message has been successfully delivered to the recipient.
This could involve an additional level of complexity when we are talking about tiny microservices. Not every microservice needs a bucket of intercommunication clients included; lightweight ones might have none at all. This package is a drop-in solution for such cases.
*It probably does not apply when all involved processes come with messaging queue and fully-functional web server on board.*
Why?
---
This package comes with all the boilerplate included, including lightweight web-server based on [`Camarero`](https://hexdocs.pm/camarero/camarero.html) and process message broadcasting with [`Envío`](https://hexdocs.pm/envio). It drastically simplifies adding *ACK* support to lightweight microservices. Indeed, all one needs would be to:
* add [`Ack`](Ack.html) to the list of applications started with `App1`
* implement [`Envio.Subscriber`](https://hexdocs.pm/envio/Envio.Subscriber.html) in `App1`, listening to any of three following channels:
+ `{Ack.Horn, :ack}`
+ `{Ack.Horn, :nack}`
+ `{Ack.Horn, :error}`
* implement `App2` to `HTTP POST` to `App1.domain:30009` one of two requests (assuming `key` has been negotiated upfront and is known to `App1`):
+ `%{"key" => key, "value" => "ack"}` to `ack`, or
+ `%{"key" => key, "value" => "nack"}` to `nack`
The control flow of each *ACK* would be as shown below.

The *whole* implementation required from `App1` to start supporting *ACKs* would look like (besides the business logic handling *ACKs* and *NACKs*):
```
defmodule MyApp.AckHandler do
  use Envio.Subscriber,
    channels: [{Ack.Horn, :ack}, {Ack.Horn, :nack}, {Ack.Horn, :error}]

  def handle_envio(%{key: key, status: status} = _message, state) do
    state = BusinessLogic.on_response(status, for: key)
    {:noreply, state}
  end
end
```
To start expecting an *ACK*, one simply calls [`Ack.listen/1`](Ack.html#listen/1), passing a map like the following as a parameter of type `Ack.t`:
```
Ack.listen(%{key: "user_123", timeout: 5_000})
```
When a callback with a matching payload is received, the message will be broadcast to all the pub-sub listeners of one of the aforementioned [`Ack.Horn`](Ack.Horn.html) channels, depending on the state.
---
The *whole* implementation required from `App2` to start supporting *ACKs* is to *HTTP POST* to the predefined endpoint a message of the shape `%{key: entity_id, value: :ack or :nack}`.
That’s it.
Why not?
---
In many [over]complicated systems, robust queue-based ack-back pipelines are already present. If the whole system already has an intercommunication pipeline providing back-and-forth validation and monitoring of everything, [`Ack`](Ack.html) will most likely not be a good fit. Still, it’s robust and comprehensive.
Is it of any good?
---
Sure, it is.
Ack
===
[`Ack`](#content) is a tiny application that might be used as drop-in to handle
acknowledgments between processes.
When `App1` sends *something* to `App2` it might require the acknowledgement
back confirming the successful processing of this *something*. For instance,
`App1` might call an API endpoint of `App2`, which triggers a long process,
or it might place a message into RabbitMQ and expect to receive an `ack` to
perform some cleanup, or whatever.
In this scenario, `App1` might instruct [`Ack`](#content) to listen for the `ack` from
`App2` for a message
* `%{key: key, timeout: msecs}`
where `key` is the unique identifier of the message to be acknowledged.
Upon receipt, if the `key` is known to the system, [`Ack`](#content) will broadcast a message of the following shape
* `%{status: :ack, key: key}`
to `:ack` channel. `App1` should be subscribed to `{Ack.Horn, :ack}` channel in order to receive this message.
If the `key` is unknown to the system, or was not acked, one of the following messages is broadcast instead:
* `%{status: :nack, key: key}`, when the `key` was explicitly not acked
* `%{status: :invalid, key: key}`, when the `key` is not known to the system
* `%{status: :unknown, key: key, value: value}`, when something unexpected happened
The first one is routed to the `{Ack.Horn, :nack}` channel; the last two are broadcast to the `{Ack.Horn, :error}` channel.
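The channel dispatch described above amounts to a small lookup table; as an illustrative sketch (Python, not the library's Elixir code):

```python
# Status -> Envio channel, mirroring the routing rules above:
# :ack -> :ack, :nack -> :nack, :invalid / :unknown -> :error.
CHANNEL_FOR_STATUS = {
    "ack": "ack",
    "nack": "nack",
    "invalid": "error",
    "unknown": "error",
}

def channel_for(message: dict) -> str:
    """Return the channel a broadcast of `message` would use."""
    return CHANNEL_FOR_STATUS[message["status"]]

print(channel_for({"status": "unknown", "key": "user_123", "value": "hi"}))  # error
```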
The broadcast is done with [`Envío`](https://hexdocs.pm/envio/envio.html)
package, consider reading the documentation there to get more details about
subscriptions.
Currently, only an HTTP endpoint is supported for ack callbacks.
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[listen(params)](#listen/1)
Adds a listener for the key with a timeout specified (defaults to `5` sec.)
Types
===
Functions
===
Ack.Horn
===
The publisher, sending broadcasts when `ACK` is received. It serves three
different channels:
* `{Ack.Horn, :ack}` to send broadcast when the `ACK` comes
* `{Ack.Horn, :nack}` to send broadcast when the `NACK` comes
* `{Ack.Horn, :error}` to send broadcast when the client:
+ called back with the wrong key (`status: :invalid`)
+ called back with the unknown `ACK` value (`status: :unknown`)
+ did not call back and the timeout has expired.
Host and client might agree on using more verbs besides `ACK` and `NACK`.
To handle this, the host should implement something like the following clause:
```
def handle_envio(%{status: :unknown, key: key, value: verb} = message, state) do
state = BusinessLogic.on_other_verb(key)
{:noreply, state}
end
```
Summary
===
[Functions](#functions)
---
[broadcast(message)](#broadcast/1)
Functions
===
dda.pallet.dda-user-crate.app
===
---
#### app-configuration
```
(app-configuration domain-config & options)
```
```
Inputs: [domain-config :- UserDomainConfig & options]
Returns: DdaUserAppConfig
```
---
#### app-configuration-resolved
```
(app-configuration-resolved config & options)
```
```
Inputs: [config :- UserDomainConfigResolved & options]
Returns: DdaUserAppConfig
```
---
#### crate-app
---
#### DdaUserAppConfig
---
#### InfraResult
---
#### UserDomainConfig
---
#### UserDomainConfigResolved
---
#### with-user
dda.pallet.dda-user-crate.domain
===
---
#### Gpg
---
#### GpgKey
---
#### infra-configuration
```
(infra-configuration domain-config)
```
```
Inputs: [domain-config :- UserDomainConfigResolved]
Returns: InfraResult
```
---
#### InfraResult
---
#### Settings
---
#### User
---
#### UserDomainConfig
---
#### UserDomainConfigResolved
dda.pallet.dda-user-crate.domain.ssh
===
---
#### authorized-keys-infra-configuration
```
(authorized-keys-infra-configuration authorized-keys-domain-config)
```
```
Inputs: [authorized-keys-domain-config :- SshAuthorizedKeysResolved]
Returns: [ssh-commons/PublicSshKey]
```
---
#### key-pair-infra-configuration
```
(key-pair-infra-configuration key-pair-domain-config)
```
```
Inputs: [key-pair-domain-config :- SshSimpleKeyPairResolved]
Returns: ssh-commons/SshKeyPair
```
---
#### Ssh
---
#### SshAuthorizedKeys
---
#### SshAuthorizedKeysResolved
---
#### SshSimpleKeyPair
---
#### SshSimpleKeyPairResolved
dda.pallet.dda-user-crate.infra
===
---
#### configure-user
```
(configure-user config)
```
---
#### facility
---
#### filter-ssh
```
(filter-ssh config)
```
```
Inputs: [config :- User]
Returns: Ssh
```
---
#### GpgKey
---
#### install-user
```
(install-user config)
```
---
#### Ssh
---
#### User
---
#### user-crate
---
#### UserCrateConfig
---
#### with-user
dda.pallet.dda-user-crate.infra.bash
===
---
#### install-bashrc-d
```
(install-bashrc-d user-name)
```
```
Inputs: [user-name :- s/Str]
```
dda.pallet.dda-user-crate.infra.gpg
===
---
#### configure
```
(configure user-name config)
```
```
Inputs: [user-name :- s/Str config :- GpgConfig]
```
---
#### Gpg
---
#### GpgConfig
---
#### GpgKey
---
#### install
```
(install user-name config)
```
```
Inputs: [user-name :- s/Str config :- GpgConfig]
```
dda.pallet.dda-user-crate.infra.schema
===
---
#### Gpg
---
#### GpgKey
---
#### Settings
---
#### Ssh
---
#### User
---
#### UserCrateConfig
dda.pallet.dda-user-crate.infra.ssh
===
---
#### configure-authorized-keys
```
(configure-authorized-keys user-name ssh-config)
```
```
Inputs: [user-name :- s/Str ssh-config :- Ssh]
```
Configures the authorized_keys for a given user; all existing authorized_keys will be overwritten.
---
#### configure-ssh-key
```
(configure-ssh-key user-name ssh-config)
```
```
Inputs: [user-name :- s/Str ssh-config :- Ssh]
```
Configures the user's ssh_key.
---
#### Ssh
dda.pallet.dda-user-crate.infra.user
===
---
#### configure-user-sudo
```
(configure-user-sudo user-name)
```
Add user to sudoers without password.
---
#### create-sudo-group
```
(create-sudo-group)
```
---
#### create-user
```
(create-user user-name user-config)
```
Creates a sudo user; the password is handed over encrypted.
Passwords can be generated e.g. by `mkpasswd test1234`,
so password test1234 is represented by 3hLlUVSs1Aa1c.
---
#### hashed-password
```
(hashed-password user-config)
``` |
Package ‘algo’
October 12, 2022
Title Implement an Address Search Auto Completion Menu on 'Shiny' Text
Inputs Using the 'Algolia Places' 'Javascript' Library
Version 0.1.0
Description
Allows the user to implement an address search auto completion menu on 'shiny' text inputs.
This is done using the 'Algolia Places' 'JavaScript' library. See
<https://community.algolia.com/places/>.
License MIT + file LICENSE
Encoding UTF-8
LazyData true
RoxygenNote 7.1.0
Imports glue, htmltools, jsonlite
URL https://github.com/feddelegrand7/algo
BugReports https://github.com/feddelegrand7/algo/issues
Suggests knitr, rmarkdown
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-06-24 13:00:10 UTC
R topics documented:
alg... 2
use_algoli... 2
algo Implement the Algolia Places address search-autocompletion menu on
shiny text inputs
Description
In order to use this function, the user must get an application ID and an API key from the Algolia
website and store them within their environment (please refer to the package’s vignette). They must
also put the use_algolia() function at the beginning of their shiny ui.
Usage
algo(element, type = "address", language = "en_US", countries = NULL)
Arguments
element the shiny text element that will be used for the Algolia Places autocompletion menu
type Restrict the search results to a specific type. The user can choose from "city",
"country", "address", "busStop", "trainStation", "townhall" and "airport". Defaults
to "address".
language Get the results in a specific language. The user can pass two-letter language
codes (ISO 639-1). Defaults to "en_US".
countries Change the countries to search in. The user can pass a vector of two-letter
country codes (ISO 3166-1). Defaults to the whole world.
Value
An address search-autocompletion menu on shiny text inputs
use_algolia Enable the Algolia Places JavaScript library
Description
The function activates the capabilities of the Algolia Places JavaScript library. The user must put it
at the beginning of their shiny ui. This function works only when the user has set an application ID
and an API key, collected from the Algolia website, within their R environment (please refer to the
vignette).
Usage
use_algolia()
Value
called for the side effect of activating the Algolia Places JavaScript library
Crate should_color
===
Determine whether output should use colors or not.
The resulting color choice is determined by taking into account,
in order of priority from higher to lower, the following settings:
* `CLICOLOR_FORCE` environment variable (requires the `clicolor_force` feature),
* explicit user preference (for instance command line arguments),
* `CLICOLOR` environment variable (requires the `clicolor` feature),
* `NO_COLOR` environment variable (requires the `no_color` feature),
* application default choice.
If the final choice is `ColorChoice::Auto` and the feature `stream` is enabled,
the choice can be refined using `ColorChoice::for_stream` which takes into account the output stream.
The specification of `CLICOLOR`, `CLICOLOR_FORCE`, and `NO_COLOR` is inspired by:
* https://bixense.com/clicolors/,
* https://no-color.org,
with the exception that variables which are set to the empty string `""`
are treated as if they were unset.
The reason is that it is common to override environment variables by executing programs as
`VAR= cmd args...` and expect that `VAR` is unset.
`CLICOLOR_FORCE`
---
Requires the `clicolor_force` feature.
The meaning of the environment variable is the following:
* if not set or `CLICOLOR_FORCE == ""` or `CLICOLOR_FORCE == "0"`: ignore;
* if set and `CLICOLOR_FORCE != ""` and `CLICOLOR_FORCE != "0"`: `ColorChoice::Always`.
`CLICOLOR`
---
Requires the `clicolor` feature.
The meaning of the environment variable is the following:
* if not set or `CLICOLOR == ""`: ignore;
* if set and `CLICOLOR == "0"`: `ColorChoice::Never`;
* if set and `CLICOLOR != ""` and `CLICOLOR != "0"`: `ColorChoice::Auto`.
`NO_COLOR`
---
Requires the `no_color` feature.
The meaning of the environment variable is the following:
* if not set or `NO_COLOR == ""`: ignore;
* if set and `NO_COLOR != ""`: `ColorChoice::Never`.
Compatibility
---
The goal of this crate is to merge and specify the standards proposed in https://no-color.org and https://bixense.com/clicolors/.
Please note that the proposals in the latter are slightly ambiguous and undesirable
(see this issue),
hence they are merely taken as an inspiration and not followed too strictly.
Relevant quote from https://no-color.org:
> Command-line software which adds ANSI color to its output by default should
> check for a `NO_COLOR` environment variable that, when present and not an
> empty string (regardless of its value), prevents the addition of ANSI color.
Relevant quote from https://bixense.com/clicolors/:
> The idea is to have the environment variables `CLICOLOR` and `CLICOLOR_FORCE`
> (which are currently already used for this exact reason on some UNIX systems).
> When set, the following rules should apply:
> * `CLICOLOR != 0`: ANSI colors are supported and should be used when the program isn’t piped,
> * `CLICOLOR == 0`: don’t output ANSI color escape codes,
> * `CLICOLOR_FORCE != 0`: ANSI colors should be enabled no matter what.
Crate features
---
* `clicolor` *(enabled by default)* — Enables the detection of `CLICOLOR` via `clicolor`.
* `clicolor_force` *(enabled by default)* — Enables the detection of `CLICOLOR_FORCE` via `clicolor_force`.
* `no_color` *(enabled by default)* — Enables the detection of `NO_COLOR` via `no_color`.
* `stream` *(enabled by default)* — Adds `ColorChoice::for_stream`.
* `clap` — Adds `clap_color` and conversion of `ColorChoice` to and from `clap::ColorChoice`.
Enums
---
* `ColorChoice` — Possible color choices for the output.

Constants
---
* `CLICOLOR` *(feature `clicolor`)* — Name of the `CLICOLOR` environment variable.
* `CLICOLOR_FORCE` *(feature `clicolor_force`)* — Name of the `CLICOLOR_FORCE` environment variable.
* `NO_COLOR` *(feature `no_color`)* — Name of the `NO_COLOR` environment variable.

Functions
---
* `clap_color` *(feature `clap`)* — Compute a `clap::ColorChoice` suitable for the `clap::App::color` setting.
* `clicolor` *(feature `clicolor`)* — Get the setting of the `CLICOLOR` environment variable.
* `clicolor_force` *(feature `clicolor_force`)* — Get the setting of the `CLICOLOR_FORCE` environment variable.
* `no_color` *(feature `no_color`)* — Get the setting of the `NO_COLOR` environment variable.
* `resolve` — Resolve the output color choice from the environment variables and an explicit CLI preference.
Enum should_color::ColorChoice
===
```
pub enum ColorChoice {
Never,
Auto,
Always,
}
```
Possible color choices for the output.
Clap interoperability
---
If the `clap` feature is enabled then
`ColorChoice` can be converted to and from `clap::ColorChoice`.
Moreover it implements `clap::ValueEnum`, hence can be used as
```
#[derive(clap::Parser)]
struct Cli {
/// Coloring of the output
#[clap(long, value_name = "WHEN", arg_enum, global = true)]
color: Option<should_color::ColorChoice>,
// Other arguments...
}
```
Variants
---
### `Never`
The output will not be colorized.
### `Auto`
The output will be colorized if the output device is a tty,
i.e. when the output goes directly to a text screen or terminal emulator window.
### `Always`
The output will be colorized.
Implementations
---
### impl ColorChoice
#### pub fn for_stream(&self, stream: Stream) -> bool
Available on **crate feature `stream`** only.
Determine the color setting for a specific stream.
If the choice is `ColorChoice::Never` or `ColorChoice::Always`,
the result will be `false` and `true` respectively.
If the choice is `ColorChoice::Auto`, then the answer depends on whether the `stream` is a TTY or not.
See the examples `colored.rs` and `termcolor.rs` for a demonstration of how to use this method.
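In other words, only `Auto` ever consults the stream. A hedged Python analogue of that decision (with `isatty` standing in for the crate's tty detection; illustrative only):

```python
import sys
from enum import Enum

class ColorChoice(Enum):
    NEVER = "never"
    AUTO = "auto"
    ALWAYS = "always"

def for_stream(choice: ColorChoice, stream) -> bool:
    # Never/Always are unconditional; Auto defers to the stream's tty status.
    if choice is ColorChoice.NEVER:
        return False
    if choice is ColorChoice.ALWAYS:
        return True
    return stream.isatty()

print(for_stream(ColorChoice.AUTO, sys.stdout))  # True in a terminal, False when piped
```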
Trait Implementations
---
### impl Clone for ColorChoice
#### fn clone(&self) -> ColorChoice
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ColorChoice
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl From<ColorChoice> for clap::ColorChoice
Available on **crate feature `clap`** only.
#### fn from(color_choice: ColorChoice) -> clap::ColorChoice
Converts to this type from the input type.
### impl From<clap::ColorChoice> for ColorChoice
Available on **crate feature `clap`** only.
#### fn from(color_choice: clap::ColorChoice) -> ColorChoice
Converts to this type from the input type.
### impl Ord for ColorChoice
#### fn cmp(&self, other: &ColorChoice) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where Self: PartialOrd<Self>
Restrict a value to a certain interval.
### impl PartialEq<ColorChoice> for ColorChoice
#### fn eq(&self, other: &ColorChoice) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<ColorChoice> for ColorChoice
#### fn partial_cmp(&self, other: &ColorChoice) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl ValueEnum for ColorChoice
Available on **crate feature `clap`** only.
#### fn value_variants<'a>() -> &'a [Self]
All possible argument values, in display order.
#### fn to_possible_value<'a>(&self) -> Option<PossibleValue<'a>>
The canonical argument value.
### impl Eq for ColorChoice
### impl StructuralEq for ColorChoice
### impl StructuralPartialEq for ColorChoice
Auto Trait Implementations
---
### impl RefUnwindSafe for ColorChoice
### impl Send for ColorChoice
### impl Sync for ColorChoice
### impl Unpin for ColorChoice
### impl UnwindSafe for ColorChoice
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Function should_color::clicolor
===
```
pub fn clicolor() -> Option<ColorChoice>
```
Available on **crate feature `clicolor`** only.
Get the setting of the `CLICOLOR` environment variable.
The environment variable is treated as follows:
* if not set or `CLICOLOR == ""`: return `None`;
* if set and `CLICOLOR == "0"`: return `Some(``ColorChoice::Never``)`;
* if set and `CLICOLOR != ""` and `CLICOLOR != "0"`: return `Some(``ColorChoice::Auto``)`.
Function should_color::clicolor_force
===
```
pub fn clicolor_force() -> Option<ColorChoice>
```
Available on **crate feature `clicolor_force`** only.
Get the setting of the `CLICOLOR_FORCE` environment variable.
The environment variable is treated as follows:
* if not set or `CLICOLOR_FORCE == ""` or `CLICOLOR_FORCE == "0"`: return `None`;
* if set and `CLICOLOR_FORCE != ""` and `CLICOLOR_FORCE != "0"`: return `Some(``ColorChoice::Always``)`.
Function should_color::no_color
===
```
pub fn no_color() -> Option<ColorChoice>
```
Available on **crate feature `no_color`** only.
Get the setting of the `NO_COLOR` environment variable.
The environment variable is treated as follows:
* if not set or `NO_COLOR == ""`: return `None`;
* if set and `NO_COLOR != ""`: return `Some(``ColorChoice::Never``)`.
Function should_color::clap_color
===
```
pub fn clap_color() -> ColorChoice
```
Available on **crate feature `clap`** only.
Compute a `clap::ColorChoice` suitable for the `clap::App::color` setting.
This is a convenience function equivalent to `resolve` without an explicit CLI preference and a default value of `clap::ColorChoice::Auto`.
```
#[derive(clap::Parser)]
#[clap(color = should_color::clap_color())]
struct Cli {
// Arguments...
}
```
Constant should_color::CLICOLOR
===
```
pub const CLICOLOR: &'static str = "CLICOLOR";
```
Available on **crate feature `clicolor`** only.
Name of the `CLICOLOR` environment variable.
Constant should_color::CLICOLOR_FORCE
===
```
pub const CLICOLOR_FORCE: &'static str = "CLICOLOR_FORCE";
```
Available on **crate feature `clicolor_force`** only.
Name of the `CLICOLOR_FORCE` environment variable.
Constant should_color::NO_COLOR
===
```
pub const NO_COLOR: &'static str = "NO_COLOR";
```
Available on **crate feature `no_color`** only.
Name of the `NO_COLOR` environment variable.
Function should_color::resolve
===
```
pub fn resolve(cli: Option<ColorChoice>) -> Option<ColorChoice>
```
Resolve the output color choice from the environment variables and an explicit CLI preference.
Notice that the resolution depends on the activation of the features
`clicolor_force`,
`clicolor`, and
`no_color`.
Please refer to the crate level documentation for a detailed description of the resolution process.
Commonly this function will be called as `resolve(cli).unwrap_or(default)` to take into account a preference expressed through the CLI arguments and the default behavior of the application.
See the examples `colored.rs` and `termcolor.rs` for a demonstration of how to use this function.
Examples
---
The following examples assume that all the features
`clicolor`,
`clicolor_force`, and
`no_color`
are enabled.
* ```
std::env::set_var("CLICOLOR_FORCE", "false"); // this wins
assert_eq!(resolve(Some(ColorChoice::Never)), Some(ColorChoice::Always));
```
* ```
std::env::remove_var("CLICOLOR_FORCE");
std::env::set_var("CLICOLOR", "1"); // this wins
assert_eq!(resolve(None), Some(ColorChoice::Auto));
```
* ```
std::env::remove_var("CLICOLOR_FORCE");
std::env::set_var("CLICOLOR", "0"); // this wins
assert_eq!(resolve(None), Some(ColorChoice::Never));
```
* ```
std::env::remove_var("CLICOLOR_FORCE");
std::env::remove_var("CLICOLOR");
std::env::set_var("NO_COLOR", "1"); // this wins
assert_eq!(resolve(None), Some(ColorChoice::Never));
```
* ```
std::env::remove_var("CLICOLOR_FORCE");
std::env::remove_var("CLICOLOR");
std::env::remove_var("NO_COLOR");
assert_eq!(resolve(None), None);
```
Package ‘FeatureTerminatoR’
October 12, 2022
Type Package
Title Feature Selection Engine to Remove Features with Minimal
Predictive Power
Version 1.0.0
Maintainer <NAME> <<EMAIL>>
Description The aim is to take in data.frame inputs and utilise methods, such as recursive feature
engineering, to enable the features to be removed.
What this does differently from the other packages is that it gives you the choice to remove
the variables manually, or it automates this process.
Feature selection is a concept in machine learning, and statistical pipelines, whereby
unimportant, or less predictive, variables are eliminated from the analysis,
see Boughaci (2018) <doi:10.1007/s40595-018-0107-y>.
License GPL-3
Encoding UTF-8
Imports ggplot2, caret, tibble, dplyr, stats, utils, lattice, e1071,
randomForest
Suggests knitr, markdown, rmarkdown
LazyData FALSE
VignetteBuilder knitr
RoxygenNote 7.1.1
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-3534-6143>)
Repository CRAN
Date/Publication 2021-07-14 09:00:04 UTC
R topics documented:
mutlicol_terminato... 2
rfeTerminato... 3
mutlicol_terminator Multicollinearity TerminatoR - Feature Selection to remove highly correlated values
Description
This function looks at highly correlated features and allows for a correlation cutoff to be set. Outputs
from this function allow for correlations and covariance matrices to be created, alongside visuals
and the ability to remove highly correlated features from your statistical pipeline.
Usage
mutlicol_terminator(df, x_cols, y_cols, alter_df = TRUE, cor_sig = 0.9)
Arguments
df The data frame to pass with the x and y variables
x_cols The independent variables we want to analyse for multicollinearity
y_cols The dependent variable(s) in your predictive model
alter_df Default=TRUE - Determines whether the underlying features are removed from
the data frame, with TRUE being the default.
cor_sig Default=0.9 - A correlation significance for the cut-off in inter-feature correlation
Value
A list containing the outputs highlighted hereunder:
• "rfe_model_fit_results" a list of the model fit results, including the optimal features
• "rfe_reduced_features" a data.frame object with the reduced variables and data
• "rfe_original_data" a data.frame object with the original data passed for manual exclusion based on fit outputs
• "rfe_reduced_data" output of setting the alter_df=TRUE will remove the features / IVs from the data.frame
Examples
library(caret)
library(FeatureTerminatoR)
library(tibble)
library(dplyr)
df <- iris
mc_fit <- mutlicol_terminator(df, 1:4,5, cor_sig = 0.90, alter_df = TRUE)
#View the correlation matrix
mc_fit$corr_matrix
#View the reduced data
head(mc_fit$feature_removed_df,10)
rfeTerminator Recursive Feature Engineering SelectoR
Description
This function removes the redundant features in a model and automatically selects the best combination
of features to remove. This utilises, by default, the random forest mean decrease in accuracy
method from the caret package, reference Kuhn (2021). This function is a wrapper for the rfe()
function.
Usage
rfeTerminator(
df,
x_cols,
y_cols,
method = "cv",
kfolds = 10,
sizes = c(1:100),
alter_df = TRUE,
eval_funcs = rfFuncs,
...
)
Arguments
df data frame to fit the recursive feature engineering algorithm to
x_cols the independent variables to be used for the recursive feature engineering algorithm
y_cols the dependent variables to be used in the prediction
method Default = "cv" - cross validation method for resampling, other option "repeatedcv"
kfolds Default = 10 - the number of k folds - train / test splits to compute when resampling
sizes the sizes of the search boundary for the search
alter_df Default = TRUE - will remove the redundant features, due to having a lesser
effect on the mean decrease in accuracy, or other measures.
eval_funcs Default = rfFuncs (Random Forest Mean Decrease Accuracy method). Other
options: rfe, lmFuncs, rfFuncs, treebagFuncs, nbFuncs, pickSizeBest, pickSizeTolerance.
... Function forwarding to the main ‘caret::rfe() function‘ to pass in additional parameters
native to caret
Details
With alter_df set to TRUE, the chosen recursive feature algorithm will automatically remove the
features from the returned tibble embedded in the list.
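Generically, recursive feature elimination is a loop: rank features by importance, drop the weakest, re-score the reduced set, and keep the best-scoring subset seen. The Python skeleton below illustrates that loop only; `importance` and `score` are placeholder callables, not caret's resampling machinery:

```python
def rfe(features, importance, score):
    """Recursive feature elimination skeleton.

    importance(subset) -> {feature: weight}; score(subset) -> number (higher is better).
    Returns the best-scoring subset encountered and its score.
    """
    subset = list(features)
    best_subset, best_score = subset[:], score(subset)
    while len(subset) > 1:
        ranks = importance(subset)                     # refit and rank
        weakest = min(subset, key=lambda f: ranks[f])  # least important feature
        subset.remove(weakest)                         # eliminate it
        s = score(subset)
        if s >= best_score:
            best_subset, best_score = subset[:], s
    return best_subset, best_score

# Toy run: "x3" carries no signal, and the score penalizes subset size.
weights = {"x1": 6, "x2": 3, "x3": 0}
imp = lambda sub: {f: weights[f] for f in sub}
sc = lambda sub: sum(weights[f] for f in sub) - len(sub)
print(rfe(["x1", "x2", "x3"], imp, sc))  # (['x1', 'x2'], 7)
```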
Value
A list containing the outputs highlighted hereunder:
• "rfe_model_fit_results" a list of the model fit results. Including the optimal features
• "rfe_reduced_features" a data.frame object with the reduced variables and data
• "rfe_original_data" a data.frame object with the original data passed for manual exclusion
based on fit outputs
• "rfe_reduced_data"output of setting the alter_df=TRUE will remove the features / IVs from
the data.frame
References
Kuhn (2021) Recursive Feature Elimination. https://topepo.github.io/caret/recursive-feature-elimination.html
Examples
library(caret)
library(tibble)
library(FeatureTerminatoR)
library(dplyr)
df <- iris
# Passing in the indexes as slices x values located in index 1:4 and y value in location 5
rfe_fit <- rfeTerminator(df, x_cols= 1:4, y_cols=5, alter_df = TRUE, eval_funcs = rfFuncs)
#Explore the optimal model results
print(rfe_fit$rfe_model_fit_results)
# Explore the optimal variables selected
print(rfe_fit$rfe_model_fit_results$optVariables)
# Explore the original data passed to the frame
print(head(rfe_fit$rfe_original_data))
# Explore the data adapted with the less important features removed
print(head(rfe_fit$rfe_reduced_data)) |
@reframe/crossroads | npm | JavaScript | Reframe + Crossroads = ❤️
`@reframe/crossroads`
===
Routing with [Crossroads.js](https://github.com/millermedeiros/crossroads.js).
###
Usage
Add `@reframe/crossroads` to your `reframe.config.js`:
```
module.exports = {
$plugins: [
require('@reframe/react-kit'),
require('@reframe/crossroads')
]
};
```
###
Example
```
// /plugins/crossroads/example/reframe.config.js
module.exports = {
$plugins: [
require('@reframe/react-kit'),
require('@reframe/crossroads')
]
};
```
```
// /plugins/crossroads/example/pages/hello.config.js
import React from 'react';
const HelloPage = {
route: '/hello/{name}',
view: props => {
const name = props.route.args.name;
return <div>Hello {name}</div>;
},
};
export default HelloPage;
```
```
// /plugins/crossroads/example/pages/landing.config.js
import React from 'react';
export default {
route: '/',
view: () => (
<div>
<a href="/hello/lisa">/hello/lisa</a>
<br/>
<a href="/hello/jon">/hello/jon</a>
</div>
),
};
```
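Under the hood, Crossroads.js matches the `{name}` segment of `route: '/hello/{name}'` against the URL and hands the captured value to the view as `props.route.args.name`. The matching idea can be sketched in a few lines of plain JavaScript (a hypothetical helper for illustration, not the library's API):

```javascript
// Hypothetical helper (not the Crossroads.js API) showing how a '{name}'
// pattern can be compiled to a regular expression and matched against a URL.
function matchRoute(pattern, path) {
  const names = [];
  // '/hello/{name}' -> '/hello/([^/]+)', remembering parameter names in order.
  const regexSrc = pattern.replace(/\{(\w+)\}/g, (_, n) => {
    names.push(n);
    return '([^/]+)';
  });
  const m = path.match(new RegExp('^' + regexSrc + '$'));
  if (m === null) return null; // URL does not match this route
  const args = {};
  names.forEach((n, i) => { args[n] = m[i + 1]; });
  return args; // e.g. { name: 'lisa' }
}

console.log(matchRoute('/hello/{name}', '/hello/lisa')); // { name: 'lisa' }
console.log(matchRoute('/hello/{name}', '/bye/lisa'));   // null
```

Crossroads itself also supports optional segments and rest parameters; this sketch covers only the required `{param}` form.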
|
docassembly | ctan | TeX | # The docassembly package
<NAME>
Email: <EMAIL>
processed June 19, 2021
###### Contents
* 1 Introduction
* 2 Documentation
* 3 Package requirements
* 4 Document Assembly Methods
* 4.1 The docassembly environment
* 4.2 Supported Assembly JS API
* 4.2.1 \addWatermarkFromFile
* 4.2.2 \addWatermarkFromText
* 4.2.3 \importIcon
* 4.2.4 \importSound
* 4.2.5 \appopenDoc
* 4.2.6 \docSaveAs
* 4.2.7 \insertPages
* 4.2.8 \extractPages
* 4.2.9 \createTemplate
* 4.2.10 \importDataObject
* 4.2.11 \executeSave
* 4.2.12 \mailDoc
* 4.2.13 Signature related commands
* 5 Index
* 6 Change History
* 1 ⟨*package⟩
### Supported Assembly JS API
These are convenience commands - called JavaScript helper commands - for executing security-restricted JavaScript. The JS methods are defined in the aeb_pro.js file, kept as folder JavaScript. These commands are executed in a verbatim environment where '\' is still the escape character. Each of the JavaScript helper commands expects a left parenthesis '(' following the command name _on the same line_ as the command name.4 See the example below for correct usage.
Footnote 4: This requirement is consistent with JavaScript function usage.
```
\begin{docassembly}
\addWatermarkFromFile({
  bOnTop: false,
  cDIPath: "/C/AcroTeX/AcroPackages/ManualBGs/Manual_BG_Print_AeB.pdf"
});
\end{docassembly}
```
For each of the methods below, see the JavaScript for Acrobat API Reference.
The command \theDocObject is normally set to this, meaning the current document. You may need to set it to some other doc object if you are trying to access a doc object other than the current one. The following are support commands for changing \theDocObject from within the docassembly environment.
All the JavaScript helper commands use \theDocObject, which is defined to be the this object. Changing it within the docassembly environment is difficult; the next command aids with that problem.
```
\let\ap@mrk\@empty
\def\ap@gobtocomma#1,{}
\providecommand\chngDocObjectTo[2]{\def#1##1\ap@mrk{(#2,\ap@gobtocomma##1}}
```
The above defines a new command given by #1. The command has one argument, which is all content up to the terminating mark \ap@mrk. The trick to removing \theDocObject and replacing it with #2 is that, in the above definition, we insert '(#2' followed by \ap@gobtocomma, which absorbs everything through the first comma - that is, the left parenthesis that opens the argument and \theDocObject - followed by all the remaining content (##1).
```
\def\ap@TF{aebTrustedFunctions}
An example of usage of \chngDocObjectTo is \chngDocObjectTo{\newDO}{doc}, expanded above the docassembly environment. Later, we can say,
\chngDocObjectTo{\newDO}{doc}
\begin{docassembly}
...
\docSaveAs\newDO({cPath: _path});
...
\end{docassembly}
That is, \newDO is placed immediately after any of the commands below that use \theDocObject.
\theDocObject This command is used in the definition of all JavaScript helper commands, as seen in the definition of \DeclareJSHelper below. It is set to the doc object this. It can be changed using \chngDocObjectTo, as described above.
\def\theDocObject{this}
\DeclareJSHelper A general purpose command for defining what I am calling JavaScript helper commands.
\providecommand\DeclareJSHelper[2]{%
  \def#1##1({\ap@TF##1(\theDocObject,#2,\ap@mrk}}
For example, where we declare \DeclareJSHelper{\docSaveAs}{aebDocSaveAs} below, the declaration defines a new command, \docSaveAs:
\def\docSaveAs#1({\ap@TF#1(\theDocObject,aebDocSaveAs,\ap@mrk}
Note that the argument of \docSaveAs is delimited by the left parenthesis, thus #1 is everything through that opening parenthesis. This approach allows more flexibility in the definition; there can be spaces following the command name, \docSaveAs ({path: _path}), for example.
\retAbsPathAs Several methods require an absolute path to the current folder. The code is,
var _path=this.path;
var pos=_path.lastIndexOf("/");
_path=_path.substring(0,pos);
We simplify this code for the document author in the form of the command \retAbsPathAs(⟨js-var⟩);, where ⟨js-var⟩ is a JavaScript variable that will hold the absolute path to the current folder; e.g., \retAbsPathAs(_path); expands to the above code.
\def\retAbsPathAs(#1){var #1=this.path;^^J%
  var pos=#1.lastIndexOf("/");^^J%
  #1=#1.substring(0,pos)}
We now begin the documentation of the "helper" commands. For documentation of the arguments of these commands, refer to the JavaScript for Acrobat API Reference.
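Putting the pieces together: a short assembly block can compute the current folder with \retAbsPathAs and save a copy of the document beside it with \docSaveAs (an illustrative sketch, not taken from the package's demo files; the file name output.pdf is a placeholder):

```latex
\begin{docassembly}
\retAbsPathAs(_path);
\docSaveAs({ cPath: _path + "/output.pdf" });
\end{docassembly}
```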
#### 4.2.1 \addWatermarkFromFile
\addWatermarkFromFile This is the method Doc.addWatermarkFromFile.
**Demo file:** watermark-file.tex
\DeclareJSHelper{\addWatermarkFromFile}{aebAddWatermarkFromFile}
#### 4.2.2 \addWatermarkFromText
This is the method Doc.addWatermarkFromText.
**Demo file:** watermark-text.tex
```
\DeclareJSHelper{\addWatermarkFromText}{aebAddWatermarkFromText}
```
#### 4.2.3 \importIcon
This is the method Doc.importIcon.
**Demo file:** import-icons.tex
```
\DeclareJSHelper{\importIcon}{aebImportIcon}
```
#### 4.2.4 \importSound
This is the method Doc.importSound.
**Demo file:** import-sound.tex
```
\DeclareJSHelper{\importSound}{aebImportSound}
```
#### 4.2.5 \appopenDoc
This is the method app.openDoc. Opens a document and returns a Doc object.
**Demo file:** open-doc.tex
```
\DeclareJSHelper{\appopenDoc}{aebAppOpenDoc}
```
#### 4.2.6 \docSaveAs
This is the method Doc.saveAs. Saves a PDF, or converts it to another format, to a specified path.
**Demo file:** doc-saveas.tex
```
\DeclareJSHelper{\docSaveAs}{aebDocSaveAs}
```
#### 4.2.7 \insertPages
This is the method Doc.insertPages.
**Demo file:** insert-pages.tex
```
\DeclareJSHelper{\insertPages}{aebInsertPages}
```
#### 4.2.8 \extractPages
This is the method Doc.extractPages.
**Demo file:** extract-pages.tex
```
\DeclareJSHelper{\extractPages}{aebExtractPages}
```
#### 4.2.9 \createTemplate
This is the method Doc.createTemplate. This is a feature that I had great hopes for. With templates, you can create hidden pages that can be made visible (in AA); once a template is created, it can be spawned and deleted in AR.
```
\DeclareJSHelper{\createTemplate}{aebCreateTemplate}
```
#### 4.2.10 \importDataObject
This is the method Doc.importDataObject, used to attach files to a PDF; alternatively, \attachFile is a more intuitive name for the operation performed.
**Demo file:** attach-files.tex
```
\DeclareJSHelper{\importDataObject}{aebImportDataObject}
\DeclareJSHelper{\attachFile}{aebImportDataObject}
```
#### 4.2.11 \executeSave
To save the document, use this at the _end of the docassembly environment_. Usage: \executeSave(). The \@gobble used below absorbs the comma that \DeclareJSHelper places immediately after the second argument.
```
\DeclareJSHelper{\executeSave}{aebSaveAs,"Save"\@gobble}
```
#### 4.2.12 \mailDoc
This is the method Doc.mailDoc. Attach the current PDF and email it to someone. Must have a default mail client registered by Acrobat under Edit > Preferences > Mail Accounts.
**Demo file:** mail-doc.tex
```
\DeclareJSHelper{\mailDoc}{aebMailDoc}
```
#### 4.2.13 Signature related commands
**Demo files:** sign.tex, certifyinvisible.tex
The \sigInfo command is used for entering signing information into what will become the oSigInfo object. \signatureSign takes no arguments, but uses the info entered by \sigInfo. An example is
\begin{docassembly}
\sigInfo{
  cSigFieldName: "sigOfDPS",
  oHandler: security.PPKLiteHandler,
  cert: "<name>.pfx",
  password: "<password>",
  oInfo: {
    location: "Niceville, FL",
    reason: "I am approving this document",
    contactInfo: "<EMAIL>",
    appearance: "My Signature"
  }
};
\signatureSign
\end{docassembly}
The \sigInfo command is a LaTeX interface to creating the oSigInfo object.
\newcommand{\sigInfo}{var oSigInfo=}
\def\sigFieldObj(#1){var oSigField=this.getField(#1)}
For \signatureSetSeedValue, the field object is required. This function assumes that the JavaScript variable oSigField is the field object. For example,
\begin{docassembly} \sigFieldObj("sigOfDPS"); \signatureSetSeedValue({ lockDocument:true, appearanceFilter:"My Signature", reasons:["This is a reason", "This is a better reason"], flags:0x80|0x100|0x8 }); \end{docassembly} The signatureSetSeedValue() method seeds a signature field with various default values available to the signer.
\begin{docassembly}
var sv={
  mdp: "defaultAndComments",
  reasons: ["This is a reason", "This is a better reason"],
  flags: 0x80|0x100|0x8
};
\sigFieldObj("sigOfDPS");
\signatureSetSeedValue(sv);
\end{docassembly}
\signatureSetSeedValue This is the Field.signatureSetSeedValue method, applied to the field object held in oSigField.
\def\signatureSetSeedValue#1({%
  \ap@TF#1(oSigField,aebSignatureSetSeedValue,\ap@mrk}
\signatureSign This is the Field.signatureSign method. The field name is passed to this method through the cSigFieldName property of the oSigInfo object. The function \signatureSign takes the info in the oSigInfo object, gets the security handler object, logs into the handler, calls signatureSetSeedValue if the sv property is in the oSigInfo object, and signs the field.
\begin{defineJS}[\makecmt\%\dfnJSCR{^^J}]{\signatureSign}
if (typeof oSigInfo.oHandler == "undefined")
  oSigInfo.oHandler = security.PPKLiteHandler;
var engine = aebTrustedFunctions(security,
  aebSecurityGetHandler, oSigInfo.oHandler);
var path2Cert = (typeof oSigInfo.path2Cert == "undefined") ?
  aebTrustedFunctions(this, aebAppGetPath,
    {cCategory: "user"}) + "/Security" + "/" + oSigInfo.cert :
  oSigInfo.path2Cert;
aebTrustedFunctions(engine, aebSecurityHandlerLogin,
  {cPassword: oSigInfo.password, cDIPath: path2Cert});
var oSigField = this.getField(oSigInfo.cSigFieldName);
oSigInfo.oInfo.password = oSigInfo.password;
if (typeof oSigInfo.sv != "undefined") {
  for (var o in oSigInfo.sv)
    oSigInfo.oInfo[o] = oSigInfo.sv[o];
}
var oSigArgs = {oSig: engine, oInfo: oSigInfo.oInfo};
if (typeof oSigInfo.cLegalAttest != "undefined")
  oSigArgs.cLegalAttest = oSigInfo.cLegalAttest;
if (typeof oSigInfo.cDIPath != "undefined")
  oSigArgs.cDIPath = oSigInfo.cDIPath;
if (typeof oSigInfo.bUI != "undefined")
  oSigArgs.bUI = oSigInfo.bUI;
aebTrustedFunctions(oSigField, aebSignatureSign, oSigArgs);
\end{defineJS}
```
\certifyInvisibleSign
```
This is the Doc.certifyInvisibleSign method. This command uses the trusted version of certifyInvisibleSign to sign. The command requires that \sigInfo is populated appropriately.
```
\begin{docassembly}
\sigInfo{
  cert: "<name>.pfx",
  password: "<password>",
  cLegalAttest: "Certified using JavaScript",
  bUI: false,
  oInfo: {
    location: "Niceville, FL",
    reason: "I am certifying this document",
    mdp: "defaultAndComments"
  }
};
\certifyInvisibleSign
\end{docassembly}
```
\begin{defineJS}[\makecmt\%\dfnJSCR{^^J}]{\certifyInvisibleSign}
if (typeof oSigInfo.oHandler == "undefined")
  oSigInfo.oHandler = security.PPKLiteHandler;
var engine = aebTrustedFunctions(security,
  aebSecurityGetHandler, oSigInfo.oHandler);
var path2Cert = aebTrustedFunctions(this, aebAppGetPath,
  {cCategory: "user"}) + "/Security" + "/" + oSigInfo.cert;
aebTrustedFunctions(engine, aebSecurityHandlerLogin,
  {cPassword: oSigInfo.password, cDIPath: path2Cert});
oSigInfo.oInfo.password = oSigInfo.password;
var oSigArgs = {oSig: engine, oInfo: oSigInfo.oInfo};
if (typeof oSigInfo.cLegalAttest != "undefined")
  oSigArgs.cLegalAttest = oSigInfo.cLegalAttest;
if (typeof oSigInfo.cDIPath != "undefined")
  oSigArgs.cDIPath = oSigInfo.cDIPath;
if (typeof oSigInfo.bUI != "undefined")
  oSigArgs.bUI = oSigInfo.bUI;
aebTrustedFunctions(this, aebCertifyInvisibleSign, oSigArgs);
\end{defineJS}
\da@restoreCats
⟨/package⟩
|
@storybook/vue-vite | npm | JavaScript | [Storybook for Vue 2 and Vite](#storybook-for-vue-2-and-vite)
===
Storybook for Vue 2 is a UI development environment for your Vue 2 components.
With it, you can visualize different states of your UI components and develop them interactively.
Storybook runs outside of your app.
So you can develop UI components in isolation without worrying about app specific dependencies and requirements.
[Getting Started](#getting-started)
---
```
cd my-vue-app npx storybook@latest init
```
For more information visit: [storybook.js.org](https://storybook.js.org)
[Starter Storybook-for-Vue Boilerplate project with](#starter-storybook-for-vue-boilerplate-project-with-vuetify-material-component-framework) [Vuetify](https://github.com/vuetifyjs/vuetify) Material Component Framework
---
<https://github.com/white-rabbit-japan/vue-vuetify-storybook>
Storybook also comes with a lot of [addons](https://storybook.js.org/addons) and a great API to customize as you wish.
You can also build a [static version](https://storybook.js.org/docs/vue/sharing/publish-storybook) of your Storybook and deploy it anywhere you want.
[Vue Notes](#vue-notes)
---
* When using global custom components or extensions (e.g., `Vue.use`). You will need to declare those in the `./storybook/preview.js`.
[Known Limitations](#known-limitations)
---
In Storybook story and decorator components, you cannot access the Vue instance in factory functions for default prop values:
```
{
props: {
foo: {
default() {
return this.bar; // does not work!
}
}
}
}
```
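The reason is plain JavaScript `this` binding rather than anything Storybook-specific: the default-value factory is invoked as a detached function, so `this` is not the component instance. A minimal plain-JS illustration of the same effect (hypothetical object and property names):

```javascript
// Plain-JS illustration (hypothetical names): a default-value factory is
// invoked as a detached function, so `this` is not the component instance.
const component = {
  bar: 42,
  makeDefault() { return this.bar; }, // works only when called as a method
};

console.log(component.makeDefault()); // 42

const factory = component.makeDefault; // detached, as a framework does internally
let result;
try {
  result = factory(); // non-strict mode: `this` is the global object, so `bar` is undefined
} catch (e) {
  result = undefined; // strict mode: reading `this.bar` throws instead
}
console.log(result); // undefined — the factory can no longer see `bar`
```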
|
optBiomarker | cran | R | Package ‘optBiomarker’
October 14, 2022
Type Package
Title Estimation of Optimal Number of Biomarkers for Two-Group
Microarray Based Classifications at a Given Error Tolerance
Level for Various Classification Rules
Version 1.0-28
Date 2020-12-23
Depends R (>= 2.10), rpanel
Imports MASS, randomForest, e1071,ipred, msm, rgl, Matrix
Author <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description Estimates optimal number of biomarkers for two-group
classification based on microarray data.
License GPL (>= 2)
Repository CRAN
Date/Publication 2021-01-18 23:50:02 UTC
NeedsCompilation no
R topics documented:
optBiomarker-package
classificationError
errorDbase
optimiseBiomarker
realBiomarker
simData
optBiomarker-package R package for estimating optimal number of biomarkers at a given
error tolerance level for various classification rules
Description
Using an interactive control panel (rpanel) and 3D real-time rendering system (rgl), this package
provides a user friendly GUI for estimating the minimum number of biomarkers (variables) needed
to achieve a given level of accuracy for two-group classification problems based on microarray data.
Details
The function optimiseBiomarker is a user friendly GUI for interrogating the database of
leave-one-out cross-validation errors, errorDbase, to estimate the optimal number of biomarkers for
microarray based classifications. The database is built on the basis of simulated data using the
classificationError function. The function simData is used for simulating microarray data
for various combinations of factors such as the number of biomarkers, training set size, biological
variation, experimental variation, fold change, replication, and correlation.
Author(s)
<NAME>, <NAME>, <NAME>
Maintainer: <NAME> <<EMAIL>>.
References
Khondoker, <NAME>., <NAME>., <NAME>., <NAME>. et al. (2010). Multifactorial
analysis of class prediction error: estimating optimal number of biomarkers for various
classification rules. Journal of Bioinformatics and Computational Biology, 8, 945-965.
Breiman, L. (2001). Random Forests, Machine Learning 45(1), 5–32.
Chang, Chih-Chung and Lin, Chih-Jen: LIBSVM: a library for Support Vector Machines,
https://www.csie.ntu.edu.tw/~cjlin/libsvm/.
<NAME>. (1996). Pattern Recognition and Neural Networks. Cambridge: Cambridge University
Press.
<NAME>. and <NAME>. (1997). Improvements on Cross-Validation: The .632+ Bootstrap
Estimator. Journal of the American Statistical Association 92(438), 548–560.
Bowman, A., <NAME>., <NAME>. and <NAME>. (2007). rpanel: Simple interactive
controls for R functions using the tcltk package. Journal of Statistical Software 17(9).
See Also
simData classificationError optimiseBiomarker
Examples
if(interactive()){
data(errorDbase)
optimiseBiomarker(error=errorDbase)
}
classificationError Estimation of misclassification errors (generalisation errors) based on
statistical and various machine learning methods
Description
Estimates misclassification errors (generalisation errors), sensitivity and specificity using
cross-validation, bootstrap and 632plus bias corrected bootstrap methods based on Random Forest,
Support Vector Machines, Linear Discriminant Analysis and k-Nearest Neighbour methods.
Usage
## S3 method for class 'data.frame'
classificationError(
formula,
data,
method=c("RF","SVM","LDA","KNN"),
errorType = c("cv", "boot", "six32plus"),
senSpec=TRUE,
negLevLowest=TRUE,
na.action=na.omit,
control=control.errorest(k=NROW(na.action(data)),nboot=100),
...)
Arguments
formula A formula of the form lhs ~ rhs relating the response (class) variable and the
explanatory variables. See lm for more detail.
data A data frame containing the response (class membership) variable and the
explanatory variables in the formula.
method A character vector of length 1 to 4 representing the classification methods to
be used. Can be one or more of "RF" (Random Forest), "SVM" (Support Vector
Machines), "LDA" (Linear Discriminant Analysis) and "KNN" (k-Nearest Neighbour).
Defaults to all four methods.
errorType A character vector of length 1 to 3 representing the type of estimators to be
used for computing misclassification errors. Can be one or more of the "cv"
(cross-validation), "boot" (bootstrap) and "632plus" (632plus bias corrected
bootstrap) estimators. Defaults to all three estimators.
senSpec Logical. Should sensitivity and specificity (for the cross-validation estimator
only) be computed? Defaults to TRUE.
negLevLowest Logical. Does the lowest of the ordered levels of the class variable represent the
negative control? Defaults to TRUE.
na.action Function which indicates what should happen when the data contains NAs;
defaults to na.omit.
control Control parameters of the function errorest.
... additional parameters to method.
Details
In the current version of the package, estimation of sensitivity and specificity is limited to the
cross-validation estimator only. For LDA the sample size must be greater than the number of explanatory
variables to avoid singularity. The function classificationError does not check if this is
satisfied, but the underlying function lda produces warnings if this condition is violated.
Value
Returns an object of class classificationError with components
call The call of the classificationError function.
errorRate A length(errorType) by length(method) matrix of classification errors.
rocData A 2 by length(method) matrix of sensitivities (first row) and specificities
(second row).
Author(s)
<NAME>, <NAME>, <NAME>
Maintainer: <NAME> <<EMAIL>>.
References
Khondoker, <NAME>., <NAME>, <NAME>., <NAME>., <NAME>. et al. (2010). Multifactorial
analysis of class prediction error: estimating optimal number of biomarkers for various
classification rules. Journal of Bioinformatics and Computational Biology, 8, 945-965.
<NAME>. (2001). Random Forests, Machine Learning 45(1), 5–32.
<NAME> and <NAME>: LIBSVM: a library for Support Vector Machines,
https://www.csie.ntu.edu.tw/~cjlin/libsvm/.
<NAME>. (1996). Pattern Recognition and Neural Networks. Cambridge: Cambridge University
Press.
<NAME>. and <NAME>. (1997). Improvements on Cross-Validation: The .632+ Bootstrap
Estimator. Journal of the American Statistical Association 92(438), 548–560.
See Also
simData
Examples
## Not run:
mydata<-simData(nTrain=30,nBiom=3)$data
classificationError(formula=class~., data=mydata)
## End(Not run)
errorDbase Database of leave-one-out cross validation errors for various
combinations of data characteristics
Description
This is a 7-dimensional array (database) of leave-one-out cross validation errors for Random Forest,
Support Vector Machines, Linear Discriminant Analysis and k-Nearest Neighbour classifiers. The
database is the basis for estimating the optimal number of biomarkers at a given error tolerance
level using the optimiseBiomarker function. See Details for more information.
Usage
data(errorDbase)
Format
7-dimensional numeric array.
Details
The following table gives the dimension names, lengths and values/levels of the data object errorDbase.
Dimension name Length Values/Levels
No. of biomarkers 14 (1-6, 7, 9, 11, 15, 20, 30, 40, 50, 100)
Size of replication 5 (1, 3, 5, 7, 10)
Biological variation (σb ) 4 (0.5, 1.0, 1.5, 2.5)
Experimental variation (σe ) 4 (0.1, 0.5, 1.0, 1.5)
Minimum (Average) fold change 4 (1 (1.73), 2 (2.88), 3 (4.03), 5 (6.33))
Training set size 5 (10, 20, 50, 100, 250)
Classification method 3 (Random Forest, Support Vector Machine, k-Nearest Neighbour)
We have a plan to expand the database to an 8-dimensional one by adding another dimension to store
error rates at different level of correlation between biomarkers. Length of each dimension will also
be increased leading to a bigger database with a wider coverage of the parameter space. Current
version of the database contain error rates for independent (correlation = 0) biomarkers only. Also,
it does not contain error rates for Linear Discriminant Analysis, which we plan to implement in the
next release of the package. With the current version of the database, optimal number of biomarkers
can be estimated using the optimiseBiomarker function for any intermediate values of the factors
represented by the dimensions of the database.
Author(s)
<NAME>, <NAME>, <NAME>
Maintainer: <NAME> <<EMAIL>>.
References
<NAME>., <NAME>, <NAME>., <NAME>., <NAME>. et al. (2010). Multifactorial
analysis of class prediction error: estimating optimal number of biomarkers for various
classification rules. Journal of Bioinformatics and Computational Biology, 8, 945-965.
See Also
optimiseBiomarker
optimiseBiomarker Estimates optimal number of biomarkers at a given error tolerance
level for various classification rules
Description
Using an interactive control panel (see rpanel) and 3D real-time rendering system (rgl), this package
provides a user friendly GUI for estimating the minimum number of biomarkers (variables) needed
to achieve a given level of accuracy for two-group classification problems based on microarray data.
Usage
optimiseBiomarker (error,
errorTol = 0.05,
method = "RF", nTrain = 100,
sdB = 1.5,
sdW = 1,
foldAvg = 2.88,
nRep = 3)
Arguments
error The database of classification errors. See errorDbase for details.
errorTol Error tolerance limit.
method Classification method. Can be one of "RF", "SVM", and "KNN" for Random
Forest, Support Vector Machines, and k-Nearest Neighbour respectively.
nTrain Training set size, i.e., the total number of biological samples in group 1 and
group 2.
sdB Biological variation (σb ) of data in log (base 2) scale.
sdW Experimental (technical) variation (σe ) of data in log (base 2) scale.
foldAvg Average fold change of the biomarkers.
nRep Number of technical replications.
Details
The function optimiseBiomarker is a user friendly GUI for interrogating the database of
leave-one-out cross-validation errors, errorDbase, to estimate the optimal number of biomarkers for
microarray based classifications. The database is built on the basis of simulated data using the
classificationError function. The function simData is used for simulating microarray data
for various combinations of factors such as the number of biomarkers, training set size, biological
variation, experimental variation, fold change, replication, and correlation.
Author(s)
<NAME>, <NAME>, <NAME>
Maintainer: <NAME> <<EMAIL>>.
References
Khondoker, <NAME>., <NAME>, <NAME>., <NAME>., <NAME>. et al. (2010). Multifactorial
analysis of class prediction error: estimating optimal number of biomarkers for various
classification rules. Journal of Bioinformatics and Computational Biology, 8, 945-965.
<NAME>. (2001). Random Forests, Machine Learning 45(1), 5–32.
<NAME> and <NAME>: LIBSVM: a library for Support Vector Machines,
https://www.csie.ntu.edu.tw/~cjlin/libsvm/.
<NAME>. (1996). Pattern Recognition and Neural Networks. Cambridge: Cambridge University
Press.
<NAME>. and <NAME>. (1997). Improvements on Cross-Validation: The .632+ Bootstrap
Estimator. Journal of the American Statistical Association 92(438), 548–560.
<NAME>., <NAME>., <NAME>. and <NAME>. (2007). rpanel: Simple interactive
controls for R functions using the tcltk package. Journal of Statistical Software 17(9).
See Also
simData classificationError
Examples
if(interactive()){
data(errorDbase)
optimiseBiomarker(error=errorDbase)
}
realBiomarker A set of 54359 median gene expressions in log (base 2) scale
Description
This data set contains a set of 54359 log base 2 gene expression values from a neonatal whole
blood gene expression study described in Smith et al. (2007). The data represent the median
of 28 microarrays corresponding to 28 control (healthy) patients of the neonatal study. This data
set is used as a base expressions set for simulating biomarker data using simData function of the
optBiomarker package.
Usage
data(realBiomarker)
Format
A vector of 54359 gene expressions in log (base 2) scale.
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>.,
<NAME>., <NAME>., <NAME>.and <NAME>. (2007). Quantitative assessment of whole
blood RNA as a potential biomarker for infectious disease. Analyst 132, 1200–1209.
<NAME>., <NAME>., <NAME>., <NAME>. et al. (2010). Multifactorial
analysis of class prediction error: estimating optimal number of biomarkers for various
classification rules. Journal of Bioinformatics and Computational Biology, 8, 945-965.
simData Simulation of microarray data
Description
The function simulates microarray data for two-group comparison with user supplied parameters
such as number of biomarkers (genes or proteins), sample size, biological and experimental
(technical) variation, replication, differential expression, and correlation between biomarkers.
Usage
simData(nTrain=100,
        nGr1=floor(nTrain/2),
        nBiom=50, nRep=3,
        sdW=1.0,
        sdB=1.0,
        rhoMax=NULL, rhoMin=NULL, nBlock=NULL, bsMin=3, bSizes=NULL, gamma=NULL,
        sigma=0.1, diffExpr=TRUE,
        foldMin=2,
        orderBiom=TRUE,
        baseExpr=NULL)
Arguments
nTrain Training set size, i.e., the total number of biological samples in group 1 (nGr1)
and group 2.
nGr1 Size of group 1. Defaults to floor(nTrain/2).
nBiom Number of biomarkers (genes, probes or proteins).
nRep Number of technical replications.
sdW Experimental (technical) variation (σe ) of data in log (base 2) scale.
sdB Biological variation (σb ) of data in log (base 2) scale.
rhoMax Maximum Pearson's correlation coefficient between biomarkers. To ensure
positive definiteness, allowed values are restricted between 0 and 0.95 inclusive. If
NULL, set to runif(1,min=0.6,max=0.8).
rhoMin Minimum Pearson's correlation coefficient between biomarkers. To ensure
positive definiteness, allowed values are restricted between 0 and 0.95 inclusive. If
NULL, set to runif(1,min=0.2,max=0.4).
nBlock Number of blocks in the block diagonal (Hub-Toeplitz) correlation matrix. If
NULL, set to 1 for nBiom<5 and randomly selected from c(1:floor(nBiom/bsMin))
for nBiom>=5.
bsMin Minimum block size. bsMin=3 by default.
bSizes A vector of length nBlock representing the block sizes (should sum to nBiom).
If NULL, set to c(bs+mod,rep(bs,nBlock-1)), where bs is the integer part of
nBiom/nBlock and mod is the remainder after integer division.
gamma Specifies a correlation structure. If NULL, assumes independence. gamma=0 indicates
a single block exchangeable correlation matrix with constant correlation
rho=0.5*(rhoMin+rhoMax). A value greater than zero indicates a block diagonal
(Hub-Toeplitz) correlation matrix with decline rate determined by the value of
gamma. The decline rate is linear for gamma=1.
sigma Standard deviation of the normal distribution (before truncation) from which fold
changes are generated. See details.
diffExpr Logical. Should systematic difference be introduced between the data of the two
groups?
foldMin Minimum value of fold changes. See details.
orderBiom Logical. Should columns (biomarkers) be arranged in order of differential
expression?
baseExpr A vector of length nBiom to be used as base expressions µ. See realBiomarker
for details.
Details
Differential expressions are introduced by adding zδ to the data of group 2, where δ values are
generated from a truncated normal distribution and z is randomly selected from (-1,1) to characterise
up- or down-regulation.
Assuming that Y is N(µ, σ²), and A = [a1, a2] is a subset of −Inf < y < Inf, the conditional
distribution of Y given A is called the truncated normal distribution:
f(y, µ, σ) = (1/σ)φ((y − µ)/σ)/(Φ((a2 − µ)/σ) − Φ((a1 − µ)/σ))
for a1 <= y <= a2, and 0 otherwise,
where µ is the mean of the original Normal distribution before truncation, σ is the corresponding
standard deviation, a2 is the upper truncation point, a1 is the lower truncation point, φ(x) is the
density of the standard normal distribution, and Φ(x) is the distribution function of the standard
normal distribution. For the simData function, we consider a1 = log2(foldMin) and a2 = Inf. This
ensures that the biomarkers are differentially expressed by a fold change of foldMin or more.
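The truncation scheme above is easy to prototype: the sketch below draws δ values from N(µ, σ) restricted to [a1, ∞) by simple rejection sampling (an illustration of the idea only, not the package's R implementation; the choice of µ is hypothetical):

```python
import math
import random

def truncated_normal(mu, sigma, a1, n, seed=42):
    """Draw n samples from N(mu, sigma) conditioned on y >= a1, by rejection."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        y = rng.gauss(mu, sigma)
        if y >= a1:  # keep only draws at or above the lower truncation point
            samples.append(y)
    return samples

# As in simData: taking a1 = log2(foldMin) guarantees each delta corresponds
# to a fold change of at least foldMin (mu below is an illustrative choice).
fold_min = 2
a1 = math.log2(fold_min)  # = 1.0
deltas = truncated_normal(mu=a1, sigma=0.5, a1=a1, n=1000)
print(min(deltas) >= a1)  # True
```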
Value
A dataframe of dimension nTrain by nBiom+1. The first column is a factor (class) representing
the group memberships of the samples.
Author(s)
<NAME>, <NAME>, <NAME>
Maintainer: <NAME> <<EMAIL>>.
References
Khondoker, <NAME>., <NAME>., <NAME>., <NAME>. et al. (2010). Multifactorial
analysis of class prediction error: estimating optimal number of biomarkers for various
classification rules. Journal of Bioinformatics and Computational Biology, 8, 945-965.
See Also
classificationError
Examples
simData(nTrain=10, nBiom=3)
[@blockly/shadow-block-converter](#blocklyshadow-block-converter-)
===
A [Blockly](https://www.npmjs.com/package/blockly) plugin for automatically converting shadow blocks to real blocks when the user edits them.
[Installation](#installation)
---
### [Yarn](#yarn)
```
yarn add @blockly/shadow-block-converter
```
### [npm](#npm)
```
npm install @blockly/shadow-block-converter --save
```
[Usage](#usage)
---
This plugin exports a function called `shadowBlockConversionChangeListener`. If you add it as a change listener to your blockly workspace then any shadow block the user edits will be converted to a real block. See below for an example using it with a workspace.
### [JavaScript](#javascript)
```
import * as Blockly from 'blockly';
import {shadowBlockConversionChangeListener} from '@blockly/shadow-block-converter';
function start() {
const workspace = Blockly.inject('blocklyDiv', {toolbox: toolbox});
workspace.addChangeListener(shadowBlockConversionChangeListener);
}
```
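The plugin hooks into Blockly's standard change-listener mechanism: every workspace event is passed to each registered listener. A tiny stand-in workspace (a stub for illustration, not Blockly's real classes) shows the contract that `addChangeListener` relies on:

```javascript
// Stub workspace illustrating the change-listener contract: listeners
// registered with addChangeListener receive every event fired on the
// workspace. This is not Blockly itself, just the pattern in miniature.
class StubWorkspace {
  constructor() {
    this.listeners = [];
  }
  addChangeListener(fn) {
    this.listeners.push(fn);
    return fn;
  }
  fireChangeListener(event) {
    for (const fn of this.listeners) fn(event);
  }
}

const seen = [];
const ws = new StubWorkspace();
ws.addChangeListener((event) => seen.push(event.type));
ws.fireChangeListener({ type: 'change', blockId: 'b1' });
console.log(seen); // -> [ 'change' ]
```

In the real plugin, the listener inspects each event and, when it targets a shadow block, replaces that block with a real one.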
[License](#license)
---
Apache 2.0
Readme
---
### Keywords
* blockly
* shadow
* block
Stupidedi
===
[](http://travis-ci.org/irobayna/stupidedi) [](http://badge.fury.io/gh/irobayna%2Fstupidedi) [](https://codeclimate.com/github/irobayna/stupidedi) [](http://inch-ci.org/github/irobayna/stupidedi)

* [GitHub project](https://github.com/irobayna/stupidedi)
* [Human Documentation](https://github.com/irobayna/stupidedi/tree/master/doc)
* [API Documentation](http://rubydoc.info/github/kputnam/stupidedi/master/frames)
Stupidedi is a high-quality library for parsing, generating, validating,
and manipulating ASC X12 EDI documents. Very roughly, it's jQuery for EDI.
For those unfamiliar with ASC X12 EDI, it is a data format used to encode common business documents like purchase orders, delivery notices, and health care claims. It is similar to XML in some ways,
but precedes it by about 15 years; so if you think XML sucks, you will love to hate EDI.
### Credits
* **Author**: [<NAME>](https://github.com/kputnam)
* **Maintainer**: [<NAME>](https://github.com/irobayna)
What problem does it solve?
---
Transaction set specifications can be enormous, boring, and vague.
Trading partners can demand strict adherence (often to their own unique interpretation of the specification) of the documents you generate.
However, documents they generate themselves are often non-standard and require flexibility to parse them.
Stupidedi enables you to encode these transaction set specifications directly in Ruby. From these specifications, it will generate a parser to read incoming messages and a DSL to generate outgoing messages. This approach has a huge advantage over writing a parser from scratch, which can be error-prone and difficult to change.
Significant thought was put into the design of the library. Some of the features are described here.
### Robust tokenization and parsing
Delimiters, line breaks, and out-of-band data between interchanges are handled correctly. While many trading partners follow common conventions,
it only takes one unexpected deviation, like swapping the ":" and "~"
delimiters, to render a hand-written parser broken.
Stupidedi handles many edge cases that can only be anticipated by reading carefully between the lines of the X12 documentation.
### Instant feedback on error conditions
When generating EDI documents, validation is performed incrementally on each segment. This means the instant your client code violates the specification, an exception is thrown with a meaningful stack trace.
Other libraries only perform validation after the entire document has been generated, while some don't perform validation at all.
Stupidedi performs extensive validation and ensures well-formedness.
See the human readable documentation in doc/Generating.md for more details.
### Encourages readable client code
Unlike other libraries, generating documents doesn't involve naming obscure identifiers from the specification (like C001, DE522, or LOOP2000) for elements of the grammar that don't actually appear in the output.
Like HAML or Builder::XmlMarkup, the DSL for generating documents closely matches terminology from the problem domain. You can see in the example below that code looks very similar to an EDI document. This makes it easy to understand, assuming a reasonable familiarity with EDI.
### Efficient parsing and traversing
The parser is designed using immutable data structures, making it thread-safe for runtimes that can utilize multiple cores. In some cases immutability places higher demands on garbage collection; this has been somewhat mitigated with careful optimization.
**Note:** Further optimizations are planned for late 2021. Until then, modestly large files can consume excessive memory to the point all CPU time is spent in garbage collection. In more than ten years since this library was put into production, this seems to rarely happen in practice (as X12 files are most often very small), but until these improvements are merged, it can be difficult to work around. See [#65](https://github.com/irobayna/stupidedi/issues/65) for more discussion. The current work in progress is in the [`gh-65.3`](https://github.com/irobayna/stupidedi/tree/gh-65.3) branch.

| segments | 1.9.3 | 1.9.2 | rbx-head | jruby-1.6.6 |
| --- | --- | --- | --- | --- |
| 1680 | 2107.90 | 2007.17 | 503.14 | 317.52 |
| 3360 | 2461.54 | 2420.75 | 731.71 | 477.07 |
| 6720 | 2677.29 | 2620.90 | 950.63 | 685.15 |
| 13440 | 2699.88 | 2663.50 | 1071.00 | 897.50 |
| 26880 | 2558.54 | 2510.51 | 1124.50 | 1112.67 |
| 53760 | 2254.94 | 2164.16 | 1039.81 | 1292.62 |
These benchmarks aren't scientific by any means. They were performed on a MacBook Pro, 2.2GHz Core i7 with 8GB RAM, by reading the X222-HC837 fixture data files in serial. The results show the parser runtime is O(n), linear in the size of the input, but the drop in throughput at 13K+ segments is likely due to memory allocation. The steady increase in throughput on JRuby and Rubinius is probably attributable to optimizations performed by the JIT compiler.
Lastly, these results should approximate the performance of document generation with BuilderDsl, except that the BuilderDsl API should have less overhead, as it skips the tokenizer. On the other hand, BuilderDsl frequently queries the call stack to track the provenance of elements in the parse tree. In common real-world use,
custom application logic and database access are going to bottleneck performance,
rather than Stupidedi.
### Helps developers gain familiarity
Why not a commercial translator?
---
Commercial EDI translators solve a different set of problems. Many focus on translating between EDI and another data format, like XML, CSV, or a relational database. This isn't particularly productive, as you still have to unserialize the data to do anything with it.
What doesn't it solve?
---
It isn't a translator. It doesn't have bells and whistles, like the commercial EDI translators have, so it...
* Doesn't convert to/from XML, CSV, etc
* Doesn't transmit or receive files
* Doesn't do encryption
* Doesn't connect to your database
* Doesn't transmit over a dial-up modem
* Doesn't queue messages for delivery or receipt
* Doesn't generate acknowledgements
* Doesn't have a graphical interface
These features are orthogonal to the problem Stupidedi aims to solve, but they can certainly be implemented with other code taking advantage of Stupidedi.
Alternative libraries
---
Stupidedi is an opinionated library, and maybe you don't agree with it. Here are a few alternative libraries:
* <https://github.com/rjackson/X12>
* <https://github.com/dlabare/ruby-edi>
* <https://github.com/pstuteville/x12>
* <http://www.appdesign.com/x12parser/>
+ <https://github.com/swalberg/x12>
+ <https://github.com/mjpete3/x12>
* <https://github.com/mrcsparker/x12_parser>
* <http://edi4r.rubyforge.org/edi4r/>
* <http://edival.sourceforge.net/>
* <http://www.edidev.com/>
Examples
---
In addition to these brief examples, see sample code in the `notes` directory and the human-readable Markdown documentation in `doc`.
### Utilities
Pretty print the syntax tree
```
$ ./bin/edi-pp spec/fixtures/X222-HC837/1-good.txt
...
TableVal[Table 3 - Summary](
SegmentVal[SE: Transaction Set Trailer](
Nn.value[ E96: Number of Included Segments](45),
AN.value[ E329: Transaction Set Control Number](0021)))),
SegmentVal[GE: Functional Group Trailer](
Nn.value[ E97: Number of Transaction Sets Included](1),
Nn.value[ E28: Group Control Number](1))),
SegmentVal[IEA: Interchange Control Trailer](
Nn.value[ I16: Number of Included Functional Groups](1),
Nn.value[ I12: Interchange Control Number](905))))
49 segments 49 segments 0.140 seconds
```
Perform validation on a file
```
$ ./bin/edi-ed spec/fixtures/X222-HC837/1-bad.txt
[AK905(file spec/fixtures/X222-HC837/1-bad.txt,
line 16, column 4, is not an allowed value,
ID.value[ E479: Functional Identifier Code](XX)),
IK304(file spec/fixtures/X222-HC837/1-bad.txt,
line 33, column 1,
missing N4 segment, NM1),
IK304(file spec/fixtures/X222-HC837/1-bad.txt,
line 35, column 1,
missing N4 segment, NM1)]
46 segments 0.177 seconds
```
### Generating, Writing
#### X12 Writer
```
require "stupidedi"
# You can customize this to delegate to your own grammar definitions, if needed.
config = Stupidedi::Config.hipaa
b = Stupidedi::Parser::BuilderDsl.build(config)
# These methods perform error checking: number of elements, element types, min/max
# length requirements, conditionally required elements, valid segments, number of
# segment occurrences, number of loop occurrences, etc.
b.ISA "00", nil, "00", nil,
"ZZ", "SUBMITTER ID",
"ZZ", "RECEIVER ID",
"990531", "1230", nil, "00501", "123456789", "1", "T", nil
# The API tracks the current position in the specification (e.g., the current loop,
# table, etc) to ensure well-formedness as each segment is generated.
b.GS "HC", "SENDER ID", "RECEIVER ID", "19990531", "1230", "1", "X", "005010X222"
# The `b.default` value can be used to generate the appropriate value if it can
# be unambiguously inferred from the grammar.
b.ST "837", "1234", b.default
# You can use string representations of data or standard Ruby data types, like Time.
b.BHT "0019", "00", "X"*30, "19990531", Time.now.utc, "CH"
b.NM1 b.default, "1", "PREMIER BILLING SERVICE", nil, nil, nil, nil, "46", "12EEER000TY"
b.PER "IC", "<NAME>", "TE", "3056660000"
b.NM1 "40", "2", "<NAME>", nil, nil, nil, nil, "46", "66783JJT"
b.HL "1", nil, "20", "1"
b.NM1 "85", "2", "PREMIER BILLING SERVICE", nil, nil, nil, nil, "XX", "123234560"
b.N3 "1234 SEAWAY ST"
b.N4 "MIAMI", "FL", "331111234"
b.REF "EI", "123667894"
b.PER "IC", b.blank, "TE", "3056661111"
b.NM1 "87", "2"
b.N3 "2345 OCEAN BLVD"
b.N4 "MIAMI", "FL", "33111"
b.HL "2", "1", "22", "0"
b.SBR "S", "18", nil, nil, "12", nil, nil, nil, "MB"
b.NM1 "IL", "1", "BACON", "KEVIN", nil, nil, nil, "MI", "222334444"
b.N3 "236 N MAIN ST"
b.N4 "MIAMI", "FL", "33413"
b.DMG "D8", "19431022", "F"
b.machine.zipper.tap do |z|
# The :component, and :repetition parameters can also be specified as elements
# of the ISA segment, at `b.ISA(...)` above. When generating a document from
# scratch, :segment and :element must be specified -- if you've parsed the doc
# from a file, these params will default to whatever was used in the file, or
# you can override them here.
separators =
    Stupidedi::Reader::Separators.build :segment => "~\n",
:element => "*",
:component => ":",
:repetition => "^"
# You can also serialize any subtree within the document (e.g., everything inside
# some ST..SE transaction set, or a single loop. Here, z.root is the entire tree.
  w = Stupidedi::Writer::Default.new(z.root, separators)
print w.write()
end
```
#### HTML writer
As shown above, `Stupidedi::Writer::Default` outputs data encoded in plain X12 format, while `Stupidedi::Writer::Claredi` outputs a formatted HTML string.
`Stupidedi::Writer::Claredi#write` operates on `StringIO`.
```
b.machine.zipper.tap do |z|
  w = Stupidedi::Writer::Claredi.new(z.root)
File.open('output.html', 'w') { |f| f.write w.write }
end
```
#### Json (Hash) Writer
Converting the tree to a JSON document is intentionally not included in the library; however, it can still be implemented using the Stupidedi API.
[Here](https://github.com/irobayna/stupidedi/blob/master/notes/json_writer/json.rb) is one of the possible ways to implement this.
This approach lets you define custom traversal logic for the nodes, with the ability to change hash keys and values to whatever is needed.
Please refer to [this readme](https://github.com/irobayna/stupidedi/blob/master/notes/json_writer/json.MD) and [these nodes implementation](https://github.com/irobayna/stupidedi/blob/master/notes/json_writer/json/) for more information.
### Reading, Traversing
```
require "stupidedi"
config = Stupidedi::Config.hipaa
parser = Stupidedi::Parser.build(config)
input = if RUBY_VERSION > "1.8"
File.open("spec/fixtures/X221-HP835/1-good.txt", :encoding => "ISO-8859-1")
else
File.open("spec/fixtures/X221-HP835/1-good.txt")
end
# Reader.build accepts IO (File), String, and DelegateInput
parser, result = parser.read(Stupidedi::Reader.build(input))
# Report fatal tokenizer failures
if result.fatal?
  result.explain{|reason| raise reason + " at #{result.position.inspect}" }
end
# Helper function: fetch an element from the current segment
def el(m, *ns, &block)
  if Stupidedi::Either === m
    m.tap{|m| el(m, *ns, &block) }
  else
    yield(*ns.map{|n| m.elementn(n).map(&:value).fetch })
  end
end
# Print some information
parser.first
.flatmap{|m| m.find(:GS) }
.flatmap{|m| m.find(:ST) }
.tap do |m|
el(m.find(:N1, "PR"), 2){|e| puts "Payer: #{e}" }
el(m.find(:N1, "PE"), 2){|e| puts "Payee: #{e}" }
end
.flatmap{|m| m.find(:LX) }
.flatmap{|m| m.find(:CLP) }
.flatmap{|m| m.find(:NM1, "QC") }
.tap{|m| el(m, 3, 4){|l,f| puts "Patient: #{l}, #{f}" }}
```
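The chained `flatmap`/`tap` calls above follow an Either/Maybe-style pattern: each step either continues the chain with a new wrapped value or short-circuits when navigation fails. A minimal pure-Ruby sketch of that pattern (illustrative only, not Stupidedi's actual `Either` class):

```ruby
# A tiny Maybe type illustrating the flatmap/tap chaining used above.
class Maybe
  def self.of(value)
    new(value)
  end

  def initialize(value)
    @value = value
  end

  # Chain a step that itself returns a Maybe; nil short-circuits the chain.
  def flatmap
    @value.nil? ? self : yield(@value)
  end

  # Peek at the value for side effects without breaking the chain.
  def tap_value
    yield(@value) unless @value.nil?
    self
  end

  def fetch(default = nil)
    @value.nil? ? default : @value
  end
end

result = Maybe.of(10)
  .flatmap { |n| Maybe.of(n * 2) }
  .tap_value { |n| puts "after doubling: #{n}" }
  .flatmap { |n| Maybe.of(n + 1) }

puts result.fetch # => 21
```

Stupidedi's navigation methods like `find` return wrappers of this shape, which is why failed lookups quietly skip the rest of the chain instead of raising.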
### Testing
```
rake spec
```
github.com/markphelps/flipt
===

An open-source, on-prem feature flag solution
---

[](https://github.com/markphelps/flipt/releases)
[](https://github.com/markphelps/flipt/actions)
[](https://github.com/markphelps/flipt/blob/main/LICENSE)
[](https://hub.docker.com/r/markphelps/flipt)
[](https://codecov.io/gh/markphelps/flipt)
[](https://goreportcard.com/report/github.com/markphelps/flipt)
[](https://bestpractices.coreinfrastructure.org/projects/3498)
[](https://github.com/avelino/awesome-go)
[](https://discord.gg/fjPVc5JuyE)
####
[Documentation](https://flipt.io/docs/getting_started/) |
[Features](#features) |
[Values](#values) |
[Integration](#integration) |
[Community](#community)
Flipt is an open source, on-prem feature flag application that allows you to run experiments across services in **your** environment.
Flipt can be deployed within your existing infrastructure so that you don't have to worry about your information being sent to a third party or the latency required to communicate across the internet.
Flipt supports use cases such as:
* Simple on/off feature flags to toggle functionality in your applications
* Rolling out features to a percentage of your customers
* Using advanced segmentation to target and serve users based on custom properties that you define
### Features
* Fast. Written in Go. Optimized for performance
* Stand alone, easy to run and configure
* Ability to create advanced distribution rules to target segments of users
* Native [GRPC](https://grpc.io/) client SDKs to integrate with your existing applications easily
* Powerful REST API
* Modern, mobile friendly 📱 UI and debug console
* Support for multiple databases (Postgres, MySQL, SQLite)
* Data import and export to allow storing your data as code
### Values
* 🔒 **Security** - HTTPS support. No data leaves your servers and you don't have to open your systems to the outside world to communicate with Flipt. It all runs within your existing infrastructure.
* 🚀 **Speed** - Since Flipt is co-located with your existing services, you do not have to communicate across the internet which can add excessive latency and slow down your applications.
* ✅ **Simplicity** - Flipt is a single binary with no external dependencies by default.
* :thumbsup: **Compatibility** - REST, GRPC, MySQL, Postgres, SQLite.. Flipt supports it all.
### Try It
Try the latest version of Flipt out for yourself.
#### Sandbox
[Try Flipt](https://try.flipt.io) in a deployed environment!
**Note:** The database gets cleared **every 30 minutes** in this sandbox environment!
#### Docker

```
❯ docker run --rm -p 8080:8080 -p 9000:9000 -t markphelps/flipt:latest
```
Flipt UI will now be reachable at [http://127.0.0.1:8080/](http://127.0.0.1:8080).
For more permanent methods of running Flipt, see the [Installation](https://flipt.io/docs/installation/) section.
### Integration
Checkout the [integration docs](https://flipt.io/docs/integration/) for more info on how to integrate Flipt into your existing application.
#### REST API
Flipt is equipped with a fully functional REST API. In fact, the Flipt UI is completely backed by this same API. This means that anything that can be done in the Flipt UI can also be done via the REST API.
The [Flipt REST API](https://flipt.io/docs/api/) can also be used with any language that can make HTTP requests.
#### Official GRPC Client Libraries
* [Go](https://github.com/markphelps/flipt-grpc-go)
* [Ruby](https://github.com/markphelps/flipt-grpc-ruby)
#### Third-Party Client Libraries
Client libraries built by awesome people from the Open Source community:
| Library | Language | Author | Desc |
| --- | --- | --- | --- |
| [flipt-grpc-python](https://github.com/getsentry/flipt-grpc-python) | Python | [@getsentry](https://github.com/getsentry) | Python GRPC bindings for Flipt |
| [rflipt](https://github.com/christopherdiehl/rflipt) | React | [@christopherdiehl](https://github.com/christopherdiehl) | Components/example project to control React features backed by Flipt |
| [flipt-php](https://github.com/fetzi/flipt-php) | PHP | [@fetzi](https://github.com/fetzi) | Package for evaluating feature flags via the Flipt REST API using [HTTPlug](http://httplug.io/) |
| [flipt-js](https://github.com/betrybe/flipt-js) | Javascript | [@betrybe](https://github.com/betrybe) | Flipt library for JS that allows rendering components based on Feature Flags 🎉 |
#### Generate Your Own
If a client in your language is not available for download, you can easily generate one yourself using the existing [protobuf definition](https://github.com/markphelps/flipt/raw/main/rpc/flipt/flipt.proto). The [GRPC documentation](https://grpc.io/docs/) has extensive examples on how to generate GRPC clients in each supported language.
### Examples
Check out the [examples](https://github.com/markphelps/flipt/blob/v1.8.3/examples) to see how Flipt works.
Here's a [basic one](https://github.com/markphelps/flipt/tree/main/examples/basic) to get started!
### Licensing
There are currently two types of licenses in place for Flipt:
1. Client License
2. Server License
#### Client License
All of the code required to generate GRPC clients in other languages as well as the existing GRPC Go client are licensed under the [MIT License](https://spdx.org/licenses/MIT.html).
This code exists in the [rpc/](https://github.com/markphelps/flipt/blob/v1.8.3/rpc) directory.
The client code is the code that you would integrate into your applications, which is why a more permissive license is used.
#### Server License
The server code is licensed under the [GPL 3.0 License](https://spdx.org/licenses/GPL-3.0.html).
See [LICENSE](https://github.com/markphelps/flipt/blob/v1.8.3/LICENSE).
### Logos
* [Paradigm](https://www.paradigm.co/)
* [Rokt](https://www.rokt.com/)
Using Flipt at your company? Open a PR and add your logo here!
### Sponsors
If you use Flipt at your company, please consider [becoming a sponsor](https://github.com/sponsors/markphelps) today.
### Enterprise
Need more features or support using Flipt within your Enterprise?
Please help me prioritize an Enterprise version of Flipt by filling out this [short survey](https://forms.gle/a4UBnv8LADYirA4c9)!
### Community
For help and discussion around Flipt, feature flag best practices, and more, join us on [Discord](https://discord.gg/fjPVc5JuyE).
### Author
[<NAME>](https://markphelps.me/) ([twitter/mark_a_phelps](https://twitter.com/intent/user?screen_name=mark_a_phelps))
### Contributing
I would love your help! Before submitting a PR, please read over the [Contributing](https://github.com/markphelps/flipt/blob/v1.8.3/.github/contributing.md) guide.
No contribution is too small, whether it be bug reports/fixes, feature requests, documentation updates, or anything else that can help drive the project forward.
### Contributors ✨
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| [**<NAME>**](http://aaronraff.github.io)[💻](https://github.com/markphelps/flipt/commits?author=aaronraff "Code") | [**<NAME>**](http://twitter.com/rochacon)[💻](https://github.com/markphelps/flipt/commits?author=rochacon "Code") | [**<NAME>**](http://christopherdiehl.github.io)[💻](https://github.com/markphelps/flipt/commits?author=christopherdiehl "Code") | [**<NAME>**](https://www.andrewzallen.com)[📖](https://github.com/markphelps/flipt/commits?author=achew22 "Documentation") | [**<NAME>**](http://sf.khepin.com)[💻](https://github.com/markphelps/flipt/commits?author=khepin "Code") | [**<NAME>**](https://github.com/badboyd)[💻](https://github.com/markphelps/flipt/commits?author=badboyd "Code") | [**<NAME>**](http://twitter.com/jon_perl)[⚠️](https://github.com/markphelps/flipt/commits?author=jperl "Tests") [💻](https://github.com/markphelps/flipt/commits?author=jperl "Code") |
| [**Or Elimelech**](https://or-e.net)[💻](https://github.com/markphelps/flipt/commits?author=vic3lord "Code") | [**giddel**](https://github.com/giddel)[💻](https://github.com/markphelps/flipt/commits?author=giddel "Code") | [**Eduardo**](http://eduar.do)[📖](https://github.com/markphelps/flipt/commits?author=edumucelli "Documentation") [💻](https://github.com/markphelps/flipt/commits?author=edumucelli "Code") | [**<NAME>**](https://github.com/itaischwartz)[💻](https://github.com/markphelps/flipt/commits?author=itaischwartz "Code") | [**<NAME>**](https://bandism.net/)[📖](https://github.com/markphelps/flipt/commits?author=eltociear "Documentation") | [**<NAME>**](https://sagikazarmark.hu)[💻](https://github.com/markphelps/flipt/commits?author=sagikazarmark "Code") | [**<NAME>**](https://github.com/pietdaniel)[💻](https://github.com/markphelps/flipt/commits?author=pietdaniel "Code") |
| [**<NAME>**](https://github.com/amayvs)[💻](https://github.com/markphelps/flipt/commits?author=amayvs "Code") | [**kevin-ip**](https://github.com/kevin-ip)[💻](https://github.com/markphelps/flipt/commits?author=kevin-ip "Code") | [**albertchae**](https://github.com/albertchae)[💻](https://github.com/markphelps/flipt/commits?author=albertchae "Code") |
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!
Crate nnapi_sys
===
FFI Rust bindings for the Android Neural Networks API.
Structs
---
* AHardwareBuffer
* ANeuralNetworksBurst
* ANeuralNetworksCompilation
* ANeuralNetworksDevice
* ANeuralNetworksEvent
* ANeuralNetworksExecution
* ANeuralNetworksMemory
* ANeuralNetworksMemoryDesc
* ANeuralNetworksModel
* ANeuralNetworksOperandType: describes the type of an operand.
* ANeuralNetworksSymmPerChannelQuantParams: parameters for an ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL operand.
* __darwin_pthread_handler_rec
* _opaque_pthread_attr_t
* _opaque_pthread_cond_t
* _opaque_pthread_condattr_t
* _opaque_pthread_mutex_t
* _opaque_pthread_mutexattr_t
* _opaque_pthread_once_t
* _opaque_pthread_rwlock_t
* _opaque_pthread_rwlockattr_t
* _opaque_pthread_t
Enums
---
* OperandCode: operand types.
* OperationCode: operation types.
* ResultCode: result codes.
Constants
---
* ANEURALNETWORKS_BYTE_SIZE_OF_CACHE_TOKEN
* ANEURALNETWORKS_DEVICE_ACCELERATOR: Dedicated accelerator for Machine Learning workloads.
* ANEURALNETWORKS_DEVICE_CPU: The device runs NNAPI models on single or multi-core CPU.
* ANEURALNETWORKS_DEVICE_GPU: The device can run NNAPI models and also accelerate graphics APIs such as OpenGL ES and Vulkan.
* ANEURALNETWORKS_DEVICE_OTHER: The device does not fall into any category below.
* ANEURALNETWORKS_DEVICE_UNKNOWN: The device type cannot be provided.
* ANEURALNETWORKS_FUSED_NONE: No fused activation function.
* ANEURALNETWORKS_FUSED_RELU: Fused ReLU activation function.
* ANEURALNETWORKS_FUSED_RELU1: Fused ReLU1 activation function.
* ANEURALNETWORKS_FUSED_RELU6: Fused ReLU6 activation function.
* ANEURALNETWORKS_MAX_SIZE_OF_IMMEDIATELY_COPIED_VALUES
* ANEURALNETWORKS_PADDING_SAME: SAME padding. Padding on both ends is the "same": padding_to_beginning = total_padding / 2, padding_to_end = (total_padding + 1) / 2. For an even amount of total padding, the padding on both ends is exactly the same; for an odd amount, the padding at the end is bigger than the padding at the beginning by 1.
* ANEURALNETWORKS_PADDING_VALID: VALID padding, i.e. no padding. When the input size is not evenly divisible by the filter size, the input at the end that cannot fill a whole filter tile is simply ignored.
* ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER: Prefer returning a single answer as fast as possible, even if this causes more power consumption.
* ANEURALNETWORKS_PREFER_LOW_POWER: Prefer executing in a way that minimizes battery drain. This is desirable for compilations that will be executed often.
* ANEURALNETWORKS_PREFER_SUSTAINED_SPEED: Prefer maximizing the throughput of successive frames, for example when processing successive frames coming from the camera.
* DurationCode_ANEURALNETWORKS_DURATION_IN_DRIVER
* DurationCode_ANEURALNETWORKS_DURATION_ON_HARDWARE
* DurationCode_ANEURALNETWORKS_FENCED_DURATION_IN_DRIVER
* DurationCode_ANEURALNETWORKS_FENCED_DURATION_ON_HARDWARE
* INT8_MAX
* INT8_MIN
* INT16_MAX
* INT16_MIN
* INT32_MAX
* INT32_MIN
* INT64_MAX
* INT64_MIN
* INTPTR_MAX
* INTPTR_MIN
* INT_FAST8_MAX
* INT_FAST8_MIN
* INT_FAST16_MAX
* INT_FAST16_MIN
* INT_FAST32_MAX
* INT_FAST32_MIN
* INT_FAST64_MAX
* INT_FAST64_MIN
* INT_LEAST8_MAX
* INT_LEAST8_MIN
* INT_LEAST16_MAX
* INT_LEAST16_MIN
* INT_LEAST32_MAX
* INT_LEAST32_MIN
* INT_LEAST64_MAX
* INT_LEAST64_MIN
* PriorityCode_ANEURALNETWORKS_PRIORITY_DEFAULT
* PriorityCode_ANEURALNETWORKS_PRIORITY_HIGH
* PriorityCode_ANEURALNETWORKS_PRIORITY_LOW
* PriorityCode_ANEURALNETWORKS_PRIORITY_MEDIUM
* RSIZE_MAX
* SIG_ATOMIC_MAX
* SIG_ATOMIC_MIN
* SIZE_MAX
* UINT8_MAX
* UINT16_MAX
* UINT32_MAX
* UINT64_MAX
* UINTPTR_MAX
* UINT_FAST8_MAX
* UINT_FAST16_MAX
* UINT_FAST32_MAX
* UINT_FAST64_MAX
* UINT_LEAST8_MAX
* UINT_LEAST16_MAX
* UINT_LEAST32_MAX
* UINT_LEAST64_MAX
* WINT_MAX
* WINT_MIN
* _DARWIN_FEATURE_64_BIT_INODE
* _DARWIN_FEATURE_ONLY_64_BIT_INODE
* _DARWIN_FEATURE_ONLY_UNIX_CONFORMANCE
* _DARWIN_FEATURE_ONLY_VERS_1050
* _DARWIN_FEATURE_UNIX_CONFORMANCE
* __DARWIN_64_BIT_INO_T
* __DARWIN_C_ANSI
* __DARWIN_C_FULL
* __DARWIN_C_LEVEL
* __DARWIN_NON_CANCELABLE
* __DARWIN_NO_LONG_LONG
* __DARWIN_ONLY_64_BIT_INO_T
* __DARWIN_ONLY_UNIX_CONFORMANCE
* __DARWIN_ONLY_VERS_1050
* __DARWIN_SUF_EXTSN
* __DARWIN_UNIX03
* __DARWIN_VERS_1050
* __PTHREAD_ATTR_SIZE__
* __PTHREAD_CONDATTR_SIZE__
* __PTHREAD_COND_SIZE__
* __PTHREAD_MUTEXATTR_SIZE__
* __PTHREAD_MUTEX_SIZE__
* __PTHREAD_ONCE_SIZE__
* __PTHREAD_RWLOCKATTR_SIZE__
* __PTHREAD_RWLOCK_SIZE__
* __PTHREAD_SIZE__
* __STDC_WANT_LIB_EXT1__
* __WORDSIZE
* __bool_true_false_are_defined
* __has_ptrcheck
* false_
* true_
Functions
---
* ANeuralNetworksBurst_create ⚠: Create a {@link ANeuralNetworksBurst} to apply the given compilation. This only creates the burst object. Computation is only performed once {@link ANeuralNetworksExecution_burstCompute} is invoked with a valid {@link ANeuralNetworksExecution} and {@link ANeuralNetworksBurst}.
* ANeuralNetworksBurst_free ⚠: Destroys the burst object.
* ANeuralNetworksCompilation_create ⚠: Create a {@link ANeuralNetworksCompilation} to compile the given model.
* ANeuralNetworksCompilation_createForDevices ⚠: Create a {@link ANeuralNetworksCompilation} to compile the given model for a specified set of devices. If more than one device is specified, the compilation will distribute the workload automatically across the devices. The model must be fully supported by the specified set of devices. This means that ANeuralNetworksModel_getSupportedOperationsForDevices() must have returned true for every operation for that model/devices pair.
* ANeuralNetworksCompilation_finish ⚠: Indicate that we have finished modifying a compilation. Required before calling {@link ANeuralNetworksBurst_create} or {@link ANeuralNetworksExecution_create}.
* ANeuralNetworksCompilation_free ⚠: Destroy a compilation.
* ANeuralNetworksCompilation_setCaching ⚠: Sets the compilation caching signature and the cache directory.
* ANeuralNetworksCompilation_setPreference ⚠: Sets the execution preference.
* ANeuralNetworksCompilation_setPriority ⚠: Set the execution priority.
* ANeuralNetworksCompilation_setTimeout ⚠: Set the maximum expected duration for compiling the model.
* ANeuralNetworksDevice_getFeatureLevel ⚠: Get the supported NNAPI version of the specified device.
* ANeuralNetworksDevice_getName ⚠: Get the name of the specified device.
* ANeuralNetworksDevice_getType ⚠: Get the type of a given device.
* ANeuralNetworksDevice_getVersion ⚠: Get the version of the driver implementation of the specified device.
* ANeuralNetworksDevice_wait ⚠: Wait until the device is in a live state.
* ANeuralNetworksEvent_createFromSyncFenceFd ⚠: Create a {@link ANeuralNetworksEvent} from a sync_fence file descriptor.
* ANeuralNetworksEvent_free ⚠: Destroys the event.
* ANeuralNetworksEvent_getSyncFenceFd ⚠: Get the sync_fence file descriptor from the event.
* ANeuralNetworksEvent_wait ⚠: Waits until the execution completes.
* ANeuralNetworksExecution_burstCompute ⚠: Schedule synchronous evaluation of the execution on a burst object.
* ANeuralNetworksExecution_compute ⚠: Schedule synchronous evaluation of the execution.
* ANeuralNetworksExecution_create ⚠: Create a {@link ANeuralNetworksExecution} to apply the given compilation. This only creates the object. Computation is only performed once {@link ANeuralNetworksExecution_burstCompute}, {@link ANeuralNetworksExecution_compute}, {@link ANeuralNetworksExecution_startCompute} or {@link ANeuralNetworksExecution_startComputeWithDependencies} is invoked.
* ANeuralNetworksExecution_free ⚠: Destroy an execution.
* ANeuralNetworksExecution_getDuration ⚠: Get the time spent in the specified {@link ANeuralNetworksExecution}, in nanoseconds.
* ANeuralNetworksExecution_getOutputOperandDimensions ⚠: Get the dimensional information of the specified output operand of the model of the {@link ANeuralNetworksExecution}. The target output operand cannot be a scalar.
* ANeuralNetworksExecution_getOutputOperandRank ⚠: Get the dimensional information of the specified output operand of the model of the {@link ANeuralNetworksExecution}.
* ANeuralNetworksExecution_setInput ⚠: Associate a user buffer with an input of the model of the {@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the buffer until the execution has completed. Evaluation of the execution will not change the content of the buffer.
* ANeuralNetworksExecution_setInputFromMemory ⚠: Associate a region of a memory object with an input of the model of the
{@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the region until the execution has completed. Evaluation of the execution will not change the content of the region.
* ANeuralNetworksExecution_setLoopTimeout⚠Set the maximum duration of WHILE loops in the specified execution.
* ANeuralNetworksExecution_setMeasureTiming⚠Specifies whether duration of the {@link ANeuralNetworksExecution} is to be measured. Evaluation of the execution must not have been scheduled.
* ANeuralNetworksExecution_setOutput⚠Associate a user buffer with an output of the model of the
{@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the buffer until the execution has completed.
* ANeuralNetworksExecution_setOutputFromMemory⚠Associate a region of a memory object with an output of the model of the
{@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the region until the execution has completed.
* ANeuralNetworksExecution_setTimeout⚠Set the maximum expected duration of the specified execution.
* ANeuralNetworksExecution_startCompute⚠Schedule asynchronous evaluation of the execution.
* ANeuralNetworksExecution_startComputeWithDependencies⚠Schedule asynchronous evaluation of the execution with dependencies.
* ANeuralNetworksMemoryDesc_addInputRole⚠Specify that a memory object will be playing the role of an input to an execution created from a particular compilation.
* ANeuralNetworksMemoryDesc_addOutputRole⚠Specify that a memory object will be playing the role of an output to an execution created from a particular compilation.
* ANeuralNetworksMemoryDesc_create⚠Create a {@link ANeuralNetworksMemoryDesc} with no properties.
* ANeuralNetworksMemoryDesc_finish⚠Indicate that we have finished modifying a memory descriptor. Required before calling
{@link ANeuralNetworksMemory_createFromDesc}.
* ANeuralNetworksMemoryDesc_free⚠Destroy a memory descriptor.
* ANeuralNetworksMemoryDesc_setDimensions⚠Set the dimensional information of the memory descriptor.
* ANeuralNetworksMemory_copy⚠Copies data from one memory object to another.
* ANeuralNetworksMemory_createFromAHardwareBuffer⚠Creates a shared memory object from an AHardwareBuffer handle.
* ANeuralNetworksMemory_createFromDesc⚠Creates a memory object from a memory descriptor.
* ANeuralNetworksMemory_createFromFd⚠Creates a shared memory object from a file descriptor.
* ANeuralNetworksMemory_free⚠Delete a memory object.
* ANeuralNetworksModel_addOperand⚠Add an operand to a model.
* ANeuralNetworksModel_addOperation⚠Add an operation to a model.
* ANeuralNetworksModel_create⚠Create an empty {@link ANeuralNetworksModel}.
* ANeuralNetworksModel_finish⚠Indicate that we have finished modifying a model. Required before calling {@link ANeuralNetworksCompilation_create} and
{@link ANeuralNetworksCompilation_createForDevices}.
* ANeuralNetworksModel_free⚠Destroy a model.
* ANeuralNetworksModel_getSupportedOperationsForDevices⚠Get the supported operations for a specified set of devices. If multiple devices are selected, the supported operation list is a union of supported operations of all selected devices.
* ANeuralNetworksModel_identifyInputsAndOutputs⚠Specifies which operands will be the model’s inputs and outputs. Every model must have at least one input and one output.
* ANeuralNetworksModel_relaxComputationFloat32toFloat16⚠Specifies whether {@link ANEURALNETWORKS_TENSOR_FLOAT32} is allowed to be calculated with range and/or precision as low as that of the IEEE 754 16-bit floating-point format. By default, {@link ANEURALNETWORKS_TENSOR_FLOAT32}
must be calculated using at least the range and precision of the IEEE 754 32-bit floating-point format.
* ANeuralNetworksModel_setOperandSymmPerChannelQuantParams⚠Sets an operand’s per channel quantization parameters.
* ANeuralNetworksModel_setOperandValue⚠Sets an operand to a constant value.
* ANeuralNetworksModel_setOperandValueFromMemory⚠Sets an operand to a value stored in a memory object.
* ANeuralNetworksModel_setOperandValueFromModel⚠Sets an operand to a value that is a reference to another NNAPI model.
* ANeuralNetworks_getDefaultLoopTimeout⚠Get the default timeout value for WHILE loops.
* ANeuralNetworks_getDevice⚠Get the representation of the specified device.
* ANeuralNetworks_getDeviceCount⚠Get the number of available devices.
* ANeuralNetworks_getMaximumLoopTimeout⚠Get the maximum timeout value for WHILE loops.
* ANeuralNetworks_getRuntimeFeatureLevel⚠
Type Aliases
---
* ANeuralNetworksOperationType
* DeviceTypeCodeDevice types.
* DurationCodeDifferent duration measurements.
* FuseCodeFused activation function types.
* PaddingCodeImplicit padding algorithms.
* PreferenceCodeExecution preferences.
* PriorityCodeRelative execution priority.
* __builtin_va_list
* __darwin_blkcnt_t
* __darwin_blksize_t
* __darwin_clock_t
* __darwin_ct_rune_t
* __darwin_dev_t
* __darwin_fsblkcnt_t
* __darwin_fsfilcnt_t
* __darwin_gid_t
* __darwin_id_t
* __darwin_ino64_t
* __darwin_ino_t
* __darwin_intptr_t
* __darwin_mach_port_name_t
* __darwin_mach_port_t
* __darwin_mbstate_t
* __darwin_mode_t
* __darwin_natural_t
* __darwin_off_t
* __darwin_pid_t
* __darwin_pthread_attr_t
* __darwin_pthread_cond_t
* __darwin_pthread_condattr_t
* __darwin_pthread_key_t
* __darwin_pthread_mutex_t
* __darwin_pthread_mutexattr_t
* __darwin_pthread_once_t
* __darwin_pthread_rwlock_t
* __darwin_pthread_rwlockattr_t
* __darwin_pthread_t
* __darwin_ptrdiff_t
* __darwin_rune_t
* __darwin_sigset_t
* __darwin_size_t
* __darwin_socklen_t
* __darwin_ssize_t
* __darwin_suseconds_t
* __darwin_time_t
* __darwin_uid_t
* __darwin_useconds_t
* __darwin_uuid_string_t
* __darwin_uuid_t
* __darwin_va_list
* __darwin_wchar_t
* __darwin_wint_t
* __int8_t
* __int16_t
* __int32_t
* __int64_t
* __uint8_t
* __uint16_t
* __uint32_t
* __uint64_t
* _bindgen_ty_1For {@link ANeuralNetworksModel_setOperandValue}, values with a length smaller or equal to this will be immediately copied into the model. The size is in bytes.
* _bindgen_ty_2For {@link ANeuralNetworksCompilation_setCaching}, specify the size of the cache token required from the application. The size is in bytes.
* int_fast8_t
* int_fast16_t
* int_fast32_t
* int_fast64_t
* int_least8_t
* int_least16_t
* int_least32_t
* int_least64_t
* intmax_t
* max_align_t
* register_t
* syscall_arg_t
* u_int8_t
* u_int16_t
* u_int32_t
* u_int64_t
* uint_fast8_t
* uint_fast16_t
* uint_fast32_t
* uint_fast64_t
* uint_least8_t
* uint_least16_t
* uint_least32_t
* uint_least64_t
* uintmax_t
* user_addr_t
* user_long_t
* user_off_t
* user_size_t
* user_ssize_t
* user_time_t
* user_ulong_t
* wchar_t
Unions
---
* __mbstate_t
Crate nnapi_sys
===
FFI Rust bindings for the Android Neural Networks API.
Structs
---
* AHardwareBuffer
* ANeuralNetworksBurst
* ANeuralNetworksCompilation
* ANeuralNetworksDevice
* ANeuralNetworksEvent
* ANeuralNetworksExecution
* ANeuralNetworksMemory
* ANeuralNetworksMemoryDesc
* ANeuralNetworksModel
* ANeuralNetworksOperandType: Describes the type of an operand.
* ANeuralNetworksSymmPerChannelQuantParams: Parameters for an ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL operand.
* __darwin_pthread_handler_rec
* _opaque_pthread_attr_t
* _opaque_pthread_cond_t
* _opaque_pthread_condattr_t
* _opaque_pthread_mutex_t
* _opaque_pthread_mutexattr_t
* _opaque_pthread_once_t
* _opaque_pthread_rwlock_t
* _opaque_pthread_rwlockattr_t
* _opaque_pthread_t
Enums
---
* OperandCode: Operand types.
* OperationCode: Operation types.
* ResultCode: Result codes.
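Every function listed further down returns one of these ResultCode values as a raw status integer, so a typical first step when using the bindings is converting that integer into a Rust `Result` at the call boundary. A minimal sketch, assuming only that ANEURALNETWORKS_NO_ERROR is 0 as in the NNAPI headers; the `check` helper and `NnapiError` type are hypothetical, not part of the crate (real code would match on `nnapi_sys::ResultCode` variants):

```rust
// Hypothetical status-check helper for raw NNAPI return codes.
// Assumes only that ANEURALNETWORKS_NO_ERROR == 0; any nonzero code is an error.
const ANEURALNETWORKS_NO_ERROR: i32 = 0;

#[derive(Debug, PartialEq)]
struct NnapiError(i32);

fn check(status: i32) -> Result<(), NnapiError> {
    if status == ANEURALNETWORKS_NO_ERROR {
        Ok(())
    } else {
        // Preserve the raw code so callers can map it to a ResultCode variant.
        Err(NnapiError(status))
    }
}

fn main() {
    assert_eq!(check(0), Ok(()));
    assert!(check(4).is_err()); // some nonzero failure code
}
```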
Constants
---
* ANEURALNETWORKS_BYTE_SIZE_OF_CACHE_TOKEN
* ANEURALNETWORKS_DEVICE_ACCELERATOR: Dedicated accelerator for Machine Learning workloads.
* ANEURALNETWORKS_DEVICE_CPU: The device runs NNAPI models on single- or multi-core CPU.
* ANEURALNETWORKS_DEVICE_GPU: The device can run NNAPI models and also accelerate graphics APIs such as OpenGL ES and Vulkan.
* ANEURALNETWORKS_DEVICE_OTHER: The device does not fall into any category below.
* ANEURALNETWORKS_DEVICE_UNKNOWN: The device type cannot be provided.
* ANEURALNETWORKS_FUSED_NONE: No fused activation function.
* ANEURALNETWORKS_FUSED_RELU: Fused ReLU activation function.
* ANEURALNETWORKS_FUSED_RELU1: Fused ReLU1 activation function.
* ANEURALNETWORKS_FUSED_RELU6: Fused ReLU6 activation function.
* ANEURALNETWORKS_MAX_SIZE_OF_IMMEDIATELY_COPIED_VALUES
* ANEURALNETWORKS_PADDING_SAME: SAME padding. Padding on both ends is the “same”: padding_to_beginning = total_padding / 2, padding_to_end = (total_padding + 1) / 2. That is, for an even amount of total padding, both ends receive exactly the same padding; for an odd amount, the end receives one more than the beginning.
* ANEURALNETWORKS_PADDING_VALID: VALID padding, i.e. no padding. When the input size is not evenly divisible by the filter size, the input at the end that cannot fill a whole filter tile is simply ignored.
* ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER: Prefer returning a single answer as fast as possible, even if this causes more power consumption.
* ANEURALNETWORKS_PREFER_LOW_POWER: Prefer executing in a way that minimizes battery drain. This is desirable for compilations that will be executed often.
* ANEURALNETWORKS_PREFER_SUSTAINED_SPEED: Prefer maximizing the throughput of successive frames, for example when processing successive frames coming from the camera.
* DurationCode_ANEURALNETWORKS_DURATION_IN_DRIVER
* DurationCode_ANEURALNETWORKS_DURATION_ON_HARDWARE
* DurationCode_ANEURALNETWORKS_FENCED_DURATION_IN_DRIVER
* DurationCode_ANEURALNETWORKS_FENCED_DURATION_ON_HARDWARE
* INT8_MAX
* INT8_MIN
* INT16_MAX
* INT16_MIN
* INT32_MAX
* INT32_MIN
* INT64_MAX
* INT64_MIN
* INTPTR_MAX
* INTPTR_MIN
* INT_FAST8_MAX
* INT_FAST8_MIN
* INT_FAST16_MAX
* INT_FAST16_MIN
* INT_FAST32_MAX
* INT_FAST32_MIN
* INT_FAST64_MAX
* INT_FAST64_MIN
* INT_LEAST8_MAX
* INT_LEAST8_MIN
* INT_LEAST16_MAX
* INT_LEAST16_MIN
* INT_LEAST32_MAX
* INT_LEAST32_MIN
* INT_LEAST64_MAX
* INT_LEAST64_MIN
* PriorityCode_ANEURALNETWORKS_PRIORITY_DEFAULT
* PriorityCode_ANEURALNETWORKS_PRIORITY_HIGH
* PriorityCode_ANEURALNETWORKS_PRIORITY_LOW
* PriorityCode_ANEURALNETWORKS_PRIORITY_MEDIUM
* RSIZE_MAX
* SIG_ATOMIC_MAX
* SIG_ATOMIC_MIN
* SIZE_MAX
* UINT8_MAX
* UINT16_MAX
* UINT32_MAX
* UINT64_MAX
* UINTPTR_MAX
* UINT_FAST8_MAX
* UINT_FAST16_MAX
* UINT_FAST32_MAX
* UINT_FAST64_MAX
* UINT_LEAST8_MAX
* UINT_LEAST16_MAX
* UINT_LEAST32_MAX
* UINT_LEAST64_MAX
* WINT_MAX
* WINT_MIN
* _DARWIN_FEATURE_64_BIT_INODE
* _DARWIN_FEATURE_ONLY_64_BIT_INODE
* _DARWIN_FEATURE_ONLY_UNIX_CONFORMANCE
* _DARWIN_FEATURE_ONLY_VERS_1050
* _DARWIN_FEATURE_UNIX_CONFORMANCE
* __DARWIN_64_BIT_INO_T
* __DARWIN_C_ANSI
* __DARWIN_C_FULL
* __DARWIN_C_LEVEL
* __DARWIN_NON_CANCELABLE
* __DARWIN_NO_LONG_LONG
* __DARWIN_ONLY_64_BIT_INO_T
* __DARWIN_ONLY_UNIX_CONFORMANCE
* __DARWIN_ONLY_VERS_1050
* __DARWIN_SUF_EXTSN
* __DARWIN_UNIX03
* __DARWIN_VERS_1050
* __PTHREAD_ATTR_SIZE__
* __PTHREAD_CONDATTR_SIZE__
* __PTHREAD_COND_SIZE__
* __PTHREAD_MUTEXATTR_SIZE__
* __PTHREAD_MUTEX_SIZE__
* __PTHREAD_ONCE_SIZE__
* __PTHREAD_RWLOCKATTR_SIZE__
* __PTHREAD_RWLOCK_SIZE__
* __PTHREAD_SIZE__
* __STDC_WANT_LIB_EXT1__
* __WORDSIZE
* __bool_true_false_are_defined
* __has_ptrcheck
* false_
* true_
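The SAME-padding split documented under ANEURALNETWORKS_PADDING_SAME is plain integer arithmetic, so it can be sanity-checked on the host. The `same_padding_split` helper below is hypothetical, written only to illustrate the rule, not part of the bindings:

```rust
// Split total padding the way ANEURALNETWORKS_PADDING_SAME specifies:
// the beginning gets total_padding / 2 and the end gets (total_padding + 1) / 2,
// so for an odd total the end is larger than the beginning by exactly one.
fn same_padding_split(total_padding: u32) -> (u32, u32) {
    (total_padding / 2, (total_padding + 1) / 2)
}

fn main() {
    assert_eq!(same_padding_split(4), (2, 2)); // even total: both ends equal
    assert_eq!(same_padding_split(5), (2, 3)); // odd total: end bigger by 1
}
```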
Functions
---
* ANeuralNetworksBurst_create ⚠: Create a {@link ANeuralNetworksBurst} to apply the given compilation. This only creates the burst object; computation is only performed once {@link ANeuralNetworksExecution_burstCompute} is invoked with a valid {@link ANeuralNetworksExecution} and {@link ANeuralNetworksBurst}.
* ANeuralNetworksBurst_free ⚠: Destroys the burst object.
* ANeuralNetworksCompilation_create ⚠: Create a {@link ANeuralNetworksCompilation} to compile the given model.
* ANeuralNetworksCompilation_createForDevices ⚠: Create a {@link ANeuralNetworksCompilation} to compile the given model for a specified set of devices. If more than one device is specified, the compilation distributes the workload automatically across the devices. The model must be fully supported by the specified set of devices: ANeuralNetworksModel_getSupportedOperationsForDevices() must have returned true for every operation for that model/devices pair.
* ANeuralNetworksCompilation_finish ⚠: Indicate that we have finished modifying a compilation. Required before calling {@link ANeuralNetworksBurst_create} or {@link ANeuralNetworksExecution_create}.
* ANeuralNetworksCompilation_free ⚠: Destroy a compilation.
* ANeuralNetworksCompilation_setCaching ⚠: Sets the compilation caching signature and the cache directory.
* ANeuralNetworksCompilation_setPreference ⚠: Sets the execution preference.
* ANeuralNetworksCompilation_setPriority ⚠: Set the execution priority.
* ANeuralNetworksCompilation_setTimeout ⚠: Set the maximum expected duration for compiling the model.
* ANeuralNetworksDevice_getFeatureLevel ⚠: Get the supported NNAPI version of the specified device.
* ANeuralNetworksDevice_getName ⚠: Get the name of the specified device.
* ANeuralNetworksDevice_getType ⚠: Get the type of a given device.
* ANeuralNetworksDevice_getVersion ⚠: Get the version of the driver implementation of the specified device.
* ANeuralNetworksDevice_wait ⚠: Wait until the device is in a live state.
* ANeuralNetworksEvent_createFromSyncFenceFd ⚠: Create a {@link ANeuralNetworksEvent} from a sync_fence file descriptor.
* ANeuralNetworksEvent_free ⚠: Destroys the event.
* ANeuralNetworksEvent_getSyncFenceFd ⚠: Get the sync_fence file descriptor from the event.
* ANeuralNetworksEvent_wait ⚠: Waits until the execution completes.
* ANeuralNetworksExecution_burstCompute ⚠: Schedule synchronous evaluation of the execution on a burst object.
* ANeuralNetworksExecution_compute ⚠: Schedule synchronous evaluation of the execution.
* ANeuralNetworksExecution_create ⚠: Create a {@link ANeuralNetworksExecution} to apply the given compilation. This only creates the object; computation is only performed once {@link ANeuralNetworksExecution_burstCompute}, {@link ANeuralNetworksExecution_compute}, {@link ANeuralNetworksExecution_startCompute}, or {@link ANeuralNetworksExecution_startComputeWithDependencies} is invoked.
* ANeuralNetworksExecution_free ⚠: Destroy an execution.
* ANeuralNetworksExecution_getDuration ⚠: Get the time spent in the specified {@link ANeuralNetworksExecution}, in nanoseconds.
* ANeuralNetworksExecution_getOutputOperandDimensions ⚠: Get the dimensional information of the specified output operand of the model of the {@link ANeuralNetworksExecution}. The target output operand cannot be a scalar.
* ANeuralNetworksExecution_getOutputOperandRank ⚠: Get the dimensional information of the specified output operand of the model of the {@link ANeuralNetworksExecution}.
* ANeuralNetworksExecution_setInput ⚠: Associate a user buffer with an input of the model of the {@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the buffer until the execution has completed. Evaluation of the execution will not change the content of the buffer.
* ANeuralNetworksExecution_setInputFromMemory ⚠: Associate a region of a memory object with an input of the model of the {@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the region until the execution has completed. Evaluation of the execution will not change the content of the region.
* ANeuralNetworksExecution_setLoopTimeout ⚠: Set the maximum duration of WHILE loops in the specified execution.
* ANeuralNetworksExecution_setMeasureTiming ⚠: Specifies whether the duration of the {@link ANeuralNetworksExecution} is to be measured. Evaluation of the execution must not have been scheduled.
* ANeuralNetworksExecution_setOutput ⚠: Associate a user buffer with an output of the model of the {@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the buffer until the execution has completed.
* ANeuralNetworksExecution_setOutputFromMemory ⚠: Associate a region of a memory object with an output of the model of the {@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the region until the execution has completed.
* ANeuralNetworksExecution_setTimeout ⚠: Set the maximum expected duration of the specified execution.
* ANeuralNetworksExecution_startCompute ⚠: Schedule asynchronous evaluation of the execution.
* ANeuralNetworksExecution_startComputeWithDependencies ⚠: Schedule asynchronous evaluation of the execution with dependencies.
* ANeuralNetworksMemoryDesc_addInputRole ⚠: Specify that a memory object will play the role of an input to an execution created from a particular compilation.
* ANeuralNetworksMemoryDesc_addOutputRole ⚠: Specify that a memory object will play the role of an output to an execution created from a particular compilation.
* ANeuralNetworksMemoryDesc_create ⚠: Create a {@link ANeuralNetworksMemoryDesc} with no properties.
* ANeuralNetworksMemoryDesc_finish ⚠: Indicate that we have finished modifying a memory descriptor. Required before calling {@link ANeuralNetworksMemory_createFromDesc}.
* ANeuralNetworksMemoryDesc_free ⚠: Destroy a memory descriptor.
* ANeuralNetworksMemoryDesc_setDimensions ⚠: Set the dimensional information of the memory descriptor.
* ANeuralNetworksMemory_copy ⚠: Copies data from one memory object to another.
* ANeuralNetworksMemory_createFromAHardwareBuffer ⚠: Creates a shared memory object from an AHardwareBuffer handle.
* ANeuralNetworksMemory_createFromDesc ⚠: Creates a memory object from a memory descriptor.
* ANeuralNetworksMemory_createFromFd ⚠: Creates a shared memory object from a file descriptor.
* ANeuralNetworksMemory_free ⚠: Delete a memory object.
* ANeuralNetworksModel_addOperand ⚠: Add an operand to a model.
* ANeuralNetworksModel_addOperation ⚠: Add an operation to a model.
* ANeuralNetworksModel_create ⚠: Create an empty {@link ANeuralNetworksModel}.
* ANeuralNetworksModel_finish ⚠: Indicate that we have finished modifying a model. Required before calling {@link ANeuralNetworksCompilation_create} and {@link ANeuralNetworksCompilation_createForDevices}.
* ANeuralNetworksModel_free ⚠: Destroy a model.
* ANeuralNetworksModel_getSupportedOperationsForDevices ⚠: Get the supported operations for a specified set of devices. If multiple devices are selected, the supported operation list is the union of the supported operations of all selected devices.
* ANeuralNetworksModel_identifyInputsAndOutputs ⚠: Specifies which operands will be the model’s inputs and outputs. Every model must have at least one input and one output.
* ANeuralNetworksModel_relaxComputationFloat32toFloat16 ⚠: Specifies whether {@link ANEURALNETWORKS_TENSOR_FLOAT32} is allowed to be calculated with range and/or precision as low as that of the IEEE 754 16-bit floating-point format. By default, {@link ANEURALNETWORKS_TENSOR_FLOAT32} must be calculated using at least the range and precision of the IEEE 754 32-bit floating-point format.
* ANeuralNetworksModel_setOperandSymmPerChannelQuantParams ⚠: Sets an operand’s per-channel quantization parameters.
* ANeuralNetworksModel_setOperandValue ⚠: Sets an operand to a constant value.
* ANeuralNetworksModel_setOperandValueFromMemory ⚠: Sets an operand to a value stored in a memory object.
* ANeuralNetworksModel_setOperandValueFromModel ⚠: Sets an operand to a value that is a reference to another NNAPI model.
* ANeuralNetworks_getDefaultLoopTimeout ⚠: Get the default timeout value for WHILE loops.
* ANeuralNetworks_getDevice ⚠: Get the representation of the specified device.
* ANeuralNetworks_getDeviceCount ⚠: Get the number of available devices.
* ANeuralNetworks_getMaximumLoopTimeout ⚠: Get the maximum timeout value for WHILE loops.
* ANeuralNetworks_getRuntimeFeatureLevel ⚠: Get the NNAPI runtime feature level.
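The functions above must be called in a fixed order: a model is built and finished, a compilation is created from the finished model and finished in turn, and only then can an execution be created and run. The real calls only link against libneuralnetworks on an Android device, so the sketch below models that ordering with local stand-ins; the pushed strings name the real nnapi_sys functions each step corresponds to, and nothing here is the actual FFI:

```rust
// Sketch of the required NNAPI call order, using a log of function names as
// stand-ins for the real unsafe FFI calls (which are Android-only).
fn nnapi_lifecycle(log: &mut Vec<&'static str>) {
    log.push("ANeuralNetworksModel_create");                 // build an empty model
    log.push("ANeuralNetworksModel_addOperand");             // declare operands
    log.push("ANeuralNetworksModel_addOperation");           // wire up an operation
    log.push("ANeuralNetworksModel_identifyInputsAndOutputs");
    log.push("ANeuralNetworksModel_finish");                 // model is now immutable
    log.push("ANeuralNetworksCompilation_create");           // compile the finished model
    log.push("ANeuralNetworksCompilation_finish");
    log.push("ANeuralNetworksExecution_create");             // one execution per run
    log.push("ANeuralNetworksExecution_setInput");
    log.push("ANeuralNetworksExecution_setOutput");
    log.push("ANeuralNetworksExecution_compute");            // synchronous evaluation
    log.push("ANeuralNetworksExecution_free");               // tear down in reverse
    log.push("ANeuralNetworksCompilation_free");
    log.push("ANeuralNetworksModel_free");
}

fn main() {
    let mut log = Vec::new();
    nnapi_lifecycle(&mut log);
    let pos = |s: &str| log.iter().position(|&x| x == s).unwrap();
    // finish must precede compilation, which must precede execution
    assert!(pos("ANeuralNetworksModel_finish") < pos("ANeuralNetworksCompilation_create"));
    assert!(pos("ANeuralNetworksCompilation_finish") < pos("ANeuralNetworksExecution_create"));
}
```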
Type Aliases
---
* ANeuralNetworksOperationType
* DeviceTypeCode: Device types.
* DurationCode: Different duration measurements.
* FuseCode: Fused activation function types.
* PaddingCode: Implicit padding algorithms.
* PreferenceCode: Execution preferences.
* PriorityCode: Relative execution priority.
* __builtin_va_list
* __darwin_blkcnt_t
* __darwin_blksize_t
* __darwin_clock_t
* __darwin_ct_rune_t
* __darwin_dev_t
* __darwin_fsblkcnt_t
* __darwin_fsfilcnt_t
* __darwin_gid_t
* __darwin_id_t
* __darwin_ino64_t
* __darwin_ino_t
* __darwin_intptr_t
* __darwin_mach_port_name_t
* __darwin_mach_port_t
* __darwin_mbstate_t
* __darwin_mode_t
* __darwin_natural_t
* __darwin_off_t
* __darwin_pid_t
* __darwin_pthread_attr_t
* __darwin_pthread_cond_t
* __darwin_pthread_condattr_t
* __darwin_pthread_key_t
* __darwin_pthread_mutex_t
* __darwin_pthread_mutexattr_t
* __darwin_pthread_once_t
* __darwin_pthread_rwlock_t
* __darwin_pthread_rwlockattr_t
* __darwin_pthread_t
* __darwin_ptrdiff_t
* __darwin_rune_t
* __darwin_sigset_t
* __darwin_size_t
* __darwin_socklen_t
* __darwin_ssize_t
* __darwin_suseconds_t
* __darwin_time_t
* __darwin_uid_t
* __darwin_useconds_t
* __darwin_uuid_string_t
* __darwin_uuid_t
* __darwin_va_list
* __darwin_wchar_t
* __darwin_wint_t
* __int8_t
* __int16_t
* __int32_t
* __int64_t
* __uint8_t
* __uint16_t
* __uint32_t
* __uint64_t
* _bindgen_ty_1: For {@link ANeuralNetworksModel_setOperandValue}, values with a length smaller than or equal to this will be immediately copied into the model. The size is in bytes.
* _bindgen_ty_2: For {@link ANeuralNetworksCompilation_setCaching}, specifies the size of the cache token required from the application. The size is in bytes.
* int_fast8_t
* int_fast16_t
* int_fast32_t
* int_fast64_t
* int_least8_t
* int_least16_t
* int_least32_t
* int_least64_t
* intmax_t
* max_align_t
* register_t
* syscall_arg_t
* u_int8_t
* u_int16_t
* u_int32_t
* u_int64_t
* uint_fast8_t
* uint_fast16_t
* uint_fast32_t
* uint_fast64_t
* uint_least8_t
* uint_least16_t
* uint_least32_t
* uint_least64_t
* uintmax_t
* user_addr_t
* user_long_t
* user_off_t
* user_size_t
* user_ssize_t
* user_time_t
* user_ulong_t
* wchar_t
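FuseCode above names the fused activations described in the constants section (NONE, RELU, RELU1, RELU6). Their numeric effect is a simple clamp on the operation's output, sketched here with a hypothetical `apply_fused` helper and local `Fuse` enum rather than anything from the bindings:

```rust
// Hypothetical helper mirroring the fused activation semantics:
// RELU clamps below at 0, RELU1 clamps to [-1, 1], RELU6 clamps to [0, 6].
#[derive(Clone, Copy)]
enum Fuse { None, Relu, Relu1, Relu6 }

fn apply_fused(x: f32, fuse: Fuse) -> f32 {
    match fuse {
        Fuse::None => x,
        Fuse::Relu => x.max(0.0),
        Fuse::Relu1 => x.clamp(-1.0, 1.0),
        Fuse::Relu6 => x.clamp(0.0, 6.0),
    }
}

fn main() {
    assert_eq!(apply_fused(-2.0, Fuse::Relu), 0.0);
    assert_eq!(apply_fused(2.0, Fuse::Relu1), 1.0);
    assert_eq!(apply_fused(7.5, Fuse::Relu6), 6.0);
}
```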
Unions
---
* __mbstate_t
Struct nnapi_sys::AHardwareBuffer
===
```
#[repr(C)]
pub struct AHardwareBuffer { /* private fields */ }
```
Trait Implementations
---
### impl Clone for AHardwareBuffer
#### fn clone(&self) -> AHardwareBuffer
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for AHardwareBuffer
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for AHardwareBuffer
### impl Send for AHardwareBuffer
### impl Sync for AHardwareBuffer
### impl Unpin for AHardwareBuffer
### impl UnwindSafe for AHardwareBuffer
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct nnapi_sys::ANeuralNetworksBurst
===
```
#[repr(C)]
pub struct ANeuralNetworksBurst { /* private fields */ }
```
Trait Implementations
---
### impl Clone for ANeuralNetworksBurst
#### fn clone(&self) -> ANeuralNetworksBurst
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ANeuralNetworksBurst
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ANeuralNetworksBurst
### impl Send for ANeuralNetworksBurst
### impl Sync for ANeuralNetworksBurst
### impl Unpin for ANeuralNetworksBurst
### impl UnwindSafe for ANeuralNetworksBurst
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct nnapi_sys::ANeuralNetworksCompilation
===
```
#[repr(C)]
pub struct ANeuralNetworksCompilation { /* private fields */ }
```
Trait Implementations
---
### impl Clone for ANeuralNetworksCompilation
#### fn clone(&self) -> ANeuralNetworksCompilation
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ANeuralNetworksCompilation
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ANeuralNetworksCompilation
### impl Send for ANeuralNetworksCompilation
### impl Sync for ANeuralNetworksCompilation
### impl Unpin for ANeuralNetworksCompilation
### impl UnwindSafe for ANeuralNetworksCompilation
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct nnapi_sys::ANeuralNetworksDevice
===
```
#[repr(C)]
pub struct ANeuralNetworksDevice { /* private fields */ }
```
Trait Implementations
---
### impl Clone for ANeuralNetworksDevice
#### fn clone(&self) -> ANeuralNetworksDevice
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ANeuralNetworksDevice
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ANeuralNetworksDevice
### impl Send for ANeuralNetworksDevice
### impl Sync for ANeuralNetworksDevice
### impl Unpin for ANeuralNetworksDevice
### impl UnwindSafe for ANeuralNetworksDevice
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct nnapi_sys::ANeuralNetworksEvent
===
```
#[repr(C)]
pub struct ANeuralNetworksEvent { /* private fields */ }
```
Trait Implementations
---
### impl Clone for ANeuralNetworksEvent
#### fn clone(&self) -> ANeuralNetworksEvent
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ANeuralNetworksEvent
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ANeuralNetworksEvent
### impl Send for ANeuralNetworksEvent
### impl Sync for ANeuralNetworksEvent
### impl Unpin for ANeuralNetworksEvent
### impl UnwindSafe for ANeuralNetworksEvent
Struct nnapi_sys::ANeuralNetworksExecution
===
```
#[repr(C)]
pub struct ANeuralNetworksExecution { /* private fields */ }
```
Trait Implementations
---
### impl Clone for ANeuralNetworksExecution
#### fn clone(&self) -> ANeuralNetworksExecution
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ANeuralNetworksExecution
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ANeuralNetworksExecution
### impl Send for ANeuralNetworksExecution
### impl Sync for ANeuralNetworksExecution
### impl Unpin for ANeuralNetworksExecution
### impl UnwindSafe for ANeuralNetworksExecution
Struct nnapi_sys::ANeuralNetworksMemory
===
```
#[repr(C)]
pub struct ANeuralNetworksMemory { /* private fields */ }
```
Trait Implementations
---
### impl Clone for ANeuralNetworksMemory
#### fn clone(&self) -> ANeuralNetworksMemory
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ANeuralNetworksMemory
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ANeuralNetworksMemory
### impl Send for ANeuralNetworksMemory
### impl Sync for ANeuralNetworksMemory
### impl Unpin for ANeuralNetworksMemory
### impl UnwindSafe for ANeuralNetworksMemory
Struct nnapi_sys::ANeuralNetworksMemoryDesc
===
```
#[repr(C)]
pub struct ANeuralNetworksMemoryDesc { /* private fields */ }
```
Trait Implementations
---
### impl Clone for ANeuralNetworksMemoryDesc
#### fn clone(&self) -> ANeuralNetworksMemoryDesc
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ANeuralNetworksMemoryDesc
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ANeuralNetworksMemoryDesc
### impl Send for ANeuralNetworksMemoryDesc
### impl Sync for ANeuralNetworksMemoryDesc
### impl Unpin for ANeuralNetworksMemoryDesc
### impl UnwindSafe for ANeuralNetworksMemoryDesc
Struct nnapi_sys::ANeuralNetworksModel
===
```
#[repr(C)]
pub struct ANeuralNetworksModel { /* private fields */ }
```
Trait Implementations
---
### impl Clone for ANeuralNetworksModel
#### fn clone(&self) -> ANeuralNetworksModel
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ANeuralNetworksModel
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ANeuralNetworksModel
### impl Send for ANeuralNetworksModel
### impl Sync for ANeuralNetworksModel
### impl Unpin for ANeuralNetworksModel
### impl UnwindSafe for ANeuralNetworksModel
Struct nnapi_sys::ANeuralNetworksOperandType
===
```
#[repr(C)]
pub struct ANeuralNetworksOperandType {
    pub type_: i32,
    pub dimensionCount: u32,
    pub dimensions: *const u32,
    pub scale: f32,
    pub zeroPoint: i32,
}
```
ANeuralNetworksOperandType describes the type of an operand.
This structure is used to describe both scalars and tensors.
A tensor operand type with all dimensions specified is “fully specified”. Whenever possible (i.e., whenever the dimensions are known at model construction time), a tensor operand type should be
(but is not required to be) fully specified, in order to enable the best possible performance.
If a tensor operand’s type is not fully specified, the dimensions of the operand are deduced from the operand types and values of the operation for which that operand is an output or from the corresponding
{@link ANEURALNETWORKS_IF} or {@link ANEURALNETWORKS_WHILE} operation input operand type in the case of referenced model input operands.
In the following situations, a tensor operand type must be fully specified:
* The operand has a constant value, set by
{@link ANeuralNetworksModel_setOperandValue} (with a
non-nullptr buffer) or
{@link ANeuralNetworksModel_setOperandValueFromMemory}.
* The operand is a model input (see
{@link ANeuralNetworksModel_identifyInputsAndOutputs}) of the main
model within a compilation. A fully specified tensor operand type
must either be provided to {@link ANeuralNetworksModel_addOperand};
or it must be provided to the corresponding
{@link ANeuralNetworksExecution_setInput}, or
{@link ANeuralNetworksExecution_setInputFromMemory}.
EXCEPTION: If the input is optional and omitted
(by passing nullptr for buffer to
{@link ANeuralNetworksExecution_setInput}) then it need
not have a fully specified tensor operand type.
* The operand is a model output (see
{@link ANeuralNetworksModel_identifyInputsAndOutputs}) of the main
model within a compilation and is to be used with {@link
ANeuralNetworksExecution_startComputeWithDependencies}.
A fully specified tensor operand type must either be provided
to {@link ANeuralNetworksModel_addOperand}; or it must be
provided to the corresponding
{@link ANeuralNetworksExecution_setOutput}, or
{@link ANeuralNetworksExecution_setOutputFromMemory}.
A tensor operand type of specified rank but some number of unspecified dimensions is represented by setting dimensionCount to the rank and each unspecified dimension to 0.
Available since API level 27.
Starting at API level 29, a tensor operand type of unspecified rank is represented by setting dimensionCount to 0 and dimensions to NULL (just as if it were a scalar operand type).
Fields
---
`type_: i32`The data type, e.g. ANEURALNETWORKS_FLOAT32.
`dimensionCount: u32`The number of dimensions (rank).
Must be 0 for scalars.
`dimensions: *const u32`The dimensions of the tensor.
Must be nullptr for scalars.
`scale: f32`The quantization scale.
Must be 0 when not applicable to an operand type.
See {@link OperandCode}.
`zeroPoint: i32`The quantization zero point.
Must be 0 when not applicable to an operand type.
See {@link OperandCode}.
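The rules above are easier to see in code. The following is a minimal, self-contained sketch that mirrors the struct definition locally so it compiles without the crate; the `tensor_f32` helper and the value of `ANEURALNETWORKS_TENSOR_FLOAT32` (3, per the NNAPI OperandCode enum) are illustrative assumptions, not part of `nnapi_sys`:

```rust
// Local mirror of the FFI struct so this sketch is stand-alone;
// in real code use nnapi_sys::ANeuralNetworksOperandType directly.
#[allow(non_snake_case)]
#[repr(C)]
pub struct ANeuralNetworksOperandType {
    pub type_: i32,
    pub dimensionCount: u32,
    pub dimensions: *const u32,
    pub scale: f32,
    pub zeroPoint: i32,
}

// Assumed OperandCode value for a float32 tensor.
pub const ANEURALNETWORKS_TENSOR_FLOAT32: i32 = 3;

/// Hypothetical helper: describe a fully specified float32 tensor.
/// The caller must keep `dims` alive for as long as the operand type is used.
pub fn tensor_f32(dims: &[u32]) -> ANeuralNetworksOperandType {
    ANeuralNetworksOperandType {
        type_: ANEURALNETWORKS_TENSOR_FLOAT32,
        dimensionCount: dims.len() as u32,
        dimensions: dims.as_ptr(),
        // scale and zeroPoint must be 0 for non-quantized types.
        scale: 0.0,
        zeroPoint: 0,
    }
}

fn main() {
    // A fully specified rank-2 tensor.
    let dims = [2u32, 3];
    let op = tensor_f32(&dims);
    assert_eq!(op.dimensionCount, 2);

    // A scalar: dimensionCount == 0 and dimensions == nullptr.
    let scalar = ANeuralNetworksOperandType {
        type_: ANEURALNETWORKS_TENSOR_FLOAT32,
        dimensionCount: 0,
        dimensions: std::ptr::null(),
        scale: 0.0,
        zeroPoint: 0,
    };
    assert!(scalar.dimensions.is_null());
}
```

Note that `dimensions` borrows the caller's array: the pointer must stay valid for as long as the operand type is in use, which is why bindings typically keep the dimension buffer alive alongside the struct.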
Trait Implementations
---
### impl Clone for ANeuralNetworksOperandType
#### fn clone(&self) -> ANeuralNetworksOperandType
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ANeuralNetworksOperandType
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ANeuralNetworksOperandType
### impl !Send for ANeuralNetworksOperandType
### impl !Sync for ANeuralNetworksOperandType
### impl Unpin for ANeuralNetworksOperandType
### impl UnwindSafe for ANeuralNetworksOperandType
Struct nnapi_sys::ANeuralNetworksSymmPerChannelQuantParams
===
```
#[repr(C)]
pub struct ANeuralNetworksSymmPerChannelQuantParams {
    pub channelDim: u32,
    pub scaleCount: u32,
    pub scales: *const f32,
}
```
Parameters for ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL operand.
Fields
---
`channelDim: u32`
`scaleCount: u32`The size of the scale array. Should be equal to dimension[channelDim] of the Operand.
`scales: *const f32`The array of scaling values for each channel. Each value must be greater than zero.
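As a hedged sketch of how these fields are populated, the snippet below mirrors the struct locally (so it compiles without the crate) and builds per-channel parameters for a hypothetical filter quantized along dimension 0; the `per_channel_quant` helper is illustrative, not part of `nnapi_sys`:

```rust
// Local mirror of the FFI struct so this sketch is stand-alone;
// in real code use nnapi_sys::ANeuralNetworksSymmPerChannelQuantParams.
#[allow(non_snake_case)]
#[repr(C)]
pub struct ANeuralNetworksSymmPerChannelQuantParams {
    pub channelDim: u32,
    pub scaleCount: u32,
    pub scales: *const f32,
}

/// Hypothetical helper: per-channel quantization along `channel_dim`.
/// `scales` needs one entry per channel, each strictly positive, and
/// must outlive the returned struct (it is borrowed by raw pointer).
pub fn per_channel_quant(
    channel_dim: u32,
    scales: &[f32],
) -> ANeuralNetworksSymmPerChannelQuantParams {
    assert!(scales.iter().all(|&s| s > 0.0), "each scale must be > 0");
    ANeuralNetworksSymmPerChannelQuantParams {
        channelDim: channel_dim,
        scaleCount: scales.len() as u32,
        scales: scales.as_ptr(),
    }
}

fn main() {
    // Three output channels quantized along dimension 0.
    let scales = [0.5f32, 0.25, 0.125];
    let params = per_channel_quant(0, &scales);
    assert_eq!(params.scaleCount, 3);
    assert_eq!(params.channelDim, 0);
}
```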
Trait Implementations
---
### impl Clone for ANeuralNetworksSymmPerChannelQuantParams
#### fn clone(&self) -> ANeuralNetworksSymmPerChannelQuantParams
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ANeuralNetworksSymmPerChannelQuantParams
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ANeuralNetworksSymmPerChannelQuantParams
### impl !Send for ANeuralNetworksSymmPerChannelQuantParams
### impl !Sync for ANeuralNetworksSymmPerChannelQuantParams
### impl Unpin for ANeuralNetworksSymmPerChannelQuantParams
### impl UnwindSafe for ANeuralNetworksSymmPerChannelQuantParams
Struct nnapi_sys::__darwin_pthread_handler_rec
===
```
#[repr(C)]
pub struct __darwin_pthread_handler_rec {
    pub __routine: Option<unsafe extern "C" fn(arg1: *mut c_void)>,
    pub __arg: *mut c_void,
    pub __next: *mut __darwin_pthread_handler_rec,
}
```
Fields
---
`__routine: Option<unsafe extern "C" fn(arg1: *mut c_void)>`
`__arg: *mut c_void`
`__next: *mut __darwin_pthread_handler_rec`
Trait Implementations
---
### impl Clone for __darwin_pthread_handler_rec
#### fn clone(&self) -> __darwin_pthread_handler_rec
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for __darwin_pthread_handler_rec
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for __darwin_pthread_handler_rec
### impl !Send for __darwin_pthread_handler_rec
### impl !Sync for __darwin_pthread_handler_rec
### impl Unpin for __darwin_pthread_handler_rec
### impl UnwindSafe for __darwin_pthread_handler_rec
Struct nnapi_sys::_opaque_pthread_attr_t
===
```
#[repr(C)]
pub struct _opaque_pthread_attr_t {
    pub __sig: c_long,
    pub __opaque: [c_char; 56],
}
```
Fields
---
`__sig: c_long`
`__opaque: [c_char; 56]`
Trait Implementations
---
### impl Clone for _opaque_pthread_attr_t
#### fn clone(&self) -> _opaque_pthread_attr_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_attr_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for _opaque_pthread_attr_t
### impl Send for _opaque_pthread_attr_t
### impl Sync for _opaque_pthread_attr_t
### impl Unpin for _opaque_pthread_attr_t
### impl UnwindSafe for _opaque_pthread_attr_t
Struct nnapi_sys::_opaque_pthread_cond_t
===
```
#[repr(C)]
pub struct _opaque_pthread_cond_t {
    pub __sig: c_long,
    pub __opaque: [c_char; 40],
}
```
Fields
---
`__sig: c_long`
`__opaque: [c_char; 40]`
Trait Implementations
---
### impl Clone for _opaque_pthread_cond_t
#### fn clone(&self) -> _opaque_pthread_cond_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_cond_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for _opaque_pthread_cond_t
### impl Send for _opaque_pthread_cond_t
### impl Sync for _opaque_pthread_cond_t
### impl Unpin for _opaque_pthread_cond_t
### impl UnwindSafe for _opaque_pthread_cond_t
Struct nnapi_sys::_opaque_pthread_condattr_t
===
```
#[repr(C)]
pub struct _opaque_pthread_condattr_t {
    pub __sig: c_long,
    pub __opaque: [c_char; 8],
}
```
Fields
---
`__sig: c_long`
`__opaque: [c_char; 8]`
Trait Implementations
---
### impl Clone for _opaque_pthread_condattr_t
#### fn clone(&self) -> _opaque_pthread_condattr_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_condattr_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for _opaque_pthread_condattr_t
### impl Send for _opaque_pthread_condattr_t
### impl Sync for _opaque_pthread_condattr_t
### impl Unpin for _opaque_pthread_condattr_t
### impl UnwindSafe for _opaque_pthread_condattr_t
Struct nnapi_sys::_opaque_pthread_mutex_t
===
```
#[repr(C)]
pub struct _opaque_pthread_mutex_t {
    pub __sig: c_long,
    pub __opaque: [c_char; 56],
}
```
Fields
---
`__sig: c_long`
`__opaque: [c_char; 56]`
Trait Implementations
---
### impl Clone for _opaque_pthread_mutex_t
#### fn clone(&self) -> _opaque_pthread_mutex_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_mutex_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for _opaque_pthread_mutex_t
### impl Send for _opaque_pthread_mutex_t
### impl Sync for _opaque_pthread_mutex_t
### impl Unpin for _opaque_pthread_mutex_t
### impl UnwindSafe for _opaque_pthread_mutex_t
Struct nnapi_sys::_opaque_pthread_mutexattr_t
===
```
#[repr(C)]
pub struct _opaque_pthread_mutexattr_t {
    pub __sig: c_long,
    pub __opaque: [c_char; 8],
}
```
Fields
---
`__sig: c_long`
`__opaque: [c_char; 8]`
Trait Implementations
---
### impl Clone for _opaque_pthread_mutexattr_t
#### fn clone(&self) -> _opaque_pthread_mutexattr_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_mutexattr_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for _opaque_pthread_mutexattr_t
### impl Send for _opaque_pthread_mutexattr_t
### impl Sync for _opaque_pthread_mutexattr_t
### impl Unpin for _opaque_pthread_mutexattr_t
### impl UnwindSafe for _opaque_pthread_mutexattr_t
Struct nnapi_sys::_opaque_pthread_once_t
===
```
#[repr(C)]
pub struct _opaque_pthread_once_t {
    pub __sig: c_long,
    pub __opaque: [c_char; 8],
}
```
Fields
---
`__sig: c_long`
`__opaque: [c_char; 8]`
Trait Implementations
---
### impl Clone for _opaque_pthread_once_t
#### fn clone(&self) -> _opaque_pthread_once_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_once_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for _opaque_pthread_once_t
### impl Send for _opaque_pthread_once_t
### impl Sync for _opaque_pthread_once_t
### impl Unpin for _opaque_pthread_once_t
### impl UnwindSafe for _opaque_pthread_once_t
Struct nnapi_sys::_opaque_pthread_rwlock_t
===
```
#[repr(C)]
pub struct _opaque_pthread_rwlock_t {
    pub __sig: c_long,
    pub __opaque: [c_char; 192],
}
```
Fields
---
`__sig: c_long`
`__opaque: [c_char; 192]`
Trait Implementations
---
### impl Clone for _opaque_pthread_rwlock_t
#### fn clone(&self) -> _opaque_pthread_rwlock_t
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_rwlock_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for _opaque_pthread_rwlock_t
### impl Send for _opaque_pthread_rwlock_t
### impl Sync for _opaque_pthread_rwlock_t
### impl Unpin for _opaque_pthread_rwlock_t
### impl UnwindSafe for _opaque_pthread_rwlock_t
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct nnapi_sys::_opaque_pthread_rwlockattr_t
===
```
#[repr(C)]
pub struct _opaque_pthread_rwlockattr_t {
pub __sig: c_long,
pub __opaque: [c_char; 16],
}
```
Fields
---
`__sig: c_long`
`__opaque: [c_char; 16]`
Trait Implementations
---
### impl Clone for _opaque_pthread_rwlockattr_t
#### fn clone(&self) -> _opaque_pthread_rwlockattr_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_rwlockattr_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for _opaque_pthread_rwlockattr_t
### impl Send for _opaque_pthread_rwlockattr_t
### impl Sync for _opaque_pthread_rwlockattr_t
### impl Unpin for _opaque_pthread_rwlockattr_t
### impl UnwindSafe for _opaque_pthread_rwlockattr_t
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct nnapi_sys::_opaque_pthread_t
===
```
#[repr(C)]
pub struct _opaque_pthread_t {
pub __sig: c_long,
pub __cleanup_stack: *mut __darwin_pthread_handler_rec,
pub __opaque: [c_char; 8176],
}
```
Fields
---
`__sig: c_long`
`__cleanup_stack: *mut __darwin_pthread_handler_rec`
`__opaque: [c_char; 8176]`
Trait Implementations
---
### impl Clone for _opaque_pthread_t
#### fn clone(&self) -> _opaque_pthread_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for _opaque_pthread_t
### impl !Send for _opaque_pthread_t
### impl !Sync for _opaque_pthread_t
### impl Unpin for _opaque_pthread_t
### impl UnwindSafe for _opaque_pthread_t
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum nnapi_sys::OperandCode
===
```
#[repr(C)]
pub enum OperandCode {
ANEURALNETWORKS_FLOAT32,
ANEURALNETWORKS_INT32,
ANEURALNETWORKS_UINT32,
ANEURALNETWORKS_TENSOR_FLOAT32,
ANEURALNETWORKS_TENSOR_INT32,
ANEURALNETWORKS_TENSOR_QUANT8_ASYMM,
ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL,
ANEURALNETWORKS_TENSOR_QUANT8_SYMM,
ANEURALNETWORKS_TENSOR_FLOAT16,
ANEURALNETWORKS_TENSOR_BOOL8,
ANEURALNETWORKS_TENSOR_QUANT16_ASYMM,
ANEURALNETWORKS_TENSOR_QUANT16_SYMM,
ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED,
ANEURALNETWORKS_MODEL,
}
```
Operand types.
The type of an operand in a model.
Types prefaced with ANEURALNETWORKS_TENSOR_* must be used for tensor data (i.e., tensors with at least one dimension). Types not prefaced by ANEURALNETWORKS_TENSOR_* represent scalar values and must have no dimensions.
Although we define many types, most operators accept just a few types. The most used are {@link ANEURALNETWORKS_TENSOR_FLOAT32}, {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, and {@link ANEURALNETWORKS_INT32}.
Available since API level 27.
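Because the enum is `#[repr(C)]` with no explicit discriminants, its variants are numbered consecutively from 0, and that integer is what ultimately crosses the FFI boundary. A sketch re-declaring the first few variants locally for illustration (the real crate defines the full enum):

```rust
// Illustrative re-declaration of the first few variants; with #[repr(C)]
// and no explicit discriminants, C-style enums number variants from 0.
#[allow(non_camel_case_types, dead_code)]
#[repr(C)]
#[derive(Clone, Copy)]
enum OperandCode {
    ANEURALNETWORKS_FLOAT32,        // = 0
    ANEURALNETWORKS_INT32,          // = 1
    ANEURALNETWORKS_UINT32,         // = 2
    ANEURALNETWORKS_TENSOR_FLOAT32, // = 3
}

fn main() {
    // The integer value is what an FFI call receives as the operand type.
    assert_eq!(OperandCode::ANEURALNETWORKS_FLOAT32 as i32, 0);
    assert_eq!(OperandCode::ANEURALNETWORKS_TENSOR_FLOAT32 as i32, 3);
}
```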
Variants
---
### ANEURALNETWORKS_FLOAT32
A 32 bit floating point scalar value.
### ANEURALNETWORKS_INT32
A signed 32 bit integer scalar value.
### ANEURALNETWORKS_UINT32
An unsigned 32 bit integer scalar value.
### ANEURALNETWORKS_TENSOR_FLOAT32
A tensor of 32 bit floating point values.
### ANEURALNETWORKS_TENSOR_INT32
A tensor of 32 bit signed integers.
### ANEURALNETWORKS_TENSOR_QUANT8_ASYMM
A tensor of 8 bit unsigned integers.
### ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL
A tensor of 8 bit signed integers.
### ANEURALNETWORKS_TENSOR_QUANT8_SYMM
A tensor of 8 bit signed integers.
### ANEURALNETWORKS_TENSOR_FLOAT16
A tensor of 16 bit floating point values.
### ANEURALNETWORKS_TENSOR_BOOL8
A tensor of 8 bit boolean values.
### ANEURALNETWORKS_TENSOR_QUANT16_ASYMM
A tensor of 16 bit unsigned integers.
### ANEURALNETWORKS_TENSOR_QUANT16_SYMM
A tensor of 16 bit signed integers.
### ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED
A tensor of 8 bit signed integers.
### ANEURALNETWORKS_MODEL
A reference to a model.
Auto Trait Implementations
---
### impl RefUnwindSafe for OperandCode
### impl Send for OperandCode
### impl Sync for OperandCode
### impl Unpin for OperandCode
### impl UnwindSafe for OperandCode
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Enum nnapi_sys::OperationCode
===
```
pub enum OperationCode {
ANEURALNETWORKS_ADD,
ANEURALNETWORKS_AVERAGE_POOL_2D,
ANEURALNETWORKS_CONCATENATION,
ANEURALNETWORKS_CONV_2D,
ANEURALNETWORKS_DEPTHWISE_CONV_2D,
ANEURALNETWORKS_DEPTH_TO_SPACE,
ANEURALNETWORKS_DEQUANTIZE,
ANEURALNETWORKS_EMBEDDING_LOOKUP,
ANEURALNETWORKS_FLOOR,
ANEURALNETWORKS_FULLY_CONNECTED,
ANEURALNETWORKS_HASHTABLE_LOOKUP,
ANEURALNETWORKS_L2_NORMALIZATION,
ANEURALNETWORKS_L2_POOL_2D,
ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION,
ANEURALNETWORKS_LOGISTIC,
ANEURALNETWORKS_LSH_PROJECTION,
ANEURALNETWORKS_LSTM,
ANEURALNETWORKS_MAX_POOL_2D,
ANEURALNETWORKS_MUL,
ANEURALNETWORKS_RELU,
ANEURALNETWORKS_RELU1,
ANEURALNETWORKS_RELU6,
ANEURALNETWORKS_RESHAPE,
ANEURALNETWORKS_RESIZE_BILINEAR,
ANEURALNETWORKS_RNN,
ANEURALNETWORKS_SOFTMAX,
ANEURALNETWORKS_SPACE_TO_DEPTH,
ANEURALNETWORKS_SVDF,
ANEURALNETWORKS_TANH,
ANEURALNETWORKS_BATCH_TO_SPACE_ND,
ANEURALNETWORKS_DIV,
ANEURALNETWORKS_MEAN,
ANEURALNETWORKS_PAD,
ANEURALNETWORKS_SPACE_TO_BATCH_ND,
ANEURALNETWORKS_SQUEEZE,
ANEURALNETWORKS_STRIDED_SLICE,
ANEURALNETWORKS_SUB,
ANEURALNETWORKS_TRANSPOSE,
ANEURALNETWORKS_ABS,
ANEURALNETWORKS_ARGMAX,
ANEURALNETWORKS_ARGMIN,
ANEURALNETWORKS_AXIS_ALIGNED_BBOX_TRANSFORM,
ANEURALNETWORKS_BIDIRECTIONAL_SEQUENCE_LSTM,
ANEURALNETWORKS_BIDIRECTIONAL_SEQUENCE_RNN,
ANEURALNETWORKS_BOX_WITH_NMS_LIMIT,
ANEURALNETWORKS_CAST,
ANEURALNETWORKS_CHANNEL_SHUFFLE,
ANEURALNETWORKS_DETECTION_POSTPROCESSING,
ANEURALNETWORKS_EQUAL,
ANEURALNETWORKS_EXP,
ANEURALNETWORKS_EXPAND_DIMS,
ANEURALNETWORKS_GATHER,
ANEURALNETWORKS_GENERATE_PROPOSALS,
ANEURALNETWORKS_GREATER,
ANEURALNETWORKS_GREATER_EQUAL,
ANEURALNETWORKS_GROUPED_CONV_2D,
ANEURALNETWORKS_HEATMAP_MAX_KEYPOINT,
ANEURALNETWORKS_INSTANCE_NORMALIZATION,
ANEURALNETWORKS_LESS,
ANEURALNETWORKS_LESS_EQUAL,
ANEURALNETWORKS_LOG,
ANEURALNETWORKS_LOGICAL_AND,
ANEURALNETWORKS_LOGICAL_NOT,
ANEURALNETWORKS_LOGICAL_OR,
ANEURALNETWORKS_LOG_SOFTMAX,
ANEURALNETWORKS_MAXIMUM,
ANEURALNETWORKS_MINIMUM,
ANEURALNETWORKS_NEG,
ANEURALNETWORKS_NOT_EQUAL,
ANEURALNETWORKS_PAD_V2,
ANEURALNETWORKS_POW,
ANEURALNETWORKS_PRELU,
ANEURALNETWORKS_QUANTIZE,
ANEURALNETWORKS_QUANTIZED_16BIT_LSTM,
ANEURALNETWORKS_RANDOM_MULTINOMIAL,
ANEURALNETWORKS_REDUCE_ALL,
ANEURALNETWORKS_REDUCE_ANY,
ANEURALNETWORKS_REDUCE_MAX,
ANEURALNETWORKS_REDUCE_MIN,
ANEURALNETWORKS_REDUCE_PROD,
ANEURALNETWORKS_REDUCE_SUM,
ANEURALNETWORKS_ROI_ALIGN,
ANEURALNETWORKS_ROI_POOLING,
ANEURALNETWORKS_RSQRT,
ANEURALNETWORKS_SELECT,
ANEURALNETWORKS_SIN,
ANEURALNETWORKS_SLICE,
ANEURALNETWORKS_SPLIT,
ANEURALNETWORKS_SQRT,
ANEURALNETWORKS_TILE,
ANEURALNETWORKS_TOPK_V2,
ANEURALNETWORKS_TRANSPOSE_CONV_2D,
ANEURALNETWORKS_UNIDIRECTIONAL_SEQUENCE_LSTM,
ANEURALNETWORKS_UNIDIRECTIONAL_SEQUENCE_RNN,
ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR,
ANEURALNETWORKS_QUANTIZED_LSTM,
ANEURALNETWORKS_IF,
ANEURALNETWORKS_WHILE,
ANEURALNETWORKS_ELU,
ANEURALNETWORKS_HARD_SWISH,
ANEURALNETWORKS_FILL,
ANEURALNETWORKS_RANK,
ANEURALNETWORKS_BATCH_MATMUL,
None,
}
```
Operation types.
The type of an operation in a model.
Available since API level 27.
Variants
---
### ANEURALNETWORKS_ADD
Adds two tensors, element-wise.
Takes two input tensors of identical {@link OperandCode} and compatible dimensions. The output is the sum of both input tensors, optionally modified by an activation function.
Two dimensions are compatible when:
1. they are equal, or
2. one of them is 1.
The size of the output is the maximum size along each dimension of the input operands. It starts with the trailing dimensions, and works its way forward.
Example:
```
input1.dimension = {4, 1, 2}
input2.dimension = {5, 4, 3, 1}
output.dimension = {5, 4, 3, 2}
```
Since API level 29, generic zero-sized input tensors are supported. A zero dimension is only compatible with 0 or 1. The size of an output dimension is zero if either of the corresponding input dimensions is zero.
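The compatibility rules above amount to trailing-aligned shape broadcasting. A hypothetical helper (not part of nnapi_sys) that computes the output shape:

```rust
// Computes the broadcast output shape per the rules above: dimensions
// are aligned from the trailing end, and each pair must be equal or
// one of them must be 1 (a zero dimension pairs with 0 or 1 and
// propagates as zero).
fn broadcast_shape(a: &[usize], b: &[usize]) -> Option<Vec<usize>> {
    let n = a.len().max(b.len());
    let mut out = vec![0; n];
    for i in 0..n {
        // Missing leading dimensions are treated as 1.
        let da = if i < a.len() { a[a.len() - 1 - i] } else { 1 };
        let db = if i < b.len() { b[b.len() - 1 - i] } else { 1 };
        out[n - 1 - i] = match (da, db) {
            (x, y) if x == y => x,
            (1, y) => y,
            (x, 1) => x,
            _ => return None, // incompatible dimensions
        };
    }
    Some(out)
}

fn main() {
    // Matches the worked example above.
    assert_eq!(
        broadcast_shape(&[4, 1, 2], &[5, 4, 3, 1]),
        Some(vec![5, 4, 3, 2])
    );
}
```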
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
* {@link ANEURALNETWORKS_TENSOR_INT32} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: A tensor.
* 1: A tensor of the same {@link OperandCode}, and compatible dimensions as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scales and zeroPoint can be different from input0 scale and zeroPoint.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
For a {@link ANEURALNETWORKS_TENSOR_INT32} tensor,
the {@link FuseCode} must be “NONE”.
Outputs:
* 0: The sum, a tensor of the same {@link OperandCode} as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint can be different from inputs’ scale and zeroPoint.
Available since API level 27.
### ANEURALNETWORKS_AVERAGE_POOL_2D
Performs a 2-D average pooling operation.
The output dimensions are functions of the filter dimensions, stride, and padding.
The values in the output tensor are computed as:
```
output[b, i, j, channel] =
sum_{di, dj}(
input[b, strides[1] * i + di, strides[2] * j + dj, channel]
) / sum(1)
```
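The pooling formula can be exercised on a tiny single-batch, single-channel example. The helper below is a hypothetical illustration (valid-region windows only, no padding), not an nnapi_sys API:

```rust
// Naive 2-D average pooling over a [height][width] slice: each output
// cell is the mean of a filter_h x filter_w window, stepping by stride.
fn average_pool_2d(
    input: &[Vec<f32>],
    filter: (usize, usize), // (filter_h, filter_w)
    stride: (usize, usize), // (stride_h, stride_w)
) -> Vec<Vec<f32>> {
    let (fh, fw) = filter;
    let (sh, sw) = stride;
    let out_h = (input.len() - fh) / sh + 1;
    let out_w = (input[0].len() - fw) / sw + 1;
    let mut out = vec![vec![0.0; out_w]; out_h];
    for i in 0..out_h {
        for j in 0..out_w {
            let mut sum = 0.0;
            for di in 0..fh {
                for dj in 0..fw {
                    sum += input[sh * i + di][sw * j + dj];
                }
            }
            out[i][j] = sum / (fh * fw) as f32;
        }
    }
    out
}

fn main() {
    let input = vec![
        vec![1.0, 2.0, 3.0, 4.0],
        vec![5.0, 6.0, 7.0, 8.0],
    ];
    // 2x2 window, stride 2: means of [1,2,5,6] and [3,4,7,8].
    assert_eq!(average_pool_2d(&input, (2, 2), (2, 2)), vec![vec![3.5, 5.5]]);
}
```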
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order [batch, height, width, channels]. Alternatively, the data layout can be NCHW, with the storage order [batch, channels, height, width].
NCHW is supported since API level 29.
Both explicit padding and implicit padding are supported.
Inputs (explicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the left, in the ‘width’ dimension.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the right, in the ‘width’ dimension.
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the top, in the ‘height’ dimension.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the bottom, in the ‘height’ dimension.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter width.
* 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter height.
* 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 10: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
Inputs (implicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit padding scheme, has to be one of the
{@link PaddingCode} values.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter width.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter height.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 7: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
Outputs:
* 0: The output 4-D tensor, of shape
[batches, out_height, out_width, depth].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 27.
### ANEURALNETWORKS_CONCATENATION
Concatenates the input tensors along the given dimension.
The input tensors must have identical {@link OperandCode} and the same dimensions except the dimension along the concatenation axis.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
(full support since API level 29, see the input section)
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0 ~ n-1: The list of n input tensors, of shape
[D0, D1, …, Daxis(i), …, Dm].
Before API level 29, all input tensors of
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
must have the same scale and zeroPoint as the output tensor.
Input tensors of
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
are allowed to have different scale and zeroPoint.
Since API level 29, zero-sized tensors are supported.
* n: An {@link ANEURALNETWORKS_INT32} scalar, specifying the concatenation axis.
Outputs:
* 0: The output, a tensor of the same {@link OperandCode} as the input tensors. The output shape is [D0, D1, …, sum(Daxis(i)), …, Dm].
Since API level 29, for a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} tensor,
the scale and zeroPoint values can be different from input tensors. Before API level 29 they have to be the same as for the input tensors.
Available since API level 27.
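The output-shape rule above (all dimensions equal except the concatenation axis, which is summed) can be sketched with a hypothetical helper:

```rust
// Output shape of concatenation: dimensions must match everywhere
// except `axis`, which is summed across the inputs.
fn concat_shape(shapes: &[Vec<usize>], axis: usize) -> Option<Vec<usize>> {
    let mut out = shapes.first()?.clone();
    for s in shapes.iter().skip(1) {
        if s.len() != out.len() {
            return None; // ranks must agree
        }
        for d in 0..out.len() {
            if d == axis {
                out[d] += s[d];
            } else if out[d] != s[d] {
                return None; // mismatched non-axis dimension
            }
        }
    }
    Some(out)
}

fn main() {
    // Concatenating [2,3,4] and [2,5,4] along axis 1 gives [2,8,4].
    assert_eq!(
        concat_shape(&[vec![2, 3, 4], vec![2, 5, 4]], 1),
        Some(vec![2, 8, 4])
    );
}
```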
### ANEURALNETWORKS_CONV_2D
Performs a 2-D convolution operation.
The CONV_2D op sweeps a 2-D filter that can mix channels together over a batch of images, applying the filter to each window of each image of the appropriate size.
The output dimensions are functions of the filter dimensions, stride, and padding.
The values in the output tensor are computed as:
```
output[b, i, j, channel] =
sum_{di, dj, k} (
input[b, strides[1] * i + di, strides[2] * j + dj, k] *
filter[channel, di, dj, k]
) + bias[channel]
```
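The output spatial dimensions implied by the explicit-padding inputs below (pads, strides, and optional dilation) follow the usual convolution arithmetic. A small sketch with hypothetical names:

```rust
// Output spatial size for explicit padding, per dimension:
// effective_filter = dilation * (filter - 1) + 1
// out = (in + pad_before + pad_after - effective_filter) / stride + 1
fn conv_out_dim(
    input: usize,
    pad_before: usize,
    pad_after: usize,
    filter: usize,
    stride: usize,
    dilation: usize,
) -> usize {
    let effective = dilation * (filter - 1) + 1;
    (input + pad_before + pad_after - effective) / stride + 1
}

fn main() {
    // 224-wide input, 7-wide filter, stride 2, padding 3 on each side.
    assert_eq!(conv_out_dim(224, 3, 3, 7, 2, 1), 112);
}
```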
Supported tensor {@link OperandCode} configurations:
* 32 bit floating point:
  * {@link ANEURALNETWORKS_TENSOR_FLOAT32} for input, filter, output, and bias.
* Quantized:
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, filter, and output.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to input.scale * filter.scale).
Available since API level 29:
* 16 bit floating point:
  * {@link ANEURALNETWORKS_TENSOR_FLOAT16} for input, filter, output, and bias.
* Quantized with symmetric per channel quantization for the filter:
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input and output.
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0; each value's scale is separate and equal to input.scale * filter.scales[channel]).
Available since API level 30:
* Quantized signed:
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, filter, and output.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to input.scale * filter.scale).
* Quantized signed with filter symmetric per channel quantization:
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input and output.
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0; each value's scale is separate and equal to input.scale * filter.scales[channel]).
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order [batch, height, width, channels]. Alternatively, the data layout can be NCHW, with the storage order [batch, channels, height, width].
NCHW is supported since API level 29.
Both explicit padding and implicit padding are supported.
Inputs (explicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth_in],
specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: A 4-D tensor, of shape
[depth_out, filter_height, filter_width, depth_in], specifying the filter.
For tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}
the channel dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim)
must be set to 0.
* 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32}
or {@link ANEURALNETWORKS_TENSOR_FLOAT16} the bias must be of the same type.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale == input_scale * filter_scale.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale of 0. The actual scale of each value ‘i’ is equal to bias_scale[i] = input_scale * filter_scale[i].
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the left, in the ‘width’ dimension.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the right, in the ‘width’ dimension.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the top, in the ‘height’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the bottom, in the ‘height’ dimension.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 10: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
* 11: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation factor for width. Defaults to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on width dimension. If this input is set,
input 12 (dilation factor for height) must be specified as well.
Available since API level 29.
* 12: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation factor for height. Defaults to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on height dimension. If this input is set,
input 11 (dilation factor for width) must be specified as well.
Available since API level 29.
Inputs (implicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth_in],
specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: A 4-D tensor, of shape
[depth_out, filter_height, filter_width, depth_in], specifying the filter.
For tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}
the channel dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim)
must be set to 0.
* 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32}
or {@link ANEURALNETWORKS_TENSOR_FLOAT16} the bias must be of the same type.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale == input_scale * filter_scale.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale of 0. The actual scale of each value ‘i’ is equal to bias_scale[i] = input_scale * filter_scale[i].
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit padding scheme, has to be one of the
{@link PaddingCode} values.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 7: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
* 8: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation factor for width. Defaults to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on width dimension. If this input is set,
input 9 (dilation factor for height) must be specified as well.
Available since API level 29.
* 9: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation factor for height. Defaults to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on height dimension. If this input is set,
input 8 (dilation factor for width) must be specified as well.
Available since API level 29.
Outputs:
* 0: The output 4-D tensor, of shape
[batches, out_height, out_width, depth_out].
Before API level 29, for output tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM},
the following condition must be satisfied: output_scale > input_scale * filter_scale
Available since API level 27.
### ANEURALNETWORKS_DEPTHWISE_CONV_2D
Performs a depthwise 2-D convolution operation.
Given an input tensor of shape [batches, height, width, depth_in] and a filter tensor of shape [1, filter_height, filter_width, depth_out]
containing depth_out convolutional filters of depth 1, DEPTHWISE_CONV applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together.
The output has depth_out = depth_in * depth_multiplier channels.
The output dimensions are functions of the filter dimensions, stride, and padding.
The values in the output tensor are computed as:
```
output[b, i, j, k * channel_multiplier + q] =
sum_{di, dj} (
input[b, strides[1] * i + di, strides[2] * j + dj, k] *
filter[1, di, dj, k * channel_multiplier + q]
) + bias[k * channel_multiplier + q]
```
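The channel bookkeeping in the formula above can be made concrete: output channel `k * channel_multiplier + q` reads only input channel `k`, so depth_out = depth_in * depth_multiplier. A minimal sketch (hypothetical helper, not part of nnapi_sys):

```rust
// In a depthwise convolution, output channel `out_c` is produced from
// input channel `out_c / multiplier`, using filter channel `out_c`.
fn depthwise_input_channel(out_c: usize, multiplier: usize) -> usize {
    out_c / multiplier
}

fn main() {
    let (depth_in, multiplier) = (3, 2);
    // depth_out = depth_in * depth_multiplier, as stated above.
    let depth_out = depth_in * multiplier;
    assert_eq!(depth_out, 6);
    // Output channels 0 and 1 both read input channel 0; 4 and 5 read 2.
    assert_eq!(depthwise_input_channel(0, multiplier), 0);
    assert_eq!(depthwise_input_channel(1, multiplier), 0);
    assert_eq!(depthwise_input_channel(5, multiplier), 2);
}
```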
Supported tensor {@link OperandCode} configurations:
* 32 bit floating point:
  * {@link ANEURALNETWORKS_TENSOR_FLOAT32} for input, filter, output, and bias.
* Quantized:
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, filter, and output.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to input.scale * filter.scale).
Available since API level 29:
* 16 bit floating point:
  * {@link ANEURALNETWORKS_TENSOR_FLOAT16} for input, filter, output, and bias.
* Quantized with symmetric per channel quantization for the filter:
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input and output.
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0; each value's scale is separate and equal to input.scale * filter.scales[channel]).
Available since API level 30:
* Quantized signed:
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, filter, and output.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to input.scale * filter.scale).
* Quantized signed with filter symmetric per channel quantization:
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input and output.
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0; each value's scale is separate and equal to input.scale * filter.scales[channel]).
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order [batch, height, width, channels]. Alternatively, the data layout can be NCHW, with the storage order [batch, channels, height, width].
NCHW is supported since API level 29.
Both explicit padding and implicit padding are supported.
Inputs (explicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth_in],
specifying the input.
* 1: A 4-D tensor, of shape [1, filter_height, filter_width, depth_out],
specifying the filter.
For tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}
the channel dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim)
must be set to 3.
* 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32}
or {@link ANEURALNETWORKS_TENSOR_FLOAT16} the bias must be of the same type.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale == input_scale * filter_scale.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale of 0. The actual scale of each value ‘i’ is equal to bias_scale[i] = input_scale * filter_scale[i].
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the left, in the ‘width’ dimension.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the right, in the ‘width’ dimension.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the top, in the ‘height’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the bottom, in the ‘height’ dimension.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 9: An {@link ANEURALNETWORKS_INT32} scalar, specifying the depthwise multiplier.
* 10: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 11: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
* 12: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation factor for width. Defaults to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on width dimension. If this input is set,
input 13 (dilation factor for height) must be specified as well.
Available since API level 29.
* 13: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation factor for height. Defaults to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on height dimension. If this input is set,
input 12 (dilation factor for width) must be specified as well.
Available since API level 29.
Inputs (implicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth_in],
specifying the input.
* 1: A 4-D tensor, of shape [1, filter_height, filter_width, depth_out],
specifying the filter.
* 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32}
or {@link ANEURALNETWORKS_TENSOR_FLOAT16} the bias must be of the same type.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale == input_scale * filter_scale.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale of 0. The actual scale of each value ‘i’ is equal to bias_scale[i] = input_scale * filter_scale[i].
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit padding scheme, has to be one of the
{@link PaddingCode} values.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the depthwise multiplier.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 8: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
* 9: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation factor for width. Defaults to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on width dimension. If this input is set,
input 10 (dilation factor for height) must be specified as well.
Available since API level 29.
* 10: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation factor for height. Defaults to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on height dimension. If this input is set,
input 9 (dilation factor for width) must be specified as well.
Available since API level 29.
Outputs:
* 0: The output 4-D tensor, of shape
[batches, out_height, out_width, depth_out]. Before API level 29, for output tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM},
the following condition must be satisfied:
output_scale > input_scale * filter_scale
Available since API level 27.
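As a reference for the arithmetic above, here is a minimal pure-Python sketch of a depthwise convolution over a single NHWC batch entry (explicit padding of 0, no dilation). This is an illustration of the math only, not the NNAPI C API; the function name and nested-list layout are assumptions made for the example.

```python
def depthwise_conv2d(inp, filt, bias, stride, multiplier):
    """Depthwise conv for one NHWC batch entry, padding 0, no dilation.
    inp:  [H][W][depth_in] nested lists
    filt: [filter_h][filter_w][depth_out], depth_out = depth_in * multiplier
    bias: [depth_out]"""
    H, W, din = len(inp), len(inp[0]), len(inp[0][0])
    fh, fw = len(filt), len(filt[0])
    oh = (H - fh) // stride + 1
    ow = (W - fw) // stride + 1
    out = [[[0.0] * (din * multiplier) for _ in range(ow)] for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            for c in range(din):                  # input channel
                for m in range(multiplier):
                    oc = c * multiplier + m       # output channel
                    acc = bias[oc]
                    for di in range(fh):
                        for dj in range(fw):
                            acc += inp[stride*i+di][stride*j+dj][c] * filt[di][dj][oc]
                    out[i][j][oc] = acc
    return out
```

Each output channel `c * multiplier + m` reads only input channel `c`, which is what distinguishes a depthwise convolution from a regular one.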
### ANEURALNETWORKS_DEPTH_TO_SPACE
Rearranges data from depth into blocks of spatial data.
More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. The value block_size indicates the input block size and how the data is moved.
Chunks of data of size block_size * block_size from depth are rearranged into non-overlapping blocks of size block_size x block_size.
The width of the output tensor is input_width * block_size, whereas the height is input_height * block_size. The depth of the input tensor must be divisible by block_size * block_size.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
NCHW is supported since API level 29.
Inputs:
* 0: A 4-D tensor, of shape [batches, height, width, depth_in],
specifying the input.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the block_size.
block_size must be >=1 and block_size * block_size must be a divisor of the input depth.
* 2: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
Outputs:
* 0: The output 4-D tensor, of shape [batch, height*block_size,
width*block_size, depth/(block_size*block_size)].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 27.
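The rearrangement can be sketched in pure Python for one NHWC batch entry. The channel-to-spatial ordering used here follows the convention implied by the output shape above (the same as TensorFlow's `depth_to_space`); treat the exact ordering as an assumption of this illustration, not an API guarantee.

```python
def depth_to_space(inp, block_size):
    """DEPTH_TO_SPACE for one NHWC batch entry.
    inp: [H][W][depth] nested lists, depth divisible by block_size**2."""
    H, W, D = len(inp), len(inp[0]), len(inp[0][0])
    bs = block_size
    d_out = D // (bs * bs)
    out = [[[0] * d_out for _ in range(W * bs)] for _ in range(H * bs)]
    for h in range(H):
        for w in range(W):
            for di in range(bs):
                for dj in range(bs):
                    for c in range(d_out):
                        # Channel (di*bs + dj)*d_out + c becomes spatial
                        # offset (di, dj) within the output block.
                        out[h*bs + di][w*bs + dj][c] = inp[h][w][(di*bs + dj)*d_out + c]
    return out
```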
### ANEURALNETWORKS_DEQUANTIZE
Dequantizes the input tensor.
The formula is:
```
output = (input - zeroPoint) * scale.
```
Supported input tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported output tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}.
Supported tensor rank: up to 4
Inputs:
* 0: A tensor.
Since API level 29, this tensor may be zero-sized.
Outputs:
* 0: A tensor with the same shape as input0.
Available since API level 27.
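The formula above is simple enough to state directly; a one-line sketch (illustration only, operating on a flat list of quantized values):

```python
def dequantize(values, scale, zero_point):
    """output = (input - zeroPoint) * scale, applied element-wise."""
    return [(v - zero_point) * scale for v in values]
```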
### ANEURALNETWORKS_EMBEDDING_LOOKUP
Looks up sub-tensors in the input tensor.
This operator takes for input a tensor of values (Values) and a one-dimensional tensor of selection indices (Lookups).
The output tensor is the concatenation of sub-tensors of Values as selected by Lookups.
Think of Values as being sliced along its first dimension:
The entries in Lookups select which slices are concatenated together to create the output tensor.
For example, if Values has shape of [40, 200, 300] and Lookups has shape of [3], all three values found in Lookups are expected to be between 0 and 39. The resulting tensor must have shape of [3, 200, 300].
If a value in Lookups is out of bounds, the operation must fail and an error must be reported.
Supported value tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 30)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported value tensor rank: from 2
Inputs:
* 0: Lookups. A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}.
The values are indices into the first dimension of Values.
* 1: Values. An n-D tensor, where n >= 2, from which sub-tensors are extracted.
Output:
* 0: An n-D tensor with the same rank and shape as the Values tensor, except for the first dimension, which has the same size as Lookups’ only dimension.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input1.
Available since API level 27.
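The slicing-and-concatenation behavior, including the mandated failure on an out-of-bounds index, can be sketched as follows (illustration only; `values` slices are modeled as nested lists):

```python
def embedding_lookup(lookups, values):
    """Concatenate slices of `values` (sliced along dim 0) selected by
    `lookups`. An out-of-bounds index is an error, per the spec."""
    n = len(values)
    out = []
    for idx in lookups:
        if not 0 <= idx < n:
            raise IndexError("lookup index out of bounds")
        out.append(values[idx])
    return out
```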
### ANEURALNETWORKS_FLOOR
Computes element-wise floor() on the input tensor.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: up to 4
Inputs:
* 0: A tensor.
Outputs:
* 0: The output tensor, of the same {@link OperandCode} and dimensions as the input tensor.
Available since API level 27.
### ANEURALNETWORKS_FULLY_CONNECTED
Denotes a fully (densely) connected layer, which connects all elements in the input tensor with each element in the output tensor.
This layer implements the operation:
```
outputs = activation(inputs * weights’ + bias)
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4.
Inputs:
* 0: A tensor of at least rank 2, specifying the input. If rank is greater than 2, then it gets flattened to a 2-D Tensor. The
(flattened) 2-D Tensor is reshaped (if necessary) to
[batch_size, input_size], where “input_size” corresponds to the number of inputs to the layer, matching the second dimension of weights, and “batch_size” is calculated by dividing the number of elements by “input_size”.
Since API level 29, zero batch_size is supported for this tensor.
* 1: A 2-D tensor, specifying the weights, of shape
[num_units, input_size], where “num_units” corresponds to the number of output nodes.
* 2: A 1-D tensor, of shape [num_units], specifying the bias. For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the bias should also be of {@link ANEURALNETWORKS_TENSOR_FLOAT32}.
For input tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32},
with zeroPoint of 0 and bias_scale == input_scale * filter_scale.
* 3: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
Outputs:
* 0: The output tensor, of shape [batch_size, num_units]. Before API level 29, for output tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, the following condition must be satisfied: output_scale > input_scale * filter_scale.
Available since API level 27.
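The `outputs = activation(inputs * weights’ + bias)` computation can be sketched in pure Python. The activation is passed as a plain callable here (defaulting to identity) rather than a {@link FuseCode}; that simplification is an assumption of this illustration.

```python
def fully_connected(inputs, weights, bias, activation=lambda x: x):
    """outputs = activation(inputs . weights^T + bias).
    inputs:  [batch_size][input_size]
    weights: [num_units][input_size]
    bias:    [num_units]"""
    out = []
    for row in inputs:
        unit_vals = []
        for w, b in zip(weights, bias):
            acc = b + sum(x * wi for x, wi in zip(row, w))
            unit_vals.append(activation(acc))
        out.append(unit_vals)
    return out
```

Note that the weights matrix is stored `[num_units, input_size]`, so each output unit is a dot product of an input row with a weights row, matching the transpose in the formula.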
### ANEURALNETWORKS_HASHTABLE_LOOKUP
Looks up sub-tensors in the input tensor using a key-value map.
This operator takes for input a tensor of values (Values),
a one-dimensional tensor of selection values (Lookups) and a one-dimensional tensor that maps these values to Values indexes. The output tensor is the concatenation of sub-tensors of Values as selected by Lookups via Keys.
Think of Values as being sliced along its outer-most dimension.
The output is a concatenation of selected slices, with one slice for each entry of Lookups. The slice selected is the one at the same index as the Keys entry that matches the value in Lookups.
For a hit, the corresponding sub-tensor of Values is included in the Output tensor. For a miss, the corresponding sub-tensor in Output must have zero values.
For example, if Values has shape of [40, 200, 300],
Keys should have a shape of [40]. If Lookups tensor has shape of [3], three slices are being concatenated, so the resulting tensor must have the shape of [3, 200, 300]. If the first entry in Lookups has the value 123456, that value must be located in Keys tensor.
If the sixth entry of Keys contains 123456, the sixth slice of Values must be selected. If no entry in Keys has 123456, a slice of zeroes must be concatenated.
Supported value tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
Supported value tensor rank: from 2
Inputs:
* 0: Lookups. A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor with shape [ k ].
* 1: Keys. A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor with shape
[ n ]; Keys and Values pair represent a map, i.e., the ith element in Keys (Keys[i]) is the key to select the ith sub-tensor in Values
(Values[i]), where 0 <= i <= n-1. Keys tensor *MUST* be sorted in ascending order.
* 2: Values. A tensor with shape of [ n, … ]; i.e., the first dimension must be n.
Outputs:
* 0: Output. A tensor with shape [ k …].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} tensor,
the scale and zeroPoint must be the same as input2.
* 1: Hits. A boolean tensor with shape [ k ] indicates whether the lookup hits (True) or not (False).
Stored as {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} with offset 0 and scale 1.0f.
A non-zero byte represents True, a hit. A zero indicates otherwise.
Available since API level 27.
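The hit/miss behavior described above can be sketched as follows (illustration only; hits are modeled as plain 0/1 integers rather than a quantized tensor):

```python
def hashtable_lookup(lookups, keys, values):
    """For each lookup, emit the Values slice whose Keys entry matches,
    or a zero slice on a miss; also emit a per-lookup hit indicator."""
    index = {k: i for i, k in enumerate(keys)}
    slice_len = len(values[0])
    out, hits = [], []
    for q in lookups:
        i = index.get(q)
        if i is None:
            out.append([0] * slice_len)   # miss: zero-filled sub-tensor
            hits.append(0)
        else:
            out.append(values[i])         # hit: copy the matching slice
            hits.append(1)
    return out, hits
```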
### ANEURALNETWORKS_L2_NORMALIZATION
Applies L2 normalization along the axis dimension.
The values in the output tensor are computed as:
```
output[batch, row, col, channel] =
input[batch, row, col, channel] /
sqrt(sum_{c} pow(input[batch, row, col, c], 2))
```
By default the axis dimension is the last dimension of the input tensor.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4. Tensors with rank less than 4 are only supported since API level 29.
Inputs:
* 0: An n-D tensor, specifying the tensor to be normalized.
* 1: An optional {@link ANEURALNETWORKS_INT32} scalar, default to -1,
specifying the dimension normalization would be performed on.
Negative index is used to specify axis from the end (e.g. -1 for the last axis). Must be in the range [-n, n).
Available since API level 29.
Outputs:
* 0: A tensor of the same {@link OperandCode} and same shape as input0.
For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM},
the scale must be 1.f / 128 and the zeroPoint must be 128.
For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the scale must be 1.f / 128 and the zeroPoint must be 0.
NOTE: Before API level 30, if the elements along an axis are all zeros,
the result is undefined. Since API level 30, if the elements along an axis are all zeros, the result is logical zero.
Available since API level 27.
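For a single 1-D slice along the normalization axis, the formula reduces to dividing by the slice's L2 norm (illustration only; the all-zero case handling differs by API level as noted above and is not modeled here):

```python
import math

def l2_normalize(vec):
    """Normalize one 1-D slice: x / sqrt(sum(x**2))."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]
```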
### ANEURALNETWORKS_L2_POOL_2D
Performs a 2-D L2 pooling operation.
The output dimensions are functions of the filter dimensions, stride, and padding.
The values in the output tensor are computed as:
```
output[b, i, j, c] =
sqrt(sum_{di, dj} pow(input[b, strides[1] * i + di, strides[2] * j + dj, c], 2) /
sum(1))
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
NCHW is supported since API level 29.
Both explicit padding and implicit padding are supported.
Inputs (explicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the left, in the ‘width’ dimension.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the right, in the ‘width’ dimension.
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the top, in the ‘height’ dimension.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the bottom, in the ‘height’ dimension.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter width.
* 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter height.
* 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 10: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
Inputs (implicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit padding scheme, has to be one of the
{@link PaddingCode} values.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter width.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter height.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 7: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
Outputs:
* 0: The output 4-D tensor, of shape
[batches, out_height, out_width, depth].
Available since API level 27.
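The formula above is the square root of the mean of squares over each pooling window. A minimal sketch for one NHWC batch entry with implicit padding of 0 (illustration only, not the NNAPI C API):

```python
import math

def l2_pool_2d(inp, filter_h, filter_w, stride):
    """L2 pooling for one NHWC batch entry, padding 0.
    out[i][j][c] = sqrt(mean of squared inputs in the window)."""
    H, W, C = len(inp), len(inp[0]), len(inp[0][0])
    oh = (H - filter_h) // stride + 1
    ow = (W - filter_w) // stride + 1
    out = [[[0.0] * C for _ in range(ow)] for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            for c in range(C):
                sq = [inp[stride*i + di][stride*j + dj][c] ** 2
                      for di in range(filter_h) for dj in range(filter_w)]
                out[i][j][c] = math.sqrt(sum(sq) / len(sq))
    return out
```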
### ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION
Applies Local Response Normalization along the depth dimension.
The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius.
The output is calculated using this formula:
```
sqr_sum[a, b, c, d] = sum(
pow(input[a, b, c, d - depth_radius : d + depth_radius + 1], 2))
output = input / pow((bias + alpha * sqr_sum), beta)
```
For input tensor with rank less than 4, independently normalizes each 1-D slice along specified dimension.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: up to 4. Tensors with rank less than 4 are only supported since API level 29.
Inputs:
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the radius of the normalization window.
* 2: A scalar, specifying the bias, must not be zero.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the bias value must be of {@link ANEURALNETWORKS_FLOAT16}.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the bias value must be of {@link ANEURALNETWORKS_FLOAT32}.
* 3: A scalar, specifying the scale factor, alpha.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the alpha value must be of {@link ANEURALNETWORKS_FLOAT16}.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the alpha value must be of {@link ANEURALNETWORKS_FLOAT32}.
* 4: A scalar, specifying the exponent, beta.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the beta value must be of {@link ANEURALNETWORKS_FLOAT16}.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the beta value must be of {@link ANEURALNETWORKS_FLOAT32}.
* 5: An optional {@link ANEURALNETWORKS_INT32} scalar, default to -1,
specifying the dimension normalization would be performed on.
Negative index is used to specify axis from the end (e.g. -1 for the last axis). Must be in the range [-n, n).
Available since API level 29.
Outputs:
* 0: The output tensor of same shape as input0.
Available since API level 27.
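Applied to a single 1-D depth vector, the formula above can be sketched as follows (illustration only; window positions past either end of the vector contribute nothing to the squared sum, which is equivalent to zero padding):

```python
def local_response_normalization(vec, radius, bias, alpha, beta):
    """Normalize one 1-D depth vector:
    out[d] = in[d] / (bias + alpha * sum(in[d-r : d+r+1] ** 2)) ** beta"""
    n = len(vec)
    out = []
    for d in range(n):
        lo, hi = max(0, d - radius), min(n, d + radius + 1)
        sqr_sum = sum(vec[k] ** 2 for k in range(lo, hi))
        out.append(vec[d] / (bias + alpha * sqr_sum) ** beta)
    return out
```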
### ANEURALNETWORKS_LOGISTIC
Computes sigmoid activation on the input tensor element-wise.
The output is calculated using this formula:
```
output = 1 / (1 + exp(-input))
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4.
Inputs:
* 0: A tensor, specifying the input.
Since API level 29, this tensor may be zero-sized.
Outputs:
* 0: The output tensor of same shape as input0.
For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM},
the scale must be 1.f / 256 and the zeroPoint must be 0.
For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the scale must be 1.f / 256 and the zeroPoint must be -128.
Available since API level 27.
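The formula above, applied element-wise (illustration only):

```python
import math

def logistic(xs):
    """Element-wise sigmoid: 1 / (1 + exp(-x))."""
    return [1.0 / (1.0 + math.exp(-x)) for x in xs]
```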
### ANEURALNETWORKS_LSH_PROJECTION
Projects an input to a bit vector via locality sensitive hashing.
Supported input tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
Supported input tensor rank: from 1
Inputs:
* 0: Hash functions. Dim.size == 2, DataType: Float.
Tensor[0].Dim[0]: Number of hash functions.
Tensor[0].Dim[1]: Number of projected output bits generated by each hash function.
If the projection type is Sparse:
Tensor[0].Dim[1] + ceil(log2(Tensor[0].Dim[0])) <= 32
* 1: Input. Dim.size >= 1, no restriction on DataType.
* 2: Weight. Optional. Dim.size == 1, DataType: Float.
If not set, each input element is considered to have the same weight of 1.0.
Tensor[1].Dim[0] == Tensor[2].Dim[0]
* 3: Type:
      Sparse:
        Value LSHProjectionType_SPARSE(=3) (since API level 29).
        Computed bit vector is considered to be sparse.
        Each output element is an int32 made up of multiple bits computed from hash functions.
        NOTE: To avoid collisions across hash functions, an offset value
        of k * (1 << Tensor[0].Dim[1]) will be added to each signature,
        where k is the index of the hash function.
        Value LSHProjectionType_SPARSE_DEPRECATED(=1).
        Legacy behavior that does not include the offset value.
      Dense:
        Value LSHProjectionType_DENSE(=2).
        Computed bit vector is considered to be dense. Each output
        element represents a bit and can take the value of either 0 or 1.
Outputs:
* 0: If the projection type is Sparse:
Output.Dim == { Tensor[0].Dim[0] }
A tensor of int32 that represents hash signatures.
If the projection type is Dense:
Output.Dim == { Tensor[0].Dim[0] * Tensor[0].Dim[1] }
A flattened tensor that represents projected bit vectors.
Available since API level 27.
The offset value for sparse projections was added in API level 29.
### ANEURALNETWORKS_LSTM
Performs a single time step in a Long Short-Term Memory (LSTM) layer
The LSTM operation is described by the following equations.
\f{eqnarray*}{
i_t =& \sigma(W_{xi}x_t+W_{hi}h_{t-1}+W_{ci}C_{t-1}+b_i) & \\
f_t =& \sigma(W_{xf}x_t+W_{hf}h_{t-1}+W_{cf}C_{t-1}+b_f) & \\
C_t =& clip(f_t \odot C_{t-1} + i_t \odot g(W_{xc}x_t+W_{hc}h_{t-1}+b_c),\ t_{cell}) & \\
o_t =& \sigma(W_{xo}x_t+W_{ho}h_{t-1}+W_{co}C_t+b_o) & \\
h_t =& clip(W_{proj}(o_t \odot g(C_t))+b_{proj},\ t_{proj}) & if\ there\ is\ a\ projection; \\
h_t =& o_t \odot g(C_t) & otherwise.
\f}
Where:
* \f$x_t\f$ is the input,
* \f$i_t\f$ is the input gate,
* \f$f_t\f$ is the forget gate,
* \f$C_t\f$ is the cell state,
* \f$o_t\f$ is the output,
* \f$h_t\f$ is the output state,
* \f$\sigma\f$ is the logistic sigmoid function,
* \f$g\f$ is the cell input and cell output activation function, usually
\f$tanh\f$,
* \f$W_{xi}\f$ is the input-to-input weight matrix,
* \f$W_{hi}\f$ is the recurrent to input weight matrix,
* \f$W_{ci}\f$ is the cell-to-input weight matrix,
* \f$b_i\f$ is the input gate bias,
* \f$W_{xf}\f$ is the input-to-forget weight matrix,
* \f$W_{hf}\f$ is the recurrent-to-forget weight matrix,
* \f$W_{cf}\f$ is the cell-to-forget weight matrix,
* \f$b_f\f$ is the forget gate bias,
* \f$W_{xc}\f$ is the input-to-cell weight matrix,
* \f$W_{hc}\f$ is the recurrent-to-cell weight matrix,
* \f$b_c\f$ is the cell bias,
* \f$W_{xo}\f$ is the input-to-output weight matrix,
* \f$W_{ho}\f$ is the recurrent-to-output weight matrix,
* \f$W_{co}\f$ is the cell-to-output weight matrix,
* \f$b_o\f$ is the output gate bias,
* \f$W_{proj}\f$ is the projection weight matrix,
* \f$b_{proj}\f$ is the projection bias,
* \f$t_{cell}\f$ is the threshold for clipping the cell state, and
* \f$t_{proj}\f$ is the threshold for clipping the projected output.
* \f$\odot\f$ is the Hadamard product that takes two matrices and produces another matrix, each element of which is the product of the corresponding elements of the input matrices.
Since API level 29 LSTM supports layer normalization.
In case layer normalization is used, the inputs to internal activation functions (sigmoid and \f$g\f$) are normalized, rescaled and recentered following an approach from section 3.1 from https://arxiv.org/pdf/1607.06450.pdf
The operation has the following independently optional inputs:
* The cell-to-input weights (\f$W_{ci}\f$), cell-to-forget weights
(\f$W_{cf}\f$) and cell-to-output weights (\f$W_{co}\f$) either all have values or neither of them have values (i.e., all set to null). If they have values, the peephole optimization is used.
* The input-to-input weights (\f$W_{xi}\f$), recurrent-to-input weights
(\f$W_{hi}\f$) and input gate bias (\f$b_i\f$) either all have values,
or none of them have values. If they have no values, coupling of input and forget gates (CIFG) is used, in which case the input gate
(\f$i_t\f$) is calculated using the following equation instead.
\f{eqnarray*}{
i_t = 1 - f_t
\f}
In case peephole optimization is used and CIFG is not used, the cell-to-input weights (\f$W_{ci}\f$) must be present. Otherwise, the cell-to-input weights must have no value.
* The projection weights (\f$W_{proj}\f$) are required only for the recurrent projection layer, and should otherwise have no value.
* The projection bias (\f$b_{proj}\f$) may (but is not required to) have a value if the recurrent projection layer exists, and should otherwise have no value.
* (API level 29 or later) The four layer normalization weights either all have values or none of them have values. Additionally, if CIFG is used,
input layer normalization weights tensor is omitted and the other layer normalization weights either all have values or none of them have values. Layer normalization is used when the values of all the layer normalization weights are present.
References:
The default non-peephole non-CIFG implementation is based on:
http://www.bioinf.jku.at/publications/older/2604.pdf <NAME> and <NAME>. “Long Short-Term Memory”. Neural Computation, 9(8):1735-1780, 1997.
The peephole implementation and projection layer is based on:
https://research.google.com/pubs/archive/43905.pdf <NAME>, <NAME>, and <NAME>. “Long short-term memory recurrent neural network architectures for large scale acoustic modeling.” INTERSPEECH, 2014.
(However, the concept of peephole optimization was introduced in work prior to this paper.)
The coupling of input and forget gate (CIFG) is based on:
http://arxiv.org/pdf/1503.04069.pdf Greff et al. “LSTM: A Search Space Odyssey”
The layer normalization is based on:
https://arxiv.org/pdf/1607.06450.pdf <NAME> et al. “Layer Normalization”
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
All input and output tensors must be of the same type.
Inputs:
* 0: The input (\f$x_t\f$).
A 2-D tensor of shape [batch_size, input_size], where “batch_size”
corresponds to the batching dimension, and “input_size” is the size of the input.
* 1: The input-to-input weights (\f$W_{xi}\f$). Optional.
A 2-D tensor of shape [num_units, input_size], where “num_units”
corresponds to the number of cell units.
* 2: The input-to-forget weights (\f$W_{xf}\f$).
A 2-D tensor of shape [num_units, input_size].
* 3: The input-to-cell weights (\f$W_{xc}\f$).
A 2-D tensor of shape [num_units, input_size].
* 4: The input-to-output weights (\f$W_{xo}\f$).
A 2-D tensor of shape [num_units, input_size].
* 5: The recurrent-to-input weights (\f$W_{hi}\f$). Optional.
A 2-D tensor of shape [num_units, output_size], where “output_size”
corresponds to either the number of cell units (i.e., “num_units”),
or the second dimension of the “projection_weights”, if defined.
* 6: The recurrent-to-forget weights (\f$W_{hf}\f$).
A 2-D tensor of shape [num_units, output_size].
* 7: The recurrent-to-cell weights (\f$W_{hc}\f$).
A 2-D tensor of shape [num_units, output_size].
* 8: The recurrent-to-output weights (\f$W_{ho}\f$).
A 2-D tensor of shape [num_units, output_size].
* 9: The cell-to-input weights (\f$W_{ci}\f$). Optional.
A 1-D tensor of shape [num_units].
* 10: The cell-to-forget weights (\f$W_{cf}\f$). Optional.
A 1-D tensor of shape [num_units].
* 11: The cell-to-output weights (\f$W_{co}\f$). Optional.
A 1-D tensor of shape [num_units].
* 12: The input gate bias (\f$b_i\f$). Optional.
A 1-D tensor of shape [num_units].
* 13: The forget gate bias (\f$b_f\f$).
A 1-D tensor of shape [num_units].
* 14: The cell bias (\f$b_c\f$).
A 1-D tensor of shape [num_units].
* 15: The output gate bias (\f$b_o\f$).
A 1-D tensor of shape [num_units].
* 16: The projection weights (\f$W_{proj}\f$). Optional.
A 2-D tensor of shape [output_size, num_units].
* 17: The projection bias (\f$b_{proj}\f$). Optional.
A 1-D tensor of shape [output_size].
* 18: The output state (in) (\f$h_{t-1}\f$).
A 2-D tensor of shape [batch_size, output_size].
* 19: The cell state (in) (\f$C_{t-1}\f$).
A 2-D tensor of shape [batch_size, num_units].
* 20: The activation function (\f$g\f$).
A value indicating the activation function:
+ 0: None;
+ 1: Relu;
+ 3: Relu6;
+ 4: Tanh;
+ 6: Sigmoid.
* 21: The clipping threshold (\f$t_{cell}\f$) for the cell state, such that values are bound within [-cell_clip, cell_clip]. If set to 0.0 then clipping is disabled.
Until API level 29 this scalar must be of type {@link ANEURALNETWORKS_FLOAT32}. Since API level 29, if all the input tensors have type {@link ANEURALNETWORKS_TENSOR_FLOAT32}, this scalar must be of the type {@link ANEURALNETWORKS_FLOAT32},
otherwise if all the input tensors have the type {@link ANEURALNETWORKS_TENSOR_FLOAT16}, this scalar must be of type {@link ANEURALNETWORKS_FLOAT16}.
* 22: The clipping threshold (\f$t_{proj}\f$) for the output from the projection layer, such that values are bound within
[-proj_clip, proj_clip]. If set to 0.0 then clipping is disabled.
Until API level 29 this scalar must be of type {@link ANEURALNETWORKS_FLOAT32}. Since API level 29, if all the input tensors have type {@link ANEURALNETWORKS_TENSOR_FLOAT32}, this scalar must be of the type {@link ANEURALNETWORKS_FLOAT32},
otherwise if all the input tensors have the type {@link ANEURALNETWORKS_TENSOR_FLOAT16}, this scalar must be of type {@link ANEURALNETWORKS_FLOAT16}.
Since API level 29 there are additional inputs to this op:
* 23: The input layer normalization weights.
A 1-D tensor of shape [num_units]. Used to rescale normalized inputs to activation at input gate.
* 24: The forget layer normalization weights.
A 1-D tensor of shape [num_units]. Used to rescale normalized inputs to activation at forget gate.
* 25: The cell layer normalization weights.
A 1-D tensor of shape [num_units]. Used to rescale normalized inputs to activation at cell gate.
* 26: The output layer normalization weights.
A 1-D tensor of shape [num_units]. Used to rescale normalized inputs to activation at output gate.
Outputs:
* 0: The scratch buffer.
A 2-D tensor of shape [batch_size, num_units * 3] with CIFG, or
[batch_size, num_units * 4] without CIFG.
* 1: The output state (out) (\f$h_t\f$).
A 2-D tensor of shape [batch_size, output_size].
* 2: The cell state (out) (\f$C_t\f$).
A 2-D tensor of shape [batch_size, num_units].
* 3: The output (\f$o_t\f$).
A 2-D tensor of shape [batch_size, output_size]. This is effectively the same as the current “output state (out)” value.
Available since API level 27.
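The gate equations above can be made concrete with a minimal single-example LSTM step in pure Python, assuming the default configuration: non-CIFG, no peephole, no projection, no clipping, and no layer normalization. This is a sketch of the math only; the dict-of-gates parameter layout is an assumption of the example, not the NNAPI operand order.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, x):
    """W: [rows][cols], x: [cols] -> [rows]."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in W]

def lstm_step(x, h_prev, c_prev, Wx, Wh, b, g=math.tanh):
    """One LSTM step for a single example (non-CIFG, no peephole,
    no projection, no clipping). Wx, Wh, b are dicts keyed by gate
    name 'i', 'f', 'c', 'o': Wx[k] is [num_units][input_size],
    Wh[k] is [num_units][num_units], b[k] is [num_units]."""
    def gate(k, act):
        z = [p + q + r for p, q, r in
             zip(matvec(Wx[k], x), matvec(Wh[k], h_prev), b[k])]
        return [act(v) for v in z]
    i = gate('i', sigmoid)          # input gate
    f = gate('f', sigmoid)          # forget gate
    g_in = gate('c', g)             # cell input activation
    o = gate('o', sigmoid)          # output gate
    c = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c_prev, i, g_in)]
    h = [ov * g(cv) for ov, cv in zip(o, c)]
    return h, c
```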
### ANEURALNETWORKS_MAX_POOL_2D
Performs a 2-D max pooling operation.
The output dimensions are functions of the filter dimensions, stride, and padding.
The values in the output tensor are computed as:
```
output[b, i, j, channel] =
max_{di, dj} (
input[b, strides[1] * i + di, strides[2] * j + dj, channel]
)
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
NCHW is supported since API level 29.
Both explicit padding and implicit padding are supported.
Inputs (explicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the left, in the ‘width’ dimension.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the right, in the ‘width’ dimension.
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the top, in the ‘height’ dimension.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the bottom, in the ‘height’ dimension.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter width.
* 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter height.
* 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 10: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
Inputs (implicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit padding scheme, has to be one of the
{@link PaddingCode} values.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter width.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter height.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 7: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
Outputs:
* 0: The output 4-D tensor, of shape
[batches, out_height, out_width, depth].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 27.
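As an illustration of the pooling formula above (a minimal pure-Python sketch, not the NNAPI C API), the following applies max pooling to an NHWC input held as nested lists, assuming VALID (zero) padding:

```python
def max_pool_2d(inp, filter_h, filter_w, stride_h, stride_w):
    # inp is a nested list of shape [batches, height, width, depth] (NHWC).
    batches = len(inp)
    height, width = len(inp[0]), len(inp[0][0])
    depth = len(inp[0][0][0])
    out_h = (height - filter_h) // stride_h + 1
    out_w = (width - filter_w) // stride_w + 1
    out = [[[[0.0] * depth for _ in range(out_w)]
            for _ in range(out_h)] for _ in range(batches)]
    for b in range(batches):
        for i in range(out_h):
            for j in range(out_w):
                for c in range(depth):
                    # output[b, i, j, c] = max over the filter window.
                    out[b][i][j][c] = max(
                        inp[b][stride_h * i + di][stride_w * j + dj][c]
                        for di in range(filter_h)
                        for dj in range(filter_w))
    return out
```

For example, a 1x4x4x1 input pooled with a 2x2 filter and stride 2 yields a 1x2x2x1 output holding the maximum of each 2x2 window.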
### ANEURALNETWORKS_MUL
Multiplies two tensors, element-wise.
Takes two input tensors of identical {@link OperandCode} and compatible dimensions. The output is the product of both input tensors, optionally modified by an activation function.
Two dimensions are compatible when:
1. they are equal, or
2. one of them is 1.
The size of the resulting output is the maximum size along each dimension of the input operands. It starts with the trailing dimensions, and works its way forward.
Since API level 29, generic zero-sized input tensor is supported. Zero dimension is only compatible with 0 or 1. The size of the output dimension is zero if either of corresponding input dimension is zero.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
* {@link ANEURALNETWORKS_TENSOR_INT32} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: A tensor.
* 1: A tensor of the same {@link OperandCode}, and compatible dimensions as input0.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
For a {@link ANEURALNETWORKS_TENSOR_INT32} tensor,
the {@link FuseCode} must be “NONE”.
Outputs:
* 0: The product, a tensor of the same {@link OperandCode} as input0.
For output tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the following condition must be satisfied:
output_scale > input1_scale * input2_scale.
Available since API level 27.
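The compatibility and output-size rules above can be sketched in pure Python (illustrative only, not part of the API); shapes are matched from the trailing dimension forward, and since API level 29 a zero dimension is only compatible with 0 or 1:

```python
def broadcast_shape(a, b):
    # Compute the broadcast output shape of two shapes, walking from the
    # trailing dimension forward; missing leading dimensions count as 1.
    out = []
    for i in range(1, max(len(a), len(b)) + 1):
        d1 = a[-i] if i <= len(a) else 1
        d2 = b[-i] if i <= len(b) else 1
        if d1 != d2 and d1 != 1 and d2 != 1:
            raise ValueError("incompatible dimensions: %d vs %d" % (d1, d2))
        # A zero dimension broadcasts to zero (API level 29 rule).
        out.append(0 if 0 in (d1, d2) else max(d1, d2))
    return list(reversed(out))
```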
### ANEURALNETWORKS_RELU
Computes rectified linear activation on the input tensor element-wise.
The output is calculated using this formula:
```
output = max(0, input)
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4.
Inputs:
* 0: A tensor, specifying the input.
Since API level 29, this tensor may be zero-sized.
Outputs:
* 0: The output tensor of same shape as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 27.
### ANEURALNETWORKS_RELU1
Computes rectified linear 1 activation on the input tensor element-wise.
The output is calculated using this formula:
```
output = min(1.f, max(-1.f, input))
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4.
Inputs:
* 0: A tensor, specifying the input.
Since API level 29, this tensor may be zero-sized.
Outputs:
* 0: The output tensor of the same shape as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 27.
### ANEURALNETWORKS_RELU6
Computes rectified linear 6 activation on the input tensor element-wise.
The output is calculated using this formula:
```
output = min(6, max(0, input))
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4.
Inputs:
* 0: A tensor, specifying the input.
Since API level 29, this tensor may be zero-sized.
Outputs:
* 0: The output tensor of same shape as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 27.
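The three ReLU variants above differ only in their clamping bounds; an illustrative element-wise sketch (not the NNAPI C API):

```python
def relu(x):
    # output = max(0, input)
    return max(0.0, x)

def relu1(x):
    # output = min(1, max(-1, input))
    return min(1.0, max(-1.0, x))

def relu6(x):
    # output = min(6, max(0, input))
    return min(6.0, max(0.0, x))
```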
### ANEURALNETWORKS_RESHAPE
Reshapes a tensor.
Given a tensor, this operation returns a tensor that has the same values as the input tensor, but with a newly specified shape.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4.
Inputs:
* 0: A tensor, specifying the tensor to be reshaped.
* 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, defining the shape of the output tensor. The number of elements implied by shape must be the same as the number of elements in the input tensor.
If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape can be -1.
Outputs:
* 0: The output tensor, of shape specified by the input shape.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 27.
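The rule for the special value -1 can be sketched as follows (illustrative pure Python, not the NNAPI C API): at most one -1 is allowed, and it is resolved so the total element count is preserved.

```python
def resolve_shape(input_shape, new_shape):
    # Resolve a single -1 in new_shape so element counts match input_shape.
    total = 1
    for d in input_shape:
        total *= d
    if new_shape.count(-1) > 1:
        raise ValueError("at most one component of shape can be -1")
    known = 1
    for d in new_shape:
        if d != -1:
            known *= d
    if -1 in new_shape:
        if total % known != 0:
            raise ValueError("cannot infer the -1 dimension")
        return [total // known if d == -1 else d for d in new_shape]
    if known != total:
        raise ValueError("element counts differ")
    return list(new_shape)
```

In particular, a requested shape of `[-1]` flattens the input into 1-D.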
### ANEURALNETWORKS_RESIZE_BILINEAR
Resizes images to the given size using bilinear interpolation.
Resized images will be distorted if the output aspect ratio is not the same as the input aspect ratio. The corner pixels of the output may not be the same as the corner pixels of the input.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
NCHW is supported since API level 29.
Both resizing by shape and resizing by scale are supported.
Inputs (resizing by shape):
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output width of the output tensor.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output height of the output tensor.
* 3: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
* 4: Align corners. An optional {@link ANEURALNETWORKS_BOOL}
scalar, default to false. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels.
Available since API level 30.
* 5: Half pixel centers. An optional {@link ANEURALNETWORKS_BOOL}
scalar, default to false. If True, the pixel centers are assumed to be at (0.5, 0.5). This is the default behavior of image.resize in TF 2.0. If this parameter is True, then align_corners parameter must be False.
Available since API level 30.
Inputs (resizing by scale, since API level 29):
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input. Zero batches is supported for this tensor.
* 1: A scalar, specifying width_scale, the scaling factor of the width dimension from the input tensor to the output tensor. The output width is calculated as new_width = floor(width * width_scale).
The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of
{@link ANEURALNETWORKS_FLOAT32} otherwise.
* 2: A scalar, specifying height_scale, the scaling factor of the height dimension from the input tensor to the output tensor. The output height is calculated as new_height = floor(height * height_scale).
The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of
{@link ANEURALNETWORKS_FLOAT32} otherwise.
* 3: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
* 4: Align corners. An optional {@link ANEURALNETWORKS_BOOL}
scalar, default to false. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels.
Available since API level 30.
* 5: Half pixel centers. An optional {@link ANEURALNETWORKS_BOOL}
scalar, default to false. If True, the pixel centers are assumed to be at (0.5, 0.5). This is the default behavior of image.resize in TF 2.0. If this parameter is True, then align_corners parameter must be False.
Available since API level 30.
Outputs:
* 0: The output 4-D tensor, of shape
[batches, new_height, new_width, depth].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 27.
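The interaction of the align_corners and half_pixel_centers options can be illustrated by how one output index maps to an input coordinate along a single axis (a sketch of the common convention; not the NNAPI C API):

```python
def source_coord(x_out, in_size, out_size, align_corners=False,
                 half_pixel_centers=False):
    # Map an output pixel index to a (possibly fractional) input coordinate.
    if half_pixel_centers:
        # Pixel centers assumed at (0.5, 0.5), the TF 2.0 image.resize default.
        return (x_out + 0.5) * in_size / out_size - 0.5
    if align_corners and out_size > 1:
        # Centers of the corner pixels of input and output coincide.
        return x_out * (in_size - 1) / (out_size - 1)
    return x_out * in_size / out_size
```

The interpolated value is then a weighted average of the input pixels adjacent to this coordinate.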
### ANEURALNETWORKS_RNN
A basic recurrent neural network layer.
This layer implements the operation:
```
outputs = state = activation(inputs * input_weights +
                             state * recurrent_weights + bias)
```
Where:
* “input_weights” is a weight matrix that multiplies the inputs;
* “recurrent_weights” is a weight matrix that multiplies the current
“state” which itself is the output from the previous time step computation;
* “bias” is a bias vector (added to each output vector in the batch);
* “activation” is the function passed as the “fused_activation_function”
argument (if not “NONE”).
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
The input tensors must all be the same type.
Inputs:
* 0: input.
A 2-D tensor of shape [batch_size, input_size], where “batch_size”
corresponds to the batching dimension, and “input_size” is the size of the input.
* 1: weights.
A 2-D tensor of shape [num_units, input_size], where “num_units”
corresponds to the number of units.
* 2: recurrent_weights.
A 2-D tensor of shape [num_units, num_units], with columns corresponding to the weights from each unit.
* 3: bias.
A 1-D tensor of shape [num_units].
* 4: hidden state (in).
A 2-D tensor of shape [batch_size, num_units].
* 5: fused_activation_function.
An optional {@link FuseCode} value indicating the activation function. If “NONE” is specified then it results in a linear activation.
Outputs:
* 0: hidden state (out).
A 2-D tensor of shape [batch_size, num_units].
* 1: output.
A 2-D tensor of shape [batch_size, num_units]. This is effectively the same as the current state value.
Available since API level 27.
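One step of the recurrence above can be sketched in pure Python (illustrative only; weights follow the [num_units, input_size] layout documented above):

```python
def rnn_step(inp, weights, recurrent_weights, bias, state, activation):
    # Compute activation(inputs * W + state * R + bias) for each unit.
    # weights: [num_units, input_size]; recurrent_weights: [num_units, num_units].
    num_units = len(weights)
    out = []
    for u in range(num_units):
        acc = bias[u]
        acc += sum(w * x for w, x in zip(weights[u], inp))
        acc += sum(r * s for r, s in zip(recurrent_weights[u], state))
        out.append(activation(acc))
    return out  # the new hidden state, which is also the output
```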
### ANEURALNETWORKS_SOFTMAX
Computes the softmax activation on the input tensor element-wise, per batch, by normalizing the input vector so the maximum coefficient is zero.
The output is calculated using this formula:
```
output[batch, i] =
exp((input[batch, i] - max(input[batch, :])) * beta) /
sum_{k}{exp((input[batch, k] - max(input[batch, :])) * beta)}
```
For input tensor with rank other than 2, the activation will be applied independently on each 1-D slice along specified dimension.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4.
Tensors with rank other than 2 or 4 are only supported since API level 29.
Inputs:
* 0: A 2-D or 4-D tensor, specifying the input.
Since API level 29, this tensor may be zero-sized.
* 1: A scalar, specifying the positive scaling factor for the exponent,
beta. If input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT32},
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, the scalar must be of {@link ANEURALNETWORKS_FLOAT32}.
If input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, then the scalar must be of {@link ANEURALNETWORKS_FLOAT16}.
* 2: An optional {@link ANEURALNETWORKS_INT32} scalar, default to -1,
specifying the dimension the activation would be performed on.
Negative index is used to specify axis from the end (e.g. -1 for the last axis). Must be in the range [-n, n).
Available since API level 29.
Outputs:
* 0: The output tensor of same shape as input0.
For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM},
the scale must be 1.f / 256 and the zeroPoint must be 0.
For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the scale must be 1.f / 256 and the zeroPoint must be -128.
Available since API level 27.
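The formula above, applied to one 1-D slice, can be sketched in pure Python (illustrative only); subtracting the slice maximum before exponentiating keeps `exp()` in range without changing the result:

```python
import math

def softmax(xs, beta=1.0):
    # Softmax over one 1-D slice with scaling factor beta.
    m = max(xs)  # normalize so the maximum coefficient is zero
    exps = [math.exp((x - m) * beta) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```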
### ANEURALNETWORKS_SPACE_TO_DEPTH
Rearranges blocks of spatial data into depth.
More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the depth dimension. The value block_size indicates the input block size and how the data is moved.
Non-overlapping blocks of size block_size x block_size in the height and width dimensions are rearranged into chunks of size block_size * block_size in the depth dimension at each location.
The depth of the output tensor is input_depth * block_size * block_size.
The input tensor’s height and width must be divisible by block_size.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
NCHW is supported since API level 29.
Inputs:
* 0: A 4-D tensor, of shape [batches, height, width, depth_in],
specifying the input.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the block_size.
block_size must be >=1 and block_size must be a divisor of both the input height and width.
* 2: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
Outputs:
* 0: The output 4-D tensor, of shape [batches, height/block_size,
width/block_size, depth_in*block_size*block_size].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 27.
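A minimal pure-Python sketch of the rearrangement on an NHWC tensor held as nested lists (illustrative only; the within-block channel ordering `(di, dj, c)` is an assumption matching the common TensorFlow layout):

```python
def space_to_depth(inp, block_size):
    # Move block_size x block_size spatial blocks into the depth dimension.
    batches, h, w = len(inp), len(inp[0]), len(inp[0][0])
    depth = len(inp[0][0][0])
    assert h % block_size == 0 and w % block_size == 0
    out = [[[[None] * (depth * block_size * block_size)
             for _ in range(w // block_size)]
            for _ in range(h // block_size)] for _ in range(batches)]
    for b in range(batches):
        for i in range(h):
            for j in range(w):
                for c in range(depth):
                    di, dj = i % block_size, j % block_size
                    oc = (di * block_size + dj) * depth + c
                    out[b][i // block_size][j // block_size][oc] = inp[b][i][j][c]
    return out
```

For example, a 1x2x2x1 input with block_size 2 becomes a 1x1x1x4 output.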
### ANEURALNETWORKS_SVDF
SVDF op is a kind of stateful layer derived from the notion that a densely connected layer that’s processing a sequence of input frames can be approximated by using a singular value decomposition of each of its nodes. The implementation is based on:
https://research.google.com/pubs/archive/43813.pdf
<NAME>, <NAME>, <NAME>, <NAME>.
“Compressing Deep Neural Networks using a Rank-Constrained Topology”.
INTERSPEECH, 2015.
It processes the incoming input using a 2-stage filtering mechanism:
* stage 1 performs filtering on the “features” dimension, whose outputs get pushed into a memory of fixed-size memory_size.
* stage 2 performs filtering on the “time” dimension of the memory_size memoized outputs of stage 1.
Specifically, for rank 1, this layer implements the operation:
```
memory = push(conv1d(inputs, weights_feature, feature_dim,
"ANEURALNETWORKS_PADDING_VALID"));
outputs = activation(memory * weights_time + bias);
```
Where:
* “weights_feature” is a weights matrix that processes the inputs (by convolving the input with every “feature filter”), and whose outputs get pushed, stacked in order, into the fixed-size “memory” (the oldest entry gets dropped);
* “weights_time” is a weights matrix that processes the “memory” (by a batched matrix multiplication on the num_units);
* “bias” is an optional bias vector (added to each output vector in the batch); and
* “activation” is the function passed as the “fused_activation_function”
argument (if not “NONE”).
Each rank adds a dimension to the weights matrices by means of stacking the filters.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
All input tensors must be the same type.
Inputs:
* 0: input.
A 2-D tensor of shape [batch_size, input_size], where “batch_size”
corresponds to the batching dimension, and “input_size” is the size of the input.
* 1: weights_feature.
A 2-D tensor of shape [num_units, input_size], where “num_units”
corresponds to the number of units.
* 2: weights_time.
A 2-D tensor of shape [num_units, memory_size], where “memory_size”
corresponds to the fixed-size of the memory.
* 3: bias.
An optional 1-D tensor of shape [num_units].
* 4: state (in).
A 2-D tensor of shape [batch_size, (memory_size - 1) * num_units * rank].
* 5: rank.
The rank of the SVD approximation.
* 6: fused_activation_function.
An optional {@link FuseCode} value indicating the activation function. If “NONE” is specified then it results in a linear activation.
Outputs:
* 0: state (out).
A 2-D tensor of the same {@link OperandCode} as the inputs, with shape
[batch_size, (memory_size - 1) * num_units * rank].
* 1: output.
A 2-D tensor of the same {@link OperandCode} as the inputs, with shape
[batch_size, num_units].
Available since API level 27.
### ANEURALNETWORKS_TANH
Computes hyperbolic tangent of input tensor element-wise.
The output is calculated using this formula:
```
output = tanh(input)
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4.
Inputs:
* 0: A tensor, specifying the input.
Since API level 29, this tensor may be zero-sized.
Outputs:
* 0: The output tensor of same shape as input0.
For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM},
the scale must be 1.f / 128 and the zeroPoint must be 128.
For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the scale must be 1.f / 128 and the zeroPoint must be 0.
Available since API level 27.
### ANEURALNETWORKS_BATCH_TO_SPACE_ND
BatchToSpace for N-dimensional tensors.
This operation reshapes the batch dimension (dimension 0) into M + 1 dimensions of shape block_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, …, M], to obtain a result with the same rank as the input.
This is the reverse of SpaceToBatch.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
NCHW is supported since API level 29.
Inputs:
* 0: An n-D tensor, specifying the tensor to be reshaped
* 1: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the block sizes for each spatial dimension of the input tensor. All values must be >= 1.
* 2: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 28.
### ANEURALNETWORKS_DIV
Element-wise division of two tensors.
Takes two input tensors of identical {@link OperandCode} and compatible dimensions. The output is the result of dividing the first input tensor by the second, optionally modified by an activation function.
For inputs of {@link ANEURALNETWORKS_TENSOR_INT32}, performs
“floor division” (“//” in Python). For example:
```
5 // 2 = 2
-5 // 2 = -3
```
Two dimensions are compatible when:
1. they are equal, or
2. one of them is 1.
The size of the output is the maximum size along each dimension of the input operands. It starts with the trailing dimensions, and works its way forward.
Example:
```
input1.dimension = {4, 1, 2}
input2.dimension = {5, 4, 3, 1}
output.dimension = {5, 4, 3, 2}
```
Since API level 29, generic zero-sized input tensor is supported. Zero dimension is only compatible with 0 or 1. The size of the output dimension is zero if either of corresponding input dimension is zero.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor, specifying the first input.
* 1: A tensor of the same {@link OperandCode}, and compatible dimensions as input0.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
For a {@link ANEURALNETWORKS_TENSOR_INT32} tensor,
the {@link FuseCode} must be “NONE”.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
Available since API level 28.
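Floor division rounds the quotient toward negative infinity, which is exactly Python's `//` operator; an illustrative sketch:

```python
import math

def floor_div(a, b):
    # Quotient rounded toward negative infinity (Python's // for ints).
    return math.floor(a / b)
```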
### ANEURALNETWORKS_MEAN
Computes the mean of elements across dimensions of a tensor.
Reduces the input tensor along the given dimensions to reduce. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: A tensor, specifying the input.
* 1: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions to reduce. Must be in the range
[-rank(input_tensor), rank(input_tensor)).
NOTE: When the operation was introduced, the documentation incorrectly stated that if dimensions were empty, the operation would reduce across all dimensions. This behavior was never implemented.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, keep_dims. If positive,
retains reduced dimensions with length 1.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
If all dimensions are reduced and keep_dims is false, the output shape is [1].
Available since API level 28.
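The shape rules above (negative axes count from the end; keep_dims retains reduced dimensions with length 1; a fully reduced shape with keep_dims false collapses to [1]) can be sketched as follows (illustrative only):

```python
def mean_output_shape(input_shape, axes, keep_dims):
    # Compute the output shape of a reduction over the given axes.
    rank = len(input_shape)
    norm = {a % rank for a in axes}  # normalize negative axes
    out = []
    for i, d in enumerate(input_shape):
        if i in norm:
            if keep_dims:
                out.append(1)  # reduced dimensions retained with length 1
        else:
            out.append(d)
    return out if out else [1]
```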
### ANEURALNETWORKS_PAD
Pads a tensor.
This operation pads a tensor according to the specified paddings.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
(full support since API level 29, see the output section)
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor, specifying the tensor to be padded.
* 1: A 2-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the paddings for each spatial dimension of the input tensor. The shape of the tensor must be {rank(input0), 2}.
padding[i, 0] specifies the number of elements to be padded in the front of dimension i.
padding[i, 1] specifies the number of elements to be padded after the end of dimension i.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0. The output tensor has the same rank as input0, and each dimension of the output tensor has the same size as the corresponding dimension of the input tensor plus the size of the padding:
output0.dimension[i] =
padding[i, 0] + input0.dimension[i] + padding[i, 1]
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
NOTE: Before API level 29, the pad value for
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} is undefined.
Since API level 29, the pad value is always the logical zero.
Available since API level 28.
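The output-shape rule above (output0.dimension[i] = padding[i, 0] + input0.dimension[i] + padding[i, 1]) is a one-liner; an illustrative sketch:

```python
def pad_output_shape(input_shape, paddings):
    # Each output dimension grows by the front and back padding amounts.
    return [before + d + after
            for d, (before, after) in zip(input_shape, paddings)]
```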
### ANEURALNETWORKS_SPACE_TO_BATCH_ND
SpaceToBatch for N-Dimensional tensors.
This operation divides “spatial” dimensions [1, …, M] of the input into a grid of blocks of shape block_shape, and interleaves these blocks with the “batch” dimension (0) such that in the output, the spatial dimensions
[1, …, M] correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to paddings.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
(full support since API level 29, see the output section)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
NCHW is supported since API level 29.
Inputs:
* 0: An n-D tensor, specifying the input.
* 1: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the block sizes for each spatial dimension of the input tensor. All values must be >= 1.
* 2: A 2-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the paddings for each spatial dimension of the input tensor. All values must be
>= 0. The shape of the tensor must be {M, 2}, where M is the number of spatial dimensions.
padding[i, 0] specifies the number of elements to be padded in the front of dimension i.
padding[i, 1] specifies the number of elements to be padded after the end of dimension i.
* 3: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
Available since API level 29.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
NOTE: Before API level 29, the pad value for
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} is undefined.
Since API level 29, the pad value is always the logical zero.
Available since API level 28.
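The output shape follows directly from the description above: each padded spatial dimension is divided by its block size, and the batch dimension is multiplied by the product of the block sizes. An illustrative sketch for a 4-D NHWC input:

```python
def space_to_batch_shape(input_shape, block, paddings):
    # Output shape of SPACE_TO_BATCH_ND for an NHWC input.
    batch, h, w, depth = input_shape
    ph = h + paddings[0][0] + paddings[0][1]  # padded height
    pw = w + paddings[1][0] + paddings[1][1]  # padded width
    assert ph % block[0] == 0 and pw % block[1] == 0
    return [batch * block[0] * block[1], ph // block[0], pw // block[1], depth]
```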
### ANEURALNETWORKS_SQUEEZE
Removes dimensions of size 1 from the shape of a tensor.
Given a tensor input, this operation returns a tensor of the same
{@link OperandCode} with all dimensions of size 1 removed. If you don’t want to remove all size 1 dimensions, you can remove specific size 1 dimensions by specifying the axes (input1).
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor, the tensor to be squeezed.
* 1: An optional 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions to squeeze. If specified only squeezes the dimensions listed. Otherwise, squeezes all dimensions. The dimension index starts at 0. An error must be reported if squeezing a dimension that is not 1.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0. Contains the same data as input, but has one or more dimensions of size 1 removed.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
If all input dimensions are equal to 1 and are to be squeezed, the output shape is [1].
Available since API level 28.
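The shape rule above can be sketched in pure Python (illustrative only): drop the listed size-1 axes, or every size-1 axis when none are given, and report an error for a squeezed dimension that is not 1.

```python
def squeeze_shape(input_shape, axes=None):
    # Compute the output shape of SQUEEZE.
    rank = len(input_shape)
    if axes is None:
        out = [d for d in input_shape if d != 1]
    else:
        norm = {a % rank for a in axes}  # normalize negative axes
        for a in norm:
            if input_shape[a] != 1:
                raise ValueError("cannot squeeze a dimension that is not 1")
        out = [d for i, d in enumerate(input_shape) if i not in norm]
    return out or [1]  # fully squeezed shape collapses to [1]
```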
### ANEURALNETWORKS_STRIDED_SLICE
Extracts a strided slice of a tensor.
Roughly speaking, this op extracts a slice of size (end - begin) / stride from the given input tensor. Starting at the location specified by begin the slice continues by adding stride to the index until all dimensions are not less than end. Note that a stride can be negative, which causes a reverse slice.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor, specifying the tensor to be sliced.
* 1: begin, a 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The starts of the dimensions of the input tensor to be sliced. The length must be of rank(input0).
* 2: end, a 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The ends of the dimensions of the input tensor to be sliced. The length must be of rank(input0).
* 3: strides, a 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The strides of the dimensions of the input tensor to be sliced. The length must be of rank(input0). The entries must be non-zero.
* 4: begin_mask, an {@link ANEURALNETWORKS_INT32} scalar. If the ith bit of begin_mask is set, begin[i] is ignored and the fullest possible range in that dimension is used instead.
* 5: end_mask, an {@link ANEURALNETWORKS_INT32} scalar. If the ith bit of end_mask is set, end[i] is ignored and the fullest possible range in that dimension is used instead.
* 6: shrink_axis_mask, an {@link ANEURALNETWORKS_INT32} scalar. If the ith bit of shrink_axis_mask is set, the ith dimension specification shrinks the dimensionality by 1, taking on the value at index begin[i]. In this case, the ith specification must define a slice of size 1, e.g. begin[i] = x, end[i] = x + 1.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0 and rank (n - k),
where k is the number of bits set in shrink_axis_mask.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
If shrink_axis_mask is true for all input dimensions, the output shape is [1].
Available since API level 28.
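The begin/end/stride and mask semantics map closely onto Python's extended slicing; a one-dimensional sketch (a hypothetical helper, not an NNAPI call — the real op applies this per dimension of an n-D tensor and also supports shrink_axis_mask):

```python
def strided_slice_1d(data, begin, end, stride, begin_mask=False, end_mask=False):
    # One dimension of ANEURALNETWORKS_STRIDED_SLICE. When a mask bit is set,
    # the corresponding bound is ignored and the fullest possible range is
    # used, which is exactly what an omitted bound means in a Python slice.
    assert stride != 0, "strides must be non-zero"
    b = None if begin_mask else begin
    e = None if end_mask else end
    return data[b:e:stride]
```

For example, `strided_slice_1d(list(range(6)), 1, 5, 2)` yields `[1, 3]`; a negative stride with both masks set reverses the whole dimension.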
### ANEURALNETWORKS_SUB
Element-wise subtraction of two tensors.
Takes two input tensors of identical {@link OperandCode} and compatible dimensions. The output is the result of subtracting the second input tensor from the first one, optionally modified by an activation function.
Two dimensions are compatible when:
1. they are equal, or
2. one of them is 1.
The size of the output is the maximum size along each dimension of the input operands. It starts with the trailing dimensions, and works its way forward.
Example:
input1.dimension = {4, 1, 2}
input2.dimension = {5, 4, 3, 1}
output.dimension = {5, 4, 3, 2}
Since API level 29, generic zero-sized input tensor is supported. Zero dimension is only compatible with 0 or 1. The size of the output dimension is zero if either of corresponding input dimension is zero.
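The broadcast shape computation described above can be sketched as follows (a hypothetical helper for illustration; the zero-sized-dimension case added in API level 29 is omitted for brevity):

```python
def broadcast_shape(a, b):
    # Output shape of element-wise ops such as ANEURALNETWORKS_SUB: align the
    # trailing dimensions, then each aligned pair must be equal or contain a 1,
    # and the larger of the two sizes is taken.
    out = []
    for x, y in zip(reversed(a), reversed(b)):
        if x == y or y == 1:
            out.append(x)
        elif x == 1:
            out.append(y)
        else:
            raise ValueError("incompatible dimensions %d and %d" % (x, y))
    longer = a if len(a) > len(b) else b
    out.extend(reversed(longer[:abs(len(a) - len(b))]))  # leading dims carry over
    return list(reversed(out))
```

This reproduces the worked example above: `broadcast_shape([4, 1, 2], [5, 4, 3, 1])` gives `[5, 4, 3, 2]`.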
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
* {@link ANEURALNETWORKS_TENSOR_INT32} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor, specifying the first input.
* 1: A tensor of the same {@link OperandCode}, and compatible dimensions as input0.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
For a {@link ANEURALNETWORKS_TENSOR_INT32} tensor,
the {@link FuseCode} must be “NONE”.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint can be different from inputs’ scale and zeroPoint.
Available since API level 28.
### ANEURALNETWORKS_TRANSPOSE
Transposes the input tensor, permuting the dimensions according to the perm tensor.
The returned tensor’s dimension i corresponds to the input dimension perm[i]. If perm is not given, it is set to (n-1…0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since API level 29)
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor, specifying the tensor to be transposed.
Since API level 29, this tensor may be zero-sized.
* 1: An optional 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32},
the permutation of the dimensions of the input tensor.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 28.
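The perm semantics can be sketched in Python (hypothetical helpers, not NNAPI calls): output dimension i takes its size from input dimension perm[i], and the default perm fully reverses the dimension order.

```python
def transpose_shape(shape, perm=None):
    # Output dimension i takes its size from input dimension perm[i];
    # the default perm (n-1 ... 0) reverses the dimension order.
    if perm is None:
        perm = list(range(len(shape) - 1, -1, -1))
    return [shape[p] for p in perm]

def transpose_2d(matrix):
    # With the default perm, a 2-D transpose is the familiar matrix transpose.
    return [list(row) for row in zip(*matrix)]
```

For example, `transpose_shape([2, 3, 4])` gives `[4, 3, 2]`, and `perm=[0, 2, 1]` swaps only the last two dimensions.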
### ANEURALNETWORKS_ABS
Computes the absolute value of a tensor, element-wise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32} (since API level 30)
Supported tensor rank: from 1.
Inputs:
* 0: A tensor.
Outputs:
* 0: The output tensor of same shape as input0.
Available since API level 29.
### ANEURALNETWORKS_ARGMAX
Returns the index of the largest element along an axis.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: An n-D tensor specifying the input. Must be non-empty.
* 1: An {@link ANEURALNETWORKS_INT32} scalar specifying the axis to reduce across. Negative index is used to specify axis from the end (e.g. -1 for the last axis). Must be in the range [-n, n).
Outputs:
* 0: An (n - 1)-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor.
If input is 1-dimensional, the output shape is [1].
Available since API level 29.
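A minimal 2-D sketch of the axis semantics (a hypothetical helper; the real op works on n-D tensors and removes the reduced axis, so an n-D input yields an (n-1)-D output):

```python
def argmax_2d(rows, axis):
    # ARGMAX over a 2-D input given as a list of rows; a negative axis counts
    # from the end. On ties this returns the smallest index (Python's max
    # keeps the first maximal element it sees).
    axis %= 2
    if axis == 1:  # reduce across columns: one index per row
        return [max(range(len(r)), key=r.__getitem__) for r in rows]
    # reduce across rows: one index per column
    return [max(range(len(rows)), key=lambda i: rows[i][c])
            for c in range(len(rows[0]))]
```

For example, `axis=-1` reduces the last axis, returning one index per row.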
### ANEURALNETWORKS_ARGMIN
Returns the index of the smallest element along an axis.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: An n-D tensor specifying the input. Must be non-empty.
* 1: An {@link ANEURALNETWORKS_INT32} scalar specifying the axis to reduce across. Negative index is used to specify axis from the end (e.g. -1 for the last axis). Must be in the range [-n, n).
Outputs:
* 0: An (n - 1)-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor.
If input is 1-dimensional, the output shape is [1].
Available since API level 29.
### ANEURALNETWORKS_AXIS_ALIGNED_BBOX_TRANSFORM
Transform axis-aligned bounding box proposals using bounding box deltas.
Given the positions of bounding box proposals and the corresponding bounding box deltas for each class, return the refined bounding box regions. The resulting bounding boxes are clipped against the edges of the image.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}
Inputs:
* 0: A 2-D Tensor of shape [num_rois, 4], specifying the locations of the bounding box proposals, each line with format [x1, y1, x2, y2].
For tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM},
the zeroPoint must be 0 and the scale must be 0.125. Zero num_rois is supported for this tensor.
* 1: A 2-D Tensor of shape [num_rois, num_classes * 4], specifying the bounding box delta for each region of interest and each class. The bounding box deltas are organized in the following order
[dx, dy, dw, dh], where dx and dy is the relative correction factor for the center position of the bounding box with respect to the width and height, dw and dh is the log-scale relative correction factor for the width and height. For input0 of type
{@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, this tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}. Zero num_rois is supported for this tensor.
* 2: An 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape
[num_rois], specifying the batch index of each box. Boxes with the same batch index are grouped together. Zero num_rois is supported for this tensor.
* 3: A 2-D Tensor of shape [batches, 2], specifying the information of each image in the batch, each line with format
[image_height, image_width].
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0, with shape
[num_rois, num_classes * 4], specifying the coordinates of each output bounding box for each class, with format [x1, y1, x2, y2].
For type of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, the scale must be 0.125 and the zero point must be 0.
Available since API level 29.
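The delta application described above (center shift relative to box size, log-scale width/height rescaling) can be sketched for a single box; this is an illustrative helper, and the clipping range [0, image_width] × [0, image_height] is an assumption of the sketch:

```python
import math

def apply_bbox_delta(box, delta, image_height, image_width):
    # box = [x1, y1, x2, y2]; delta = [dx, dy, dw, dh] as described above.
    x1, y1, x2, y2 = box
    dx, dy, dw, dh = delta
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + 0.5 * w, y1 + 0.5 * h
    cx += dx * w                # dx, dy shift the center relative to w, h
    cy += dy * h
    w *= math.exp(dw)           # dw, dh rescale the size on a log scale
    h *= math.exp(dh)
    clip = lambda v, hi: min(max(v, 0.0), float(hi))
    return [clip(cx - 0.5 * w, image_width), clip(cy - 0.5 * h, image_height),
            clip(cx + 0.5 * w, image_width), clip(cy + 0.5 * h, image_height)]
```

A zero delta returns the input box unchanged (apart from clipping), which is a useful sanity check.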
### ANEURALNETWORKS_BIDIRECTIONAL_SEQUENCE_LSTM
A recurrent neural network layer that applies an LSTM cell to a sequence of inputs in forward and backward directions.
The op supports cross-linking via an auxiliary input. Regular cell feeds one input into the two RNN cells in the following way:
```
       INPUT  (INPUT_REVERSED)
         |         |
    ---------------------
    | FW_LSTM   BW_LSTM |
    ---------------------
         |         |
      FW_OUT     BW_OUT
```
An op with cross-linking takes two inputs and feeds them into the RNN cells in the following way:
```
      AUX_INPUT   (AUX_INPUT_REVERSED)
          |             |
    INPUT | (INPUT_R'D.)|
      |   |       |     |
    -----------------------
    |  \  /        \    / |
    | FW_LSTM     BW_LSTM |
    -----------------------
         |           |
      FW_OUT      BW_OUT
```
The cross-linking mode is enabled iff auxiliary input and auxiliary weights are present. While stacking this op on top of itself, this allows connecting both forward and backward outputs from the previous cell to the next cell’s input.
Since API level 30, parallel linking mode is supported. The mode is enabled if the auxiliary input is present but the auxiliary weights are omitted.
In this case, the cell feeds inputs into the RNN in the following way:
```
       INPUT (AUX_INPUT_REVERSED)
         |         |
    ---------------------
    | FW_LSTM   BW_LSTM |
    ---------------------
         |         |
      FW_OUT     BW_OUT
```
While stacking this op on top of itself, this allows connecting both forward and backward outputs from the previous cell to the next cell’s corresponding inputs.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: 3, either time-major or batch-major.
All input and output tensors must be of the same type.
Inputs:
* 0: The input.
A 3-D tensor of shape:
If time-major: [max_time, batch_size, input_size]
If batch-major: [batch_size, max_time, input_size]
where “max_time” is the number of timesteps (sequence length),
“batch_size” corresponds to the batching dimension, and
“input_size” is the size of the input.
* 1: The forward input-to-input weights. Optional.
A 2-D tensor of shape [fw_num_units, input_size], where “fw_num_units”
corresponds to the number of forward cell units.
* 2: The forward input-to-forget weights.
A 2-D tensor of shape [fw_num_units, input_size].
* 3: The forward input-to-cell weights.
A 2-D tensor of shape [fw_num_units, input_size].
* 4: The forward input-to-output weights.
A 2-D tensor of shape [fw_num_units, input_size].
* 5: The forward recurrent-to-input weights. Optional.
A 2-D tensor of shape [fw_num_units, fw_output_size], where “fw_output_size”
corresponds to either the number of cell units (i.e., fw_num_units),
or the second dimension of the “fw_projection_weights”, if defined.
* 6: The forward recurrent-to-forget weights.
A 2-D tensor of shape [fw_num_units, fw_output_size].
* 7: The forward recurrent-to-cell weights.
A 2-D tensor of shape [fw_num_units, fw_output_size].
* 8: The forward recurrent-to-output weights.
A 2-D tensor of shape [fw_num_units, fw_output_size].
* 9: The forward cell-to-input weights. Optional.
A 1-D tensor of shape [fw_num_units].
* 10: The forward cell-to-forget weights. Optional.
A 1-D tensor of shape [fw_num_units].
* 11: The forward cell-to-output weights. Optional.
A 1-D tensor of shape [fw_num_units].
* 12: The forward input gate bias. Optional.
A 1-D tensor of shape [fw_num_units].
* 13: The forward forget gate bias.
A 1-D tensor of shape [fw_num_units].
* 14: The forward cell gate bias.
A 1-D tensor of shape [fw_num_units].
* 15: The forward output gate bias.
A 1-D tensor of shape [fw_num_units].
* 16: The forward projection weights. Optional.
A 2-D tensor of shape [fw_output_size, fw_num_units].
* 17: The forward projection bias. Optional.
A 1-D tensor of shape [fw_output_size].
* 18: The backward input-to-input weights. Optional.
A 2-D tensor of shape [bw_num_units, input_size], where “bw_num_units”
corresponds to the number of backward cell units.
* 19: The backward input-to-forget weights.
A 2-D tensor of shape [bw_num_units, input_size].
* 20: The backward input-to-cell weights.
A 2-D tensor of shape [bw_num_units, input_size].
* 21: The backward input-to-output weights.
A 2-D tensor of shape [bw_num_units, input_size].
* 22: The backward recurrent-to-input weights. Optional.
A 2-D tensor of shape [bw_num_units, bw_output_size], where “bw_output_size”
corresponds to either the number of cell units (i.e., “bw_num_units”),
or the second dimension of the “bw_projection_weights”, if defined.
* 23: The backward recurrent-to-forget weights.
A 2-D tensor of shape [bw_num_units, bw_output_size].
* 24: The backward recurrent-to-cell weights.
A 2-D tensor of shape [bw_num_units, bw_output_size].
* 25: The backward recurrent-to-output weights.
A 2-D tensor of shape [bw_num_units, bw_output_size].
* 26: The backward cell-to-input weights. Optional.
A 1-D tensor of shape [bw_num_units].
* 27: The backward cell-to-forget weights. Optional.
A 1-D tensor of shape [bw_num_units].
* 28: The backward cell-to-output weights. Optional.
A 1-D tensor of shape [bw_num_units].
* 29: The backward input gate bias. Optional.
A 1-D tensor of shape [bw_num_units].
* 30: The backward forget gate bias.
A 1-D tensor of shape [bw_num_units].
* 31: The backward cell gate bias.
A 1-D tensor of shape [bw_num_units].
* 32: The backward output gate bias.
A 1-D tensor of shape [bw_num_units].
* 33: The backward projection weights. Optional.
A 2-D tensor of shape [bw_output_size, bw_num_units].
* 34: The backward projection bias. Optional.
A 1-D tensor of shape [bw_output_size].
* 35: The forward input activation state.
A 2-D tensor of shape [batch_size, bw_output_size].
* 36: The forward input cell state.
A 2-D tensor of shape [batch_size, bw_num_units].
* 37: The backward input activation state.
A 2-D tensor of shape [batch_size, bw_output_size].
* 38: The backward input cell state.
A 2-D tensor of shape [batch_size, bw_num_units].
* 39: The auxiliary input. Optional.
A 3-D tensor of shape [max_time, batch_size, aux_input_size],
where “batch_size” corresponds to the batching dimension, and
“aux_input_size” is the size of the auxiliary input. Optional. See the docs above for the usage modes explanation.
* 40: The forward auxiliary input-to-input weights.
Optional. See the docs above for the usage modes explanation.
A 2-D tensor of shape [fw_num_units, aux_input_size].
* 41: The forward auxiliary input-to-forget weights.
Optional. See the docs above for the usage modes explanation.
A 2-D tensor of shape [fw_num_units, aux_input_size].
* 42: The forward auxiliary input-to-cell weights.
Optional. See the docs above for the usage modes explanation.
A 2-D tensor of shape [fw_num_units, aux_input_size].
* 43: The forward auxiliary input-to-output weights.
Optional. See the docs above for the usage modes explanation.
A 2-D tensor of shape [fw_num_units, aux_input_size].
* 44: The backward auxiliary input-to-input weights.
Optional. See the docs above for the usage modes explanation.
A 2-D tensor of shape [bw_num_units, aux_input_size].
* 45: The backward auxiliary input-to-forget weights.
Optional. See the docs above for the usage modes explanation.
A 2-D tensor of shape [bw_num_units, aux_input_size].
* 46: The backward auxiliary input-to-cell weights.
Optional. See the docs above for the usage modes explanation.
A 2-D tensor of shape [bw_num_units, aux_input_size].
* 47: The backward auxiliary input-to-output weights.
Optional. See the docs above for the usage modes explanation.
A 2-D tensor of shape [bw_num_units, aux_input_size].
* 48: The activation function.
A value indicating the activation function:
+ 0: None;
+ 1: Relu;
+ 3: Relu6;
+ 4: Tanh;
+ 6: Sigmoid.
* 49: The clipping threshold for the cell state, such that values are bound within [-cell_clip, cell_clip]. If set to 0.0 then clipping is disabled.
If all the input tensors have type {@link ANEURALNETWORKS_TENSOR_FLOAT32},
this scalar must be of the type {@link ANEURALNETWORKS_FLOAT32},
otherwise if all the input tensors have the type
{@link ANEURALNETWORKS_TENSOR_FLOAT16}, this scalar must be of type {@link ANEURALNETWORKS_FLOAT16}.
* 50: The clipping threshold for the output from the projection layer, such that values are bound within
[-proj_clip, proj_clip]. If set to 0.0 then clipping is disabled.
If all the input tensors have type {@link ANEURALNETWORKS_TENSOR_FLOAT32},
this scalar must be of the type {@link ANEURALNETWORKS_FLOAT32},
otherwise if all the input tensors have the type
{@link ANEURALNETWORKS_TENSOR_FLOAT16}, this scalar must be of type {@link ANEURALNETWORKS_FLOAT16}.
* 51: merge_outputs An {@link ANEURALNETWORKS_BOOL} scalar specifying if the outputs from forward and backward cells should be merged.
* 52: time_major An {@link ANEURALNETWORKS_BOOL} scalar specifying the shape format of input and output tensors.
* 53: The forward input layer normalization weights. Optional.
A 1-D tensor of shape [fw_num_units]. Used to rescale normalized inputs to activation at input gate.
* 54: The forward forget layer normalization weights. Optional.
A 1-D tensor of shape [fw_num_units]. Used to rescale normalized inputs to activation at forget gate.
* 55: The forward cell layer normalization weights. Optional.
A 1-D tensor of shape [fw_num_units]. Used to rescale normalized inputs to activation at cell gate.
* 56: The forward output layer normalization weights. Optional.
A 1-D tensor of shape [fw_num_units]. Used to rescale normalized inputs to activation at output gate.
* 57: The backward input layer normalization weights. Optional.
A 1-D tensor of shape [bw_num_units]. Used to rescale normalized inputs to activation at input gate.
* 58: The backward forget layer normalization weights. Optional.
A 1-D tensor of shape [bw_num_units]. Used to rescale normalized inputs to activation at forget gate.
* 59: The backward cell layer normalization weights. Optional.
A 1-D tensor of shape [bw_num_units]. Used to rescale normalized inputs to activation at cell gate.
* 60: The backward output layer normalization weights. Optional.
A 1-D tensor of shape [bw_num_units]. Used to rescale normalized inputs to activation at output gate.
Outputs:
* 0: The forward output.
A 3-D tensor of shape:
If time-major and not merge_outputs:
[max_time, batch_size, fw_output_size]
If time-major and merge_outputs:
[max_time, batch_size, fw_output_size + bw_output_size]
If batch-major and not merge_outputs:
[batch_size, max_time, fw_output_size]
If batch-major and merge_outputs:
[batch_size, max_time, fw_output_size + bw_output_size]
* 1: The backward output. Unused if merge_outputs is true.
A 3-D tensor of shape:
If time-major: [max_time, batch_size, bw_output_size]
If batch-major: [batch_size, max_time, bw_output_size]
* 2: The forward activation state output.
A 2-D tensor of shape [batch_size, fw_output_size] containing an activation state from the last time step in the sequence. This output is optional and can be omitted. If this output is present then outputs 3-5 must be present as well.
Available since API level 30.
* 3: The forward cell state output.
A tensor of shape [batch_size, fw_cell_size] containing a cell state from the last time step in the sequence. This output is optional and can be omitted. If this output is present then outputs 2, 4, 5 must be present as well.
Available since API level 30.
* 4: The backward activation state output.
A 2-D tensor of shape [batch_size, bw_output_size] containing an activation state from the last time step in the sequence. This output is optional and can be omitted. If this output is present then outputs 2, 3, 5 must be present as well.
Available since API level 30.
* 5: The backward cell state output.
A tensor of shape [batch_size, bw_cell_size] containing a cell state from the last time step in the sequence. This output is optional and can be omitted. If this output is present then outputs 2-4 must be present as well.
Available since API level 30.
Available since API level 29.
Important: As of API level 29, there is no way to get the output state tensors out and NNAPI does not maintain internal states. This operator does not support the usage pattern in which multiple cells are chained and state tensors are propagated.
### ANEURALNETWORKS_BIDIRECTIONAL_SEQUENCE_RNN
A recurrent neural network layer that applies a basic RNN cell to a sequence of inputs in forward and backward directions.
This Op unrolls the input along the sequence dimension and implements the following operation for each element in the sequence s = 1…sequence_length:
fw_outputs[s] = fw_state = activation(inputs[s] * fw_input_weights’ + fw_state * fw_recurrent_weights’ + fw_bias)
And for each element in the sequence t = sequence_length…1:
bw_outputs[t] = bw_state = activation(inputs[t] * bw_input_weights’ + bw_state * bw_recurrent_weights’ + bw_bias)
Where:
* “{fw,bw}_input_weights” is a weight matrix that multiplies the inputs;
* “{fw,bw}_recurrent_weights” is a weight matrix that multiplies the current “state” which itself is the output from the previous time step computation;
* “{fw,bw}_bias” is a bias vector (added to each output vector in the batch);
* “activation” is the function passed as the “fused_activation_function”
argument (if not “NONE”).
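The per-timestep update above can be sketched in plain Python for one direction (a toy helper operating on nested lists; the weight matrices multiply the input and state vectors, and ReLU stands in as an example activation):

```python
def rnn_step(x, state, input_weights, recurrent_weights, bias,
             activation=lambda v: max(0.0, v)):  # ReLU as an example
    # new_state = activation(x * W_input' + state * W_recurrent' + bias),
    # computed per output unit; this is one element of the unrolled sequence,
    # and the new state is also that step's output.
    def matvec(w, v):
        return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]
    pre = [a + b + c for a, b, c in zip(matvec(input_weights, x),
                                        matvec(recurrent_weights, state), bias)]
    return [activation(p) for p in pre]
```

The forward pass applies this from s = 1 upward; the backward pass applies the same step over the reversed sequence with its own weights.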
The op supports cross-linking via an auxiliary input. Regular cell feeds one input into the two RNN cells in the following way:
```
       INPUT  (INPUT_REVERSED)
         |         |
    ---------------------
    |  FW_RNN   BW_RNN  |
    ---------------------
         |         |
      FW_OUT     BW_OUT
```
An op with cross-linking takes two inputs and feeds them into the RNN cells in the following way:
```
      AUX_INPUT   (AUX_INPUT_REVERSED)
          |             |
    INPUT | (INPUT_R'D.)|
      |   |       |     |
    -----------------------
    |  \  /        \    / |
    |  FW_RNN     BW_RNN  |
    -----------------------
         |           |
      FW_OUT      BW_OUT
```
The cross-linking mode is enabled iff auxiliary input and auxiliary weights are present. While stacking this op on top of itself, this allows connecting both forward and backward outputs from the previous cell to the next cell’s input.
Since API level 30, parallel linking mode is supported. The mode is enabled if the auxiliary input is present but the auxiliary weights are omitted.
In this case, the cell feeds inputs into the RNN in the following way:
```
       INPUT (AUX_INPUT_REVERSED)
         |         |
    ---------------------
    |  FW_RNN   BW_RNN  |
    ---------------------
         |         |
      FW_OUT     BW_OUT
```
While stacking this op on top of itself, this allows connecting both forward and backward outputs from the previous cell to the next cell’s corresponding inputs.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
The input tensors must all be the same type.
Inputs:
* 0: input.
A 3-D tensor. The shape is defined by the input 13 (timeMajor). If it is set to true, then the input has a shape [maxTime, batchSize,
inputSize], otherwise the input has a shape [batchSize, maxTime,
inputSize].
* 1: fwWeights.
A 2-D tensor of shape [fwNumUnits, inputSize].
* 2: fwRecurrentWeights.
A 2-D tensor of shape [fwNumUnits, fwNumUnits].
* 3: fwBias.
A 1-D tensor of shape [fwNumUnits].
* 4: fwHiddenState.
A 2-D tensor of shape [batchSize, fwNumUnits]. Specifies a hidden state input for the first time step of the computation.
* 5: bwWeights.
A 2-D tensor of shape [bwNumUnits, inputSize].
* 6: bwRecurrentWeights.
A 2-D tensor of shape [bwNumUnits, bwNumUnits].
* 7: bwBias.
A 1-D tensor of shape [bwNumUnits].
* 8: bwHiddenState A 2-D tensor of shape [batchSize, bwNumUnits]. Specifies a hidden state input for the first time step of the computation.
* 9: auxInput.
A 3-D tensor. The shape is defined by the input 13 (timeMajor). If it is set to true, then the input has a shape [maxTime, batchSize,
auxInputSize], otherwise the input has a shape [batchSize, maxTime,
auxInputSize]. Can be omitted. See the docs above for the usage modes explanation.
* 10: fwAuxWeights.
A 2-D tensor of shape [fwNumUnits, auxInputSize]. Can be omitted.
See the docs above for the usage modes explanation.
* 11: bwAuxWeights.
A 2-D tensor of shape [bwNumUnits, auxInputSize]. Can be omitted.
See the docs above for the usage modes explanation.
* 12: fusedActivationFunction.
A {@link FuseCode} value indicating the activation function. If
“NONE” is specified then it results in a linear activation.
* 13: timeMajor An {@link ANEURALNETWORKS_BOOL} scalar specifying the shape format of input and output tensors.
* 14: mergeOutputs An {@link ANEURALNETWORKS_BOOL} scalar specifying if the outputs from forward and backward cells are separate (if set to false) or concatenated (if set to true).
Outputs:
* 0: fwOutput.
A 3-D tensor. The first two dimensions of the shape are defined by the input 13 (timeMajor) and the third dimension is defined by the input 14 (mergeOutputs). If timeMajor is set to true, then the first two dimensions are [maxTime, batchSize], otherwise they are set to
[batchSize, maxTime]. If mergeOutputs is set to true, then the third dimension is equal to (fwNumUnits + bwNumUnits), otherwise it is set to fwNumUnits.
* 1: bwOutput.
A 3-D tensor. If the input 14 (mergeOutputs) is set to true, then this tensor is not produced. The shape is defined by the input 13
(timeMajor). If it is set to true, then the shape is set to
[maxTime, batchSize, bwNumUnits], otherwise the shape is set to
[batchSize, maxTime, bwNumUnits].
* 2: The forward hidden state output.
A 2-D tensor of shape [batchSize, fwNumUnits] containing a hidden state from the last time step in the sequence. This output is optional and can be omitted. If this output is present then output 3 must be present as well.
Available since API level 30.
* 3: The backward hidden state output.
A 2-D tensor of shape [batchSize, bwNumUnits] containing a hidden state from the last time step in the sequence. This output is optional and can be omitted. If this output is present then output 2 must be present as well.
Available since API level 30.
Available since API level 29.
Important: As of API level 29, there is no way to get the output state tensors out and NNAPI does not maintain internal states. This operator does not support the usage pattern in which multiple cells are chained and state tensors are propagated.
### ANEURALNETWORKS_BOX_WITH_NMS_LIMIT
Greedily selects a subset of bounding boxes in descending order of score.
This op applies the NMS algorithm to each class. In each loop of execution,
the box with maximum score gets selected and removed from the pending set.
The scores of the rest of boxes are lowered according to the intersection-over-union (IOU) overlapping with the previously selected boxes and a specified NMS kernel method. Any boxes with score less than a threshold are removed from the pending set.
Three NMS kernels are supported:
* Hard: score_new = score_old * (1 if IoU < threshold else 0)
* Linear: score_new = score_old * (1 if IoU < threshold else 1 - IoU)
* Gaussian: score_new = score_old * exp(- IoU^2 / sigma)
Axis-aligned bounding boxes are represented by its upper-left corner coordinate (x1,y1) and lower-right corner coordinate (x2,y2). A valid bounding box should satisfy x1 <= x2 and y1 <= y2.
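The three score-update kernels above can be written out directly (a hypothetical helper for illustration; the full op additionally loops over boxes, recomputes IoU against each newly selected box, and drops boxes below nms_score_threshold):

```python
import math

def nms_score_update(score, iou, iou_threshold, kernel, sigma=0.5):
    # Rescoring rules for BOX_WITH_NMS_LIMIT's kernels:
    # 0: hard, 1: linear, 2: gaussian. sigma is only used by the gaussian
    # kernel; iou_threshold is ignored by it.
    if kernel == 0:
        return score if iou < iou_threshold else 0.0
    if kernel == 1:
        return score if iou < iou_threshold else score * (1.0 - iou)
    return score * math.exp(-(iou ** 2) / sigma)
```

Note that the hard kernel either keeps or zeroes a score, while linear and gaussian decay it smoothly with overlap (this smooth decay is what "soft NMS" refers to).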
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Inputs:
* 0: A 2-D Tensor of shape [num_rois, num_classes], specifying the score of each bounding box proposal. The boxes are grouped by batches in the first dimension. Zero num_rois is supported for this tensor.
* 1: A 2-D Tensor specifying the bounding boxes of shape
[num_rois, num_classes * 4], organized in the order [x1, y1, x2, y2].
The boxes are grouped by batches in the first dimension. The sequential order of the boxes corresponds with input0. For input0 of type
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, this tensor should be of
{@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, with zeroPoint of 0 and scale of 0.125.
For input0 of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
this tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM},
with zeroPoint of -128 and scale of 0.125.
Zero num_rois is supported for this tensor.
* 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape
[num_rois], specifying the batch index of each box. Boxes with the same batch index are grouped together.
* 3: An {@link ANEURALNETWORKS_FLOAT32} scalar, score_threshold. Boxes with scores lower than the threshold are filtered before sending to the NMS algorithm.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the maximum number of selected bounding boxes for each image. Set to a negative value for unlimited number of output bounding boxes.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the NMS kernel method, options are 0:hard, 1:linear, 2:gaussian.
* 6: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the IoU threshold in hard and linear NMS kernel. This field is ignored if gaussian kernel is selected.
* 7: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the sigma in gaussian NMS kernel. This field is ignored if gaussian kernel is not selected.
* 8: An {@link ANEURALNETWORKS_FLOAT32} scalar, nms_score_threshold.
Boxes with scores lower than the threshold are dropped during the score updating phase in soft NMS.
Outputs:
* 0: A 1-D Tensor of the same {@link OperandCode} as input0, with shape
[num_output_rois], specifying the score of each output box. The boxes are grouped by batches, but the sequential order in each batch is not guaranteed. For type of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
or {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the scale and zero point must be the same as input0.
* 1: A 2-D Tensor of the same {@link OperandCode} as input1, with shape
[num_output_rois, 4], specifying the coordinates of each output bounding box with the same format as input1. The sequential order of the boxes corresponds with output0. For type of
{@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, the scale must be 0.125 and the zero point must be 0.
* 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape
[num_output_rois], specifying the class of each output box. The sequential order of the boxes corresponds with output0.
* 3: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape
[num_output_rois], specifying the batch index of each box. Boxes with the same batch index are grouped together.
Available since API level 29.
### ANEURALNETWORKS_CAST
Casts a tensor to a type.
This operation ignores the scale and zeroPoint of quantized tensors,
e.g. it treats a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} input as a tensor of uint8 values.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
Since API level 30, casting tensors of the following
{@link OperandCode} to the same {@link OperandCode} is supported:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
Supported tensor rank: from 1
Inputs:
* 0: A tensor.
Outputs:
* 0: A tensor with the same shape as input0.
Available since API level 29.
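Because scale and zeroPoint are ignored, casting a quantized tensor converts its raw stored integers rather than the dequantized values. A NumPy sketch of that behaviour (illustration only; real models are built through the C API):

```python
import numpy as np

# CAST converts values elementwise; for a quantized input the raw
# integer storage is converted as-is (scale and zeroPoint are ignored).
quant = np.array([0, 128, 255], dtype=np.uint8)  # TENSOR_QUANT8_ASYMM storage
as_float = quant.astype(np.float32)
print(as_float)  # raw values, not dequantized
```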
### ANEURALNETWORKS_CHANNEL_SHUFFLE
Shuffle the channels of the input tensor.
Given an input tensor and an integer value of num_groups, CHANNEL_SHUFFLE divides the channel dimension into num_groups groups, and reorganizes the channels by grouping channels with the same index in each group.
Along the channel dimension, the output is calculated using this formula:
```
output_channel[k * num_groups + g] = input_channel[g * group_size + k]
```
where group_size = num_channels / num_groups
The number of channels must be divisible by num_groups.
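The formula above can be sketched in NumPy (illustration only; the NNAPI operation is driven through the C API):

```python
import numpy as np

def channel_shuffle(x, num_groups, axis=-1):
    """Reorder channels per output[k * num_groups + g] = input[g * group_size + k]."""
    num_channels = x.shape[axis]
    group_size = num_channels // num_groups
    # Split the channel axis into (num_groups, group_size), then interleave
    # by swapping those two axes before flattening back.
    x = np.moveaxis(x, axis, -1)
    shape = x.shape
    x = x.reshape(shape[:-1] + (num_groups, group_size))
    x = np.swapaxes(x, -1, -2).reshape(shape)
    return np.moveaxis(x, -1, axis)

channels = np.arange(6)  # two groups of three: [0 1 2 | 3 4 5]
print(channel_shuffle(channels, num_groups=2))  # → [0 3 1 4 2 5]
```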
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor, specifying the tensor to be shuffled.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the number of groups.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the dimension channel shuffle would be performed on. Negative index is used to specify axis from the end (e.g. -1 for the last axis). Must be in the range [-n, n).
Outputs:
* 0: A tensor of the same {@link OperandCode} and same shape as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 29.
### ANEURALNETWORKS_DETECTION_POSTPROCESSING
Apply postprocessing steps to bounding box detections.
Bounding box detections are generated by applying transformation on a set of predefined anchors with the bounding box deltas from bounding box regression. A final step of hard NMS is applied to limit the number of returned boxes.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Inputs:
* 0: A 3-D Tensor of shape [batches, num_anchors, num_classes], specifying the score of each anchor with each class. Class 0 for each
[batches, num_anchors, 0] is background and will be ignored.
* 1: A 3-D Tensor of shape [batches, num_anchors, length_box_encoding], with the first four values in length_box_encoding specifying the bounding box deltas. The box deltas are encoded in the order of [dy, dx, dh, dw],
where dy and dx are the linear-scale relative correction factors for the center position of the bounding box with respect to the width and height,
dh and dw are the log-scale relative correction factors for the width and height. All the entries in length_box_encoding beyond the first four values are ignored in this operation.
* 2: A 2-D Tensor of shape [num_anchors, 4], specifying the shape of each predefined anchor, with format [ctr_y, ctr_x, h, w], where ctr_y and ctr_x are the center position of the box, and h and w are the height and the width.
* 3: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the scaling factor for dy in bounding box deltas.
* 4: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the scaling factor for dx in bounding box deltas.
* 5: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the scaling factor for dh in bounding box deltas.
* 6: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the scaling factor for dw in bounding box deltas.
* 7: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to use the regular multi-class NMS algorithm that does NMS separately for each class,
set to false for a faster algorithm that does only one single NMS using the highest class score.
* 8: An {@link ANEURALNETWORKS_INT32} scalar, max_num_detections, specifying the maximum number of boxes for the output. Boxes with the lowest scores are discarded to meet the limit.
* 9: An {@link ANEURALNETWORKS_INT32} scalar, only used when input7 is set to false, specifying the maximum number of classes per detection.
* 10: An {@link ANEURALNETWORKS_INT32} scalar, only used when input7 is set to true, specifying the maximum number of detections when applying NMS algorithm for each single class.
* 11: A scalar, score_threshold. Boxes with scores lower than the threshold are filtered before sending to the NMS algorithm. The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of
{@link ANEURALNETWORKS_TENSOR_FLOAT16} and of
{@link ANEURALNETWORKS_FLOAT32} if input0 is of
{@link ANEURALNETWORKS_TENSOR_FLOAT32}.
* 12: A scalar, specifying the IoU threshold for hard NMS. The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of
{@link ANEURALNETWORKS_TENSOR_FLOAT16} and of
{@link ANEURALNETWORKS_FLOAT32} if input0 is of
{@link ANEURALNETWORKS_TENSOR_FLOAT32}.
* 13: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to include background class in the list of label map for the output, set to false to not include the background. When the background class is included, it has label 0 and the output classes start at 1 in the label map, otherwise, the output classes start at 0.
Outputs:
* 0: A 2-D tensor of the same {@link OperandCode} as input0, with shape
[batches, max_num_detections], specifying the score of each output detection.
* 1: A 3-D tensor of shape [batches, max_num_detections, 4], specifying the coordinates of each output bounding box, with format
[y1, x1, y2, x2].
* 2: A 2-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape
[batches, max_num_detections], specifying the class label for each output detection.
* 3: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape [batches],
specifying the number of valid output detections for each batch.
Available since API level 29.
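The delta decoding described for inputs 1 through 6 can be sketched as follows. This is an illustrative formulation of the common SSD-style decode under the stated linear-center / log-size convention, not the NNAPI C API itself:

```python
import math

def decode_box(anchor, deltas, scales):
    """Decode one [dy, dx, dh, dw] delta against an anchor [ctr_y, ctr_x, h, w].
    dy/dx shift the center linearly; dh/dw scale the size log-linearly.
    `scales` stands in for the input3..input6 scaling factors."""
    ctr_y, ctr_x, h, w = anchor
    dy, dx, dh, dw = (d / s for d, s in zip(deltas, scales))
    ctr_y, ctr_x = ctr_y + dy * h, ctr_x + dx * w
    h, w = h * math.exp(dh), w * math.exp(dw)
    return [ctr_y - h / 2, ctr_x - w / 2, ctr_y + h / 2, ctr_x + w / 2]

# Zero deltas reproduce the anchor as corner coordinates [y1, x1, y2, x2].
print(decode_box([5.0, 5.0, 4.0, 4.0], [0, 0, 0, 0], [10, 10, 5, 5]))
```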
### ANEURALNETWORKS_EQUAL
For input tensors x and y, computes x == y elementwise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
This operation supports broadcasting.
Inputs:
* 0: A tensor.
* 1: A tensor of the same {@link OperandCode} and dimensions compatible with input0.
Outputs:
* 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}.
Available since API level 29.
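A NumPy sketch of the broadcasting comparison (the same pattern applies to GREATER, GREATER_EQUAL, LESS, LESS_EQUAL, and NOT_EQUAL; illustration only):

```python
import numpy as np

x = np.array([[1, 2, 3]])   # shape (1, 3)
y = np.array([[1], [3]])    # shape (2, 1); broadcasts against x
eq = x == y                 # boolean tensor of shape (2, 3)
print(eq)
```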
### ANEURALNETWORKS_EXP
Computes exponential of x element-wise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: from 1.
Inputs:
* 0: A tensor.
Outputs:
* 0: The output tensor of same shape as input0.
Available since API level 29.
### ANEURALNETWORKS_EXPAND_DIMS
Inserts a dimension of 1 into a tensor’s shape.
Given a tensor input, this operation inserts a dimension of 1 at the given dimension index of input’s shape. The dimension index starts at zero; if you specify a negative dimension index, it is counted backward from the end.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: An n-D tensor.
* 1: An {@link ANEURALNETWORKS_INT32} scalar specifying the dimension index to expand. Must be in the range [-(n + 1), (n + 1)).
Outputs:
* 0: An (n + 1)-D tensor with the same {@link OperandCode} and data as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 29.
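NumPy's `expand_dims` follows the same index convention, including negative indices (illustration only):

```python
import numpy as np

x = np.zeros((2, 3))                  # rank n = 2, so valid axes are [-3, 3)
print(np.expand_dims(x, 1).shape)     # insert at index 1 → (2, 1, 3)
print(np.expand_dims(x, -1).shape)    # negative index counts from the end → (2, 3, 1)
```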
### ANEURALNETWORKS_GATHER
Gathers values along an axis.
Produces an output tensor with shape input0.dimension[:axis] + indices.dimension + input0.dimension[axis + 1:]
where:
```
# Vector indices (output is rank(input0)).
output[a_0, ..., a_n, i, b_0, ..., b_n] =
  input0[a_0, ..., a_n, indices[i], b_0, ..., b_n]

# Higher rank indices (output is rank(input0) + rank(indices) - 1).
output[a_0, ..., a_n, i, ..., j, b_0, ..., b_n] =
  input0[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n]
```
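The shape rule above matches NumPy's `take` along an axis (illustration only):

```python
import numpy as np

params = np.array([[10, 11], [20, 21], [30, 31]])  # shape (3, 2), gather on axis 0
indices = np.array([[2, 0], [1, 1]])               # shape (2, 2)
# Output shape = params.shape[:axis] + indices.shape + params.shape[axis+1:]
out = np.take(params, indices, axis=0)             # shape (2, 2, 2)
print(out[0, 0])  # row selected by indices[0, 0] == 2 → [30 31]
```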
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: An n-D tensor from which to gather values.
* 1: An {@link ANEURALNETWORKS_INT32} scalar specifying the axis.
Negative index is used to specify axis from the end
(e.g. -1 for the last axis). Must be in the range [-n, n).
* 2: A k-D tensor {@link ANEURALNETWORKS_TENSOR_INT32} of indices.
The values must be in the bounds of the corresponding dimensions of input0.
Outputs:
* 0: An (n + k - 1)-D tensor with the same {@link OperandCode} as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 29.
### ANEURALNETWORKS_GENERATE_PROPOSALS
Generate axis-aligned bounding box proposals.
Bounding box proposals are generated by applying transformation on a set of predefined anchors with the bounding box deltas from bounding box regression. A final step of hard NMS is applied to limit the number of returned boxes.
Axis-aligned bounding boxes are represented by their upper-left corner coordinate (x1,y1) and lower-right corner coordinate (x2,y2). A valid bounding box should satisfy x1 <= x2 and y1 <= y2.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Inputs:
* 0: A 4-D Tensor specifying the score of each anchor at each location. With “NHWC” data layout, the tensor shape is
[batches, height, width, num_anchors]. With “NCHW” data layout,
the tensor shape is [batches, num_anchors, height, width].
* 1: A 4-D Tensor specifying the bounding box deltas. With “NHWC” data layout, the tensor shape is [batches, height, width, num_anchors * 4].
With “NCHW” data layout, the tensor shape is
[batches, num_anchors * 4, height, width]. The box deltas are encoded in the order of [dx, dy, dw, dh], where dx and dy are the linear-scale relative correction factors for the center position of the bounding box with respect to the width and height, dw and dh are the log-scale relative correction factors for the width and height. The last dimension is the channel dimension.
* 2: A 2-D Tensor of shape [num_anchors, 4], specifying the shape of each predefined anchor, with format [x1, y1, x2, y2]. For input0 of type
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, this tensor should be of
{@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}, with scale of 0.125.
* 3: A 2-D Tensor of shape [batches, 2], specifying the size of each image in the batch, with format [image_height, image_width].
For input0 of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, this tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}, with scale of 0.125.
* 4: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio from the height of original image to the height of feature map.
* 5: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio from the width of original image to the width of feature map.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the maximum number of boxes before going into the hard NMS algorithm. Boxes with the lowest scores are discarded to meet the limit. Set to a non-positive value for unlimited number.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the maximum number of boxes returning from the hard NMS algorithm. Boxes with the lowest scores are discarded to meet the limit. Set to a non-positive value for unlimited number.
* 8: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the IoU threshold for hard NMS.
* 9: An {@link ANEURALNETWORKS_FLOAT32} scalar, min_size. Boxes with height or width lower than the absolute threshold are filtered out.
* 10: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify NCHW data layout for input0 and input1. Set to false for NHWC.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0, of shape
[num_output_rois], specifying the score of each output box.
The boxes are grouped by batches, but the sequential order in each batch is not guaranteed. For type of
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, the scale and zero point must be the same as input0.
* 1: A tensor of the same {@link OperandCode} as input3, of shape
[num_output_rois, 4], specifying the coordinates of each output bounding box for each class, with format [x1, y1, x2, y2].
The sequential order of the boxes corresponds with output0.
For type of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, the scale must be 0.125 and the zero point must be 0.
* 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape
[num_output_rois], specifying the batch index of each box. Boxes with the same batch index are grouped together.
Available since API level 29.
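The hard NMS step (inputs 7 and 8) greedily keeps the highest-scoring boxes and discards any remaining box whose IoU with a kept box exceeds the threshold. A minimal sketch over [x1, y1, x2, y2] boxes (illustration only):

```python
import numpy as np

def iou(a, b):
    """IoU of two axis-aligned boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def hard_nms(boxes, scores, iou_threshold):
    """Greedy NMS: keep boxes by descending score, drop heavy overlaps."""
    order = np.argsort(scores)[::-1].tolist()
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_threshold]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(hard_nms(boxes, scores, iou_threshold=0.5))  # box 1 overlaps box 0 too much
```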
### ANEURALNETWORKS_GREATER
For input tensors x and y, computes x > y elementwise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
This operation supports broadcasting.
Inputs:
* 0: A tensor.
* 1: A tensor of the same {@link OperandCode} and dimensions compatible with input0.
Outputs:
* 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}.
Available since API level 29.
### ANEURALNETWORKS_GREATER_EQUAL
For input tensors x and y, computes x >= y elementwise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
This operation supports broadcasting.
Inputs:
* 0: A tensor.
* 1: A tensor of the same {@link OperandCode} and dimensions compatible with input0.
Outputs:
* 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}.
Available since API level 29.
### ANEURALNETWORKS_GROUPED_CONV_2D
Performs a grouped 2-D convolution operation.
Given an input tensor of shape [batches, height, width, depth_in] and a filter tensor of shape [depth_out, filter_height, filter_width, depth_group]
containing depth_out convolutional filters of depth depth_group, GROUPED_CONV applies a group of different filters to each input channel group, then concatenates the results together.
Specifically, the input channels are divided into num_groups groups, each with depth depth_group, i.e. depth_in = num_groups * depth_group. The convolutional filters are also divided into num_groups groups, i.e. depth_out is divisible by num_groups. GROUPED_CONV applies each group of filters to the corresponding input channel group, and the results are concatenated together.
The output dimensions are functions of the filter dimensions, stride, and padding.
The values in the output tensor are computed as:
```
output[b, i, j, g * channel_multiplier + q] =
sum_{di, dj, dk} (
input[b, strides[1] * i + di, strides[2] * j + dj,
g * depth_group + dk] *
filter[g * channel_multiplier + q, di, dj, dk]
) + bias[channel]
```
where channel_multiplier = depth_out / num_groups
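A naive NHWC implementation of the formula above, restricted for clarity to stride 1, no padding, and no fused activation (illustration only):

```python
import numpy as np

def grouped_conv2d(x, filt, bias, num_groups):
    """Naive NHWC grouped convolution (stride 1, no padding, no activation).
    x: [N, H, W, depth_in], filt: [depth_out, fh, fw, depth_group]."""
    n, h, w, depth_in = x.shape
    depth_out, fh, fw, depth_group = filt.shape
    assert depth_in == num_groups * depth_group and depth_out % num_groups == 0
    channel_multiplier = depth_out // num_groups
    out = np.zeros((n, h - fh + 1, w - fw + 1, depth_out))
    for g in range(num_groups):
        for q in range(channel_multiplier):
            c = g * channel_multiplier + q  # output channel
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    # Only this group's slice of input channels contributes.
                    patch = x[:, i:i + fh, j:j + fw,
                              g * depth_group:(g + 1) * depth_group]
                    out[:, i, j, c] = np.sum(patch * filt[c], axis=(1, 2, 3)) + bias[c]
    return out

x = np.ones((1, 3, 3, 4))        # depth_in = 4, split into num_groups = 2
filt = np.ones((2, 2, 2, 2))     # depth_out = 2, depth_group = 2
out = grouped_conv2d(x, filt, np.zeros(2), num_groups=2)
print(out.shape)  # (1, 2, 2, 2)
```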
Supported tensor {@link OperandCode} configurations:
* 16 bit floating point:
  * {@link ANEURALNETWORKS_TENSOR_FLOAT16} for input, filter, output, and bias.
* 32 bit floating point:
  * {@link ANEURALNETWORKS_TENSOR_FLOAT32} for input, filter, output, and bias.
* Quantized:
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, filter, and output.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to input.scale * filter.scale).
* Quantized signed (since API level 30):
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, filter, and output.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to input.scale * filter.scale).
* Quantized with symmetric per channel quantization for the filter:
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, and output.
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0, each value scaling is separate and equal to input.scale * filter.scales[channel]).
* Quantized signed with filter symmetric per channel quantization (since API level 30):
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, and output.
  * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
  * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0, each value scaling is separate and equal to input.scale * filter.scales[channel]).
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
Both explicit padding and implicit padding are supported.
Inputs (explicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth_in],
specifying the input, where depth_in = num_groups * depth_group.
* 1: A 4-D tensor, of shape
[depth_out, filter_height, filter_width, depth_group], specifying the filter, where depth_out must be divisible by num_groups. For tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}
the channel dimension (channelDim at
{@link ANeuralNetworksSymmPerChannelQuantParams}) must be set to 0.
* 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} or
{@link ANEURALNETWORKS_TENSOR_FLOAT16}, the bias must be of the same type.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale == input_scale * filter_scale. For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}, the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale of 0. The actual scale of each value ‘i’ is equal to bias_scale[i] = input_scale * filter_scale[i].
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the left, in the ‘width’ dimension.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the right, in the ‘width’ dimension.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the top, in the ‘height’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the bottom, in the ‘height’ dimension.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 9: An {@link ANEURALNETWORKS_INT32} scalar, specifying the number of groups.
* 10: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 11: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify NCHW data layout for input0 and output0. Set to false for NHWC.
Inputs (implicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth_in],
specifying the input, where depth_in = num_groups * depth_group.
* 1: A 4-D tensor, of shape
[depth_out, filter_height, filter_width, depth_group], specifying the filter, where depth_out must be divisible by num_groups. For tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}
the channel dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim)
must be set to 0.
* 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} or
{@link ANEURALNETWORKS_TENSOR_FLOAT16}, the bias must be of the same type.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale == input_scale * filter_scale. For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}, the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale of 0. The actual scale of each value ‘i’ is equal to bias_scale[i] = input_scale * filter_scale[i].
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit padding scheme, has to be one of the
{@link PaddingCode} values.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the number of groups.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 8: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify NCHW data layout for input0 and output0. Set to false for NHWC.
Outputs:
* 0: The output 4-D tensor, of shape
[batches, out_height, out_width, depth_out].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint can be different from inputs’ scale and zeroPoint.
Available since API level 29.
### ANEURALNETWORKS_HEATMAP_MAX_KEYPOINT
Localize the maximum keypoints from heatmaps.
This operation approximates the accurate maximum keypoint scores and indices after bicubic upscaling by using Taylor expansion up to the quadratic term.
The bounding box is represented by its upper-left corner coordinate
(x1,y1) and lower-right corner coordinate (x2,y2) in the original image.
A valid bounding box should satisfy x1 <= x2 and y1 <= y2.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
Inputs:
* 0: A 4-D Tensor of shape
[num_boxes, heatmap_size, heatmap_size, num_keypoints],
specifying the heatmaps, the height and width of heatmaps should be the same, and must be greater than or equal to 2.
* 1: A 2-D Tensor of shape [num_boxes, 4], specifying the bounding boxes,
each with format [x1, y1, x2, y2]. For input0 of type
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, this tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, with zeroPoint of 0 and scale of 0.125.
For input0 of type
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, this tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, with zeroPoint of -128 and scale of 0.125.
* 2: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify NCHW data layout for input0. Set to false for NHWC.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0, with shape
[num_boxes, num_keypoints], specifying score of the keypoints.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint can be different from input0 scale and zeroPoint.
* 1: A tensor of the same {@link OperandCode} as input1, with shape
[num_boxes, num_keypoints, 2], specifying the location of the keypoints, the second dimension is organized as
[keypoint_x, keypoint_y].
For type of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, the scale must be 0.125 and the zero point must be 0.
Available since API level 29.
### ANEURALNETWORKS_INSTANCE_NORMALIZATION
Applies instance normalization to the input tensor.
The values in the output tensor are computed as:
```
output[b, h, w, c] =
(input[b, h, w, c] - mean[b, c]) * gamma /
sqrt(var[b, c] + epsilon) + beta
```
Where the mean and variance are computed across the spatial dimensions:
```
mean[b, c] =
sum_{h, w}(input[b, h, w, c]) / sum(1)
var[b, c] =
sum_{h, w}(pow(input[b, h, w, c] - mean[b, c], 2)) / sum(1)
```
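The normalization above can be sketched in NumPy for the NHWC layout, where the mean and variance are taken over the spatial (height, width) dimensions of each (batch, channel) slice (illustration only):

```python
import numpy as np

def instance_norm(x, gamma, beta, eps):
    """Normalize each (batch, channel) slice over spatial dims (NHWC layout)."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) * gamma / np.sqrt(var + eps) + beta

x = np.random.default_rng(0).normal(size=(2, 4, 4, 3))
y = instance_norm(x, gamma=1.0, beta=0.0, eps=1e-5)
# Each (b, c) slice comes out with ~zero mean and ~unit variance.
```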
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
Inputs:
* 0: An n-D tensor, specifying the tensor to be normalized.
* 1: A scalar, specifying gamma, the scale applied to the normalized tensor. The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of
{@link ANEURALNETWORKS_FLOAT32} if input0 is of
{@link ANEURALNETWORKS_TENSOR_FLOAT32}.
* 2: A scalar, specifying beta, the offset applied to the normalized tensor. The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of
{@link ANEURALNETWORKS_FLOAT32} if input0 is of
{@link ANEURALNETWORKS_TENSOR_FLOAT32}.
* 3: A scalar, specifying epsilon, the small value added to variance to avoid dividing by zero. The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of
{@link ANEURALNETWORKS_FLOAT32} if input0 is of
{@link ANEURALNETWORKS_TENSOR_FLOAT32}.
* 4: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify NCHW data layout for input0 and output0. Set to false for NHWC.
Outputs:
* 0: A tensor of the same {@link OperandCode} and same shape as input0.
Available since API level 29.
### ANEURALNETWORKS_LESS
For input tensors x and y, computes x < y elementwise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
This operation supports broadcasting.
Inputs:
* 0: A tensor.
* 1: A tensor of the same {@link OperandCode} and dimensions compatible with input0.
Outputs:
* 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}.
Available since API level 29.
### ANEURALNETWORKS_LESS_EQUAL
For input tensors x and y, computes x <= y elementwise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
This operation supports broadcasting.
Inputs:
* 0: A tensor.
* 1: A tensor of the same {@link OperandCode} and dimensions compatible with input0.
Outputs:
* 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}.
Available since API level 29.
### ANEURALNETWORKS_LOG
Computes natural logarithm of x element-wise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: from 1.
Inputs:
* 0: A tensor.
Outputs:
* 0: The output tensor of same shape as input0.
Available since API level 29.
### ANEURALNETWORKS_LOGICAL_AND
Returns the truth value of x AND y element-wise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
Supported tensor rank: from 1
This operation supports broadcasting.
Inputs:
* 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}.
* 1: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8} and dimensions compatible with input0.
Outputs:
* 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}.
Available since API level 29.
### ANEURALNETWORKS_LOGICAL_NOT
Computes the truth value of NOT x element-wise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
Supported tensor rank: from 1.
Inputs:
* 0: A tensor.
Outputs:
* 0: The output tensor of same shape as input0.
Available since API level 29.
### ANEURALNETWORKS_LOGICAL_OR
Returns the truth value of x OR y element-wise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
Supported tensor rank: from 1
This operation supports broadcasting.
Inputs:
* 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}.
* 1: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8} and dimensions compatible with input0.
Outputs:
* 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}.
Available since API level 29.
### ANEURALNETWORKS_LOG_SOFTMAX
Computes the log softmax activations given logits.
The output is calculated using this formula:
```
output = logits * beta - log(reduce_sum(exp(logits * beta), axis))
```
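A NumPy sketch of the formula (illustration only; the max-subtraction is a standard numerical-stability step that leaves the result unchanged):

```python
import numpy as np

def log_softmax(logits, beta=1.0, axis=-1):
    """output = logits * beta - log(reduce_sum(exp(logits * beta), axis))"""
    z = logits * beta
    z = z - z.max(axis=axis, keepdims=True)  # stability; same result
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

out = log_softmax(np.array([1.0, 2.0, 3.0]))
print(np.exp(out).sum())  # exponentials sum to 1 along the reduced axis
```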
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: from 1.
Inputs:
* 0: A tensor specifying the input logits.
* 1: A scalar, specifying the positive scaling factor for the exponent,
beta.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the beta value must be of {@link ANEURALNETWORKS_FLOAT16}.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the beta value must be of {@link ANEURALNETWORKS_FLOAT32}.
* 2: An {@link ANEURALNETWORKS_INT32} scalar specifying the axis to reduce across. Negative index is used to specify axis from the end (e.g. -1 for the last axis). Must be in the range [-n, n).
Outputs:
* 0: The output tensor of the same {@link OperandCode} and shape as input0.
Available since API level 29.
### ANEURALNETWORKS_MAXIMUM
Returns the element-wise maximum of two tensors.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1.
Inputs:
* 0: A tensor.
* 1: A tensor of the same {@link OperandCode} and compatible dimensions with input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} tensor,
the scales and zeroPoint can be different from input0 scale and zeroPoint.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint can be different from inputs’ scale and zeroPoint.
Available since API level 29.
### ANEURALNETWORKS_MINIMUM
Returns the element-wise minimum of two tensors.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1.
Inputs:
* 0: A tensor.
* 1: A tensor of the same {@link OperandCode} and compatible dimensions with input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} tensor,
the scales and zeroPoint can be different from input0 scale and zeroPoint.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint can be different from inputs’ scale and zeroPoint.
Available since API level 29.
### ANEURALNETWORKS_NEG
Computes numerical negative value element-wise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
Supported tensor rank: from 1.
Inputs:
* 0: A tensor.
Outputs:
* 0: The output tensor of same shape as input0.
Available since API level 29.
### ANEURALNETWORKS_NOT_EQUAL
For input tensors x and y, computes x != y elementwise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
This operation supports broadcasting.
Inputs:
* 0: A tensor.
* 1: A tensor of the same {@link OperandCode} and dimensions compatible with input0.
Outputs:
* 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}.
Available since API level 29.
### ANEURALNETWORKS_PAD_V2
Pads a tensor with the given constant value according to the specified paddings.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor, specifying the tensor to be padded.
* 1: A 2-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the paddings for each spatial dimension of the input tensor. The shape of the tensor must be {rank(input0), 2}.
padding[i, 0] specifies the number of elements to be padded in the front of dimension i.
padding[i, 1] specifies the number of elements to be padded after the end of dimension i.
* 2: A scalar specifying the value to use for padding input0.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the pad value must be of {@link ANEURALNETWORKS_FLOAT16}.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the pad value must be of {@link ANEURALNETWORKS_FLOAT32}.
For input tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the pad value must be of {@link ANEURALNETWORKS_INT32}. The scale and zeroPoint are assumed to be the same as in input0.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0. The output tensor has the same rank as input0, and each dimension of the output tensor has the same size as the corresponding dimension of the input tensor plus the size of the padding:
output0.dimension[i] = padding[i, 0] + input0.dimension[i] + padding[i, 1]
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 29.
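For the 1-D case, the padding rule and the output-size formula above can be sketched as (illustrative Python, not the NNAPI C API):

```python
def pad_v2_1d(values, pad_front, pad_back, pad_value):
    # Output size is padding[0] + input.dimension + padding[1],
    # with the constant pad_value filling the new elements.
    return [pad_value] * pad_front + list(values) + [pad_value] * pad_back

padded = pad_v2_1d([1, 2, 3], pad_front=2, pad_back=1, pad_value=0)
assert padded == [0, 0, 1, 2, 3, 0]
assert len(padded) == 2 + 3 + 1  # the output-size formula
```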
### ANEURALNETWORKS_POW
Computes the power of one value to another.
Given a tensor base and a tensor exponent, this operation computes base^exponent elementwise.
This operation supports broadcasting. The size of the output is the maximum size along each dimension of the input operands. It starts with the trailing dimensions, and works its way forward.
For example:
```
base.dimension     = {4, 1, 2}
exponent.dimension = {5, 4, 3, 1}
output.dimension   = {5, 4, 3, 2}
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: from 1
Inputs:
* 0: A tensor specifying the base.
* 1: A tensor specifying the exponent.
Outputs:
* 0: An output tensor.
Available since API level 29.
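The broadcasting rule used here (and by ANEURALNETWORKS_PRELU below) can be sketched in Python; this computes only the output shape, under the assumption that two dimensions are compatible when they are equal or one of them is 1:

```python
def broadcast_shape(a, b):
    # Align trailing dimensions; a missing dimension counts as 1,
    # and each output dimension is the max of the aligned pair.
    out = []
    for i in range(1, max(len(a), len(b)) + 1):
        da = a[-i] if i <= len(a) else 1
        db = b[-i] if i <= len(b) else 1
        assert da == db or da == 1 or db == 1, "incompatible dimensions"
        out.append(max(da, db))
    return list(reversed(out))

# The example from the text: base {4, 1, 2} with exponent {5, 4, 3, 1}.
assert broadcast_shape([4, 1, 2], [5, 4, 3, 1]) == [5, 4, 3, 2]
```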
### ANEURALNETWORKS_PRELU
Parametric Rectified Linear Unit.
It follows: f(x) = alpha * x for x < 0, f(x) = x for x >= 0, where alpha is a learned array with the same {@link OperandCode} and compatible dimensions as input x.
Two dimensions are compatible when:
1. they are equal, or
2. one of them is 1.
The size of the output is the maximum size along each dimension of the input operands. It starts with the trailing dimensions, and works its way forward.
Example:
```
input.dimension  = {4, 1, 2}
alpha.dimension  = {5, 4, 3, 1}
output.dimension = {5, 4, 3, 2}
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: A tensor, specifying the input.
* 1: A tensor of the same {@link OperandCode}, and compatible dimensions as input0, specifying the alpha.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scales and zeroPoint can be different from input0 scale and zeroPoint.
Available since API level 29.
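The element-wise rule f(x) = alpha * x for x < 0, f(x) = x for x >= 0 can be sketched as (illustrative Python with a per-element alpha, ignoring broadcasting):

```python
def prelu(x, alpha):
    # alpha scales only the negative inputs; non-negative inputs pass through.
    return [a * v if v < 0 else v for v, a in zip(x, alpha)]

assert prelu([-2.0, 0.0, 3.0], [0.5, 0.5, 0.5]) == [-1.0, 0.0, 3.0]
```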
### ANEURALNETWORKS_QUANTIZE
Quantizes the input tensor.
The formula for {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} output tensor is:
```
output = max(0, min(255, round(input / scale) + zeroPoint))
```
The formula for {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} output tensor is:
```
output = max(-128, min(127, round(input / scale) + zeroPoint))
```
Supported input tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported output tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: A tensor, may be zero-sized.
Outputs:
* 0: The output tensor of same shape as input0, but with
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}.
Available since API level 29.
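Both clamp-and-round formulas above can be sketched together in Python (illustrative only; rounding mode details may differ between drivers):

```python
def quantize(values, scale, zero_point, signed=False):
    # asymm:        max(0,    min(255, round(input / scale) + zeroPoint))
    # asymm signed: max(-128, min(127, round(input / scale) + zeroPoint))
    lo, hi = (-128, 127) if signed else (0, 255)
    return [max(lo, min(hi, round(v / scale) + zero_point)) for v in values]

assert quantize([0.0, 1.0, 300.0], scale=1.0, zero_point=0) == [0, 1, 255]
assert quantize([-200.0, 5.0], scale=1.0, zero_point=0, signed=True) == [-128, 5]
```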
### ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
A version of quantized LSTM, using 16 bit quantization for internal state.
There is no projection layer, so cell state size is equal to the output size.
Inputs:
* 0: A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [numBatches, inputSize] specifying the input to the LSTM cell. Tensor is quantized with a fixed quantization range of
[-1, 127/128] (scale = 1/128, zeroPoint = 128).
* 1: The input-to-input weights.
A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [outputSize, inputSize] specifying input-to-input part of weights for fully-connected layer inside the LSTM cell.
Quantization zero point and scale must be the same across all the weights.
* 2: The input-to-forget weights.
A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [outputSize, inputSize] specifying input-to-forget part of weights for fully-connected layer inside the LSTM cell.
Quantization zero point and scale must be the same across all the weights.
* 3: The input-to-cell weights.
A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [outputSize, inputSize] specifying input-to-cell part of weights for fully-connected layer inside the LSTM cell.
Quantization zero point and scale must be the same across all the weights.
* 4: The input-to-output weights.
A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [outputSize, inputSize] specifying input-to-output part of weights for fully-connected layer inside the LSTM cell.
Quantization zero point and scale must be the same across all the weights.
* 5: The recurrent-to-input weights.
A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [outputSize, outputSize] specifying recurrent-to-input part of weights for fully-connected layer inside the LSTM cell.
Quantization zero point and scale must be the same across all the weights.
* 6: The recurrent-to-forget weights.
A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [outputSize, outputSize] specifying recurrent-to-forget part of weights for fully-connected layer inside the LSTM cell.
Quantization zero point and scale must be the same across all the weights.
* 7: The recurrent-to-cell weights.
A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [outputSize, outputSize] specifying recurrent-to-cell part of weights for fully-connected layer inside the LSTM cell.
Quantization zero point and scale must be the same across all the weights.
* 8: The recurrent-to-output weights.
A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [outputSize, outputSize] specifying recurrent-to-output part of weights for fully-connected layer inside the LSTM cell.
Quantization zero point and scale must be the same across all the weights.
* 9: The input gate bias.
A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} and shape
[outputSize] specifying the bias for the fully-connected layer inside the LSTM cell. Bias is quantized with scale being a product of input and weights scales and zeroPoint equal to 0.
* 10:The forget gate bias.
A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} and shape
[outputSize] specifying the bias for the fully-connected layer inside the LSTM cell. Bias is quantized with scale being a product of input and weights scales and zeroPoint equal to 0.
* 11:The cell bias.
A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} and shape
[outputSize] specifying the bias for the fully-connected layer inside the LSTM cell. Bias is quantized with scale being a product of input and weights scales and zeroPoint equal to 0.
* 12:The output gate bias.
A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} and shape
[outputSize] specifying the bias for the fully-connected layer inside the LSTM cell. Bias is quantized with scale being a product of input and weights scales and zeroPoint equal to 0.
* 13: A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
and shape [numBatches, outputSize] specifying the cell state from the previous time step of the LSTM cell. It is quantized using a quantization range of [-2^4, 2^4 * 32767/32768] (scale = 2^4 /
32768, zeroPoint = 0).
* 14: A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [numBatches, outputSize] specifying the output of the LSTM cell from the previous time step. Tensor is quantized with a fixed quantization range of [-1, 127/128] (scale = 1/128, zeroPoint = 128).
Outputs:
* 0: A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
and shape [numBatches, outputSize] which contains a cell state from the current time step. Tensor is quantized using a quantization range of [-2^4, 2^4 * 32767/32768] (scale = 2^4 / 32768, zeroPoint =
0).
* 1: A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and shape [numBatches, outputSize] which contains the output value.
Tensor is quantized with a fixed quantization range of [-1, 127/128]
(scale = 1/128, zeroPoint = 128).
Available since API level 29.
### ANEURALNETWORKS_RANDOM_MULTINOMIAL
Draws samples from a multinomial distribution.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Inputs:
* 0: A 2-D tensor with shape [batches, classes], specifying the unnormalized log-probabilities for all classes.
* 1: A scalar {@link ANEURALNETWORKS_INT32}, specifying the number of independent samples to draw for each row slice.
* 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor with shape [2],
specifying seeds used to initialize the random distribution. If both provided seeds are 0, both will be randomly generated.
Outputs:
* 0: A 2-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor with shape
[batches, samples], containing the drawn samples.
Available since API level 29.
### ANEURALNETWORKS_REDUCE_ALL
Reduces a tensor by computing the “logical and” of elements along given dimensions.
If keep_dims is true, the reduced dimensions are retained with length 1. Otherwise, the rank of the tensor is reduced by 1 for each entry in dimensions.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor.
* 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions to reduce. Dimension values must be in the range [-n, n).
* 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true,
retains reduced dimensions with length 1.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
If all dimensions are reduced and keep_dims is false, the output shape is [1].
Available since API level 29.
### ANEURALNETWORKS_REDUCE_ANY
Reduces a tensor by computing the “logical or” of elements along given dimensions.
If keep_dims is true, the reduced dimensions are retained with length 1. Otherwise, the rank of the tensor is reduced by 1 for each entry in dimensions.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor.
* 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions to reduce. Dimension values must be in the range [-n, n).
* 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true,
retains reduced dimensions with length 1.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
If all dimensions are reduced and keep_dims is false, the output shape is [1].
Available since API level 29.
### ANEURALNETWORKS_REDUCE_MAX
Reduces a tensor by computing the maximum of elements along given dimensions.
If keep_dims is true, the reduced dimensions are retained with length 1. Otherwise, the rank of the tensor is reduced by 1 for each entry in dimensions.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor.
* 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions to reduce. Dimension values must be in the range [-n, n).
* 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true,
retains reduced dimensions with length 1.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
If all dimensions are reduced and keep_dims is false, the output shape is [1].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 29.
### ANEURALNETWORKS_REDUCE_MIN
Reduces a tensor by computing the minimum of elements along given dimensions.
If keep_dims is true, the reduced dimensions are retained with length 1. Otherwise, the rank of the tensor is reduced by 1 for each entry in dimensions.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor.
* 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions to reduce. Dimension values must be in the range [-n, n).
* 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true,
retains reduced dimensions with length 1.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
If all dimensions are reduced and keep_dims is false, the output shape is [1].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 29.
### ANEURALNETWORKS_REDUCE_PROD
Reduces a tensor by multiplying elements along given dimensions.
If keep_dims is true, the reduced dimensions are retained with length 1. Otherwise, the rank of the tensor is reduced by 1 for each entry in dimensions.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor.
* 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions to reduce. Dimension values must be in the range [-n, n).
* 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true,
retains reduced dimensions with length 1.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
If all dimensions are reduced and keep_dims is false, the output shape is [1].
Available since API level 29.
### ANEURALNETWORKS_REDUCE_SUM
Reduces a tensor by summing elements along given dimensions.
If keep_dims is true, the reduced dimensions are retained with length 1. Otherwise, the rank of the tensor is reduced by 1 for each entry in dimensions.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: up to 4
Inputs:
* 0: An n-D tensor.
* 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions to reduce. Dimension values must be in the range [-n, n).
* 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true,
retains reduced dimensions with length 1.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0.
If all dimensions are reduced and keep_dims is false, the output shape is [1].
Available since API level 29.
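The interaction of the reduced dimensions with keep_dims, shared by all the REDUCE_* operations above, can be sketched for a 2-D input (illustrative Python only):

```python
def reduce_sum_2d(t, axes, keep_dims):
    # Sums along the given axes; keep_dims retains reduced
    # dimensions with length 1 instead of dropping them.
    rows, cols = len(t), len(t[0])
    if axes == [0]:
        out = [[sum(t[r][c] for r in range(rows)) for c in range(cols)]]
        return out if keep_dims else out[0]
    if axes == [1]:
        out = [[sum(row)] for row in t]
        return out if keep_dims else [row[0] for row in out]
    total = sum(sum(row) for row in t)      # all dimensions reduced
    return [[total]] if keep_dims else [total]  # shape [1] when not kept

t = [[1, 2], [3, 4]]
assert reduce_sum_2d(t, [0], keep_dims=False) == [4, 6]
assert reduce_sum_2d(t, [1], keep_dims=True) == [[3], [7]]
assert reduce_sum_2d(t, [0, 1], keep_dims=False) == [10]
```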
### ANEURALNETWORKS_ROI_ALIGN
Select and scale the feature map of each region of interest to a unified output size by average pooling sampling points from bilinear interpolation.
The region of interest is represented by its upper-left corner coordinate
(x1,y1) and lower-right corner coordinate (x2,y2) in the original image.
A spatial scaling factor is applied to map into feature map coordinate.
A valid region of interest should satisfy x1 <= x2 and y1 <= y2.
No rounding is applied in this operation. The sampling points are uniformly distributed in the pooling bin and their values are calculated by bilinear interpolation.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
Inputs:
* 0: A 4-D tensor, specifying the feature map.
* 1: A 2-D Tensor of shape [num_rois, 4], specifying the locations of the regions of interest, each line with format [x1, y1, x2, y2].
For input0 of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM},
this tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM},
with zeroPoint of 0 and scale of 0.125. Zero num_rois is supported for this tensor.
* 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape
[num_rois], specifying the batch index of each box. Boxes with the same batch index are grouped together. Zero num_rois is supported for this tensor.
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output height of the output tensor.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output width of the output tensor.
* 5: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio from the height of original image to the height of feature map.
* 6: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio from the width of original image to the width of feature map.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the number of sampling points in height dimension used to compute the output.
Set to 0 for adaptive value of ceil(roi_height/out_height).
* 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the number of sampling points in width dimension used to compute the output.
Set to 0 for adaptive value of ceil(roi_width/out_width).
* 9: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify NCHW data layout for input0 and output0. Set to false for NHWC.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0. The output shape is [num_rois, out_height, out_width, depth].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint can be different from the input0 scale and zeroPoint.
Available since API level 29.
### ANEURALNETWORKS_ROI_POOLING
Select and scale the feature map of each region of interest to a unified output size by max-pooling.
The region of interest is represented by its upper-left corner coordinate
(x1,y1) and lower-right corner coordinate (x2,y2) in the original image.
A spatial scaling factor is applied to map into feature map coordinate.
A valid region of interest should satisfy x1 <= x2 and y1 <= y2.
Rounding is applied in this operation to ensure integer boundary for regions of interest and pooling bins.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
Inputs:
* 0: A 4-D tensor, specifying the feature map.
* 1: A 2-D Tensor of shape [num_rois, 4], specifying the locations of the regions of interest, each line with format [x1, y1, x2, y2].
For input0 of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
this tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM},
with zeroPoint of 0 and scale of 0.125.
* 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape
[num_rois], specifying the batch index of each box. Boxes with the same batch index are grouped together.
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output height of the output tensor.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output width of the output tensor.
* 5: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio from the height of original image to the height of feature map.
* 6: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio from the width of original image to the width of feature map.
* 7: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify NCHW data layout for input0 and output0. Set to false for NHWC.
Outputs:
* 0: A tensor of the same {@link OperandCode} as input0. The output shape is [num_rois, out_height, out_width, depth].
For input0 of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 29.
### ANEURALNETWORKS_RSQRT
Computes reciprocal of square root of x element-wise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: from 1.
Inputs:
* 0: A tensor.
Outputs:
* 0: The output tensor of same shape as input0.
Available since API level 29.
### ANEURALNETWORKS_SELECT
Using a tensor of booleans c and input tensors x and y select values elementwise from both input tensors:
output[i] = c[i] ? x[i] : y[i].
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: A tensor of type {@link ANEURALNETWORKS_TENSOR_BOOL8} acting as a mask that chooses, based on the value at each element, whether the corresponding element in the output should be taken from input1 (if true) or input2 (if false).
* 1: An input tensor of the same shape as input0.
* 2: An input tensor of the same shape and type as input1.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scales and zeroPoint can be different from input1 scale and zeroPoint.
Outputs:
* 0: A tensor of the same type and shape as input1 and input2.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} tensor,
the scale and zeroPoint can be different from inputs’ scale and zeroPoint.
Available since API level 29.
### ANEURALNETWORKS_SIN
Computes sin of x element-wise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: from 1.
Inputs:
* 0: A tensor.
Outputs:
* 0: The output tensor of same shape as input0.
Available since API level 29.
### ANEURALNETWORKS_SLICE
Extracts a slice of specified size from the input tensor starting at a specified location.
The starting location is specified as a 1-D tensor containing offsets for each dimension. The size is specified as a 1-D tensor containing either size of a slice along corresponding dimension or -1. In the latter case, all the remaining elements in dimension are included in the slice.
A sum of begin offset and a size of a slice must not exceed size of a corresponding dimension.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: An n-D tensor to take slice from, may be zero-sized.
* 1: A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} specifying the beginning indices of the slice in each dimension.
* 2: A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} specifying the size of the slice in each dimension.
Outputs:
* 0: An n-D tensor of the same type as the input containing the slice.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
its scale and zeroPoint have to be the same as the input0 scale and zeroPoint.
Available since API level 29.
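The begin/size semantics, including the -1 "all remaining elements" case, can be sketched for one dimension (illustrative Python only):

```python
def slice_1d(t, begin, size):
    # size == -1 means "all remaining elements" from begin onward;
    # otherwise begin + size must not exceed the dimension size.
    end = len(t) if size == -1 else begin + size
    assert end <= len(t), "begin + size must not exceed the dimension"
    return t[begin:end]

assert slice_1d([10, 20, 30, 40, 50], begin=1, size=3) == [20, 30, 40]
assert slice_1d([10, 20, 30, 40, 50], begin=2, size=-1) == [30, 40, 50]
```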
### ANEURALNETWORKS_SPLIT
Splits a tensor along a given axis into num_splits subtensors.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: An n-D tensor to split.
* 1: An {@link ANEURALNETWORKS_INT32} scalar specifying the axis along which to split.
* 2: An {@link ANEURALNETWORKS_INT32} scalar indicating the number of splits along given axis. Must evenly divide axis size.
Outputs:
* 0 ~ (num_splits - 1): Resulting subtensors.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 29.
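For a 1-D input, the even-division requirement and the resulting subtensors can be sketched as (illustrative Python only):

```python
def split_1d(t, num_splits):
    # num_splits must evenly divide the size of the split axis.
    n = len(t)
    assert n % num_splits == 0, "num_splits must evenly divide axis size"
    step = n // num_splits
    return [t[i:i + step] for i in range(0, n, step)]

assert split_1d([1, 2, 3, 4, 5, 6], 3) == [[1, 2], [3, 4], [5, 6]]
```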
### ANEURALNETWORKS_SQRT
Computes square root of x element-wise.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: from 1.
Inputs:
* 0: A tensor.
Outputs:
* 0: The output tensor of same shape as input0.
Available since API level 29.
### ANEURALNETWORKS_TILE
Constructs a tensor by tiling a given tensor.
This operation creates a new tensor by replicating `input` `multiples`
times. The output tensor’s i-th dimension has `input.dims(i) * multiples[i]`
elements, and the values of `input` are replicated `multiples[i]` times along the i-th dimension.
For example, tiling `[a b c d]` by `[2]` produces `[a b c d a b c d]`.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: input, an n-D tensor specifying the input.
* 1: multiples, a 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}.
The length of multiples must be n.
Outputs:
* 0: A tiled tensor of the same {@link OperandCode} and rank as `input`.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 29.
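The 1-D example from the text can be sketched as (illustrative Python only):

```python
def tile_1d(values, multiple):
    # The output has input.dims(0) * multiples[0] elements, with the
    # input values replicated `multiple` times along the dimension.
    return list(values) * multiple

# Tiling [a b c d] by [2] produces [a b c d a b c d].
assert tile_1d(["a", "b", "c", "d"], 2) == ["a", "b", "c", "d", "a", "b", "c", "d"]
```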
### ANEURALNETWORKS_TOPK_V2
Finds values and indices of the k largest entries for the last dimension.
Resulting values in each dimension are sorted in descending order. If two values are equal, the one with the larger index appears first.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: from 1
Inputs:
* 0: input, an n-D tensor specifying the input.
* 1: k, an {@link ANEURALNETWORKS_INT32} scalar, specifying the number of top elements to look for along the last dimension.
Outputs:
* 0: An n-D tensor of the same type as the input, containing the k largest elements along each last dimensional slice.
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
* 1: An n-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32}
containing the indices of values within the last dimension of input.
Available since API level 29.
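The value ordering and the tie-breaking rule (larger index first for equal values) can be sketched for a 1-D slice (illustrative Python only):

```python
def topk(values, k):
    # Sort indices by (value, index) descending: highest values first,
    # and on ties the larger index comes first, as specified above.
    order = sorted(range(len(values)),
                   key=lambda i: (values[i], i), reverse=True)[:k]
    return [values[i] for i in order], order

vals, idx = topk([1, 3, 3, 2], k=3)
assert vals == [3, 3, 2]
assert idx == [2, 1, 3]  # the tied 3 at index 2 precedes the one at index 1
```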
### ANEURALNETWORKS_TRANSPOSE_CONV_2D
Performs the transpose of 2-D convolution operation.
This operation is sometimes called “deconvolution” after Deconvolutional Networks, but is actually the transpose (gradient) of
{@link ANEURALNETWORKS_CONV_2D} rather than an actual deconvolution.
The output dimensions are functions of the filter dimensions, stride, and padding.
Supported tensor {@link OperandCode} configurations:
* 16 bit floating point:
* + {@link ANEURALNETWORKS_TENSOR_FLOAT16} for input, filter, output, and bias.
* 32 bit floating point:
* + {@link ANEURALNETWORKS_TENSOR_FLOAT32} for input, filter, output, and bias.
* Quantized:
* + {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, filter, and output.
* + {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to
* + input.scale * filter.scale).
* Quantized with symmetric per channel quantization for the filter:
* + {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, and output.
* + {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
* + {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0,
* + each value scaling is separate and equal to input.scale * filter.scales[channel]).
Available since API level 30:
* Quantized signed (since API level 30):
* + {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, filter, and output.
* + {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to
* + input.scale * filter.scale).
* Quantized signed with filter symmetric per channel quantization (since API level 30):
* + {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, and output.
* + {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
* + {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0, each value scaling is separate and equal to input.scale * filter.scales[channel]).
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
Both explicit padding and implicit padding are supported.
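The relationship stated above between output size, filter, stride, and padding can be sketched for one spatial dimension. `tconv_out_dim` is a hypothetical helper, and the formula is the usual transposed-convolution geometry, stated here as an assumption rather than quoted from the header:

```rust
// Spatial output size of a transposed convolution with explicit padding:
// out = (in - 1) * stride + filter - pad_begin - pad_end.
fn tconv_out_dim(input: i32, filter: i32, stride: i32, pad_begin: i32, pad_end: i32) -> i32 {
    (input - 1) * stride + filter - pad_begin - pad_end
}
```

For example, a 4x4 input with a 3x3 filter and stride 2 produces a 9x9 output with no padding, or 7x7 with one pixel of padding on each side.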
Inputs (explicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth_in],
specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: A 4-D tensor, of shape
[depth_out, filter_height, filter_width, depth_in], specifying the filter. For tensor of type
{@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} the channel dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim) must be set to 0.
* 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} or
{@link ANEURALNETWORKS_TENSOR_FLOAT16}, the bias must be of the same type.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32},
with zeroPoint of 0 and bias_scale == input_scale * filter_scale.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL},
the bias must be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale of 0. The actual scale of each value ‘i’ is equal to bias_scale[i] = input_scale * filter_scale[i].
* 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the left, in the ‘width’ dimension.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the right, in the ‘width’ dimension.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the top, in the ‘height’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on the bottom, in the ‘height’ dimension.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 10: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify NCHW data layout for input0 and output0. Set to false for NHWC.
Inputs (implicit padding):
* 0: A 4-D tensor, of shape [batches, height, width, depth_in],
specifying the input.
Since API level 29, zero batches is supported for this tensor.
* 1: A 4-D tensor, of shape
[depth_out, filter_height, filter_width, depth_in], specifying the filter. For tensor of type
{@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} the channel dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim) must be set to 0.
* 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} or
{@link ANEURALNETWORKS_TENSOR_FLOAT16}, the bias should be of the same type.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32},
with zeroPoint of 0 and bias_scale == input_scale * filter_scale.
For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL},
the bias must be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 and bias_scale of 0. The actual scale of each value ‘i’ is equal to bias_scale[i] = input_scale * filter_scale[i].
* 3: An {@link ANEURALNETWORKS_TENSOR_INT32} tensor, specifying the output tensor shape.
* 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit padding scheme, has to be one of the
{@link PaddingCode} values.
* 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘width’ dimension.
* 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when walking through input in the ‘height’ dimension.
* 7: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
{@link FuseCode} values. Specifies the activation to invoke on the result.
* 8: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify NCHW data layout for input0 and output0. Set to false for NHWC.
Outputs:
* 0: The output 4-D tensor, of shape
[batches, out_height, out_width, depth_out].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint can be different from inputs’ scale and zeroPoint.
Available since API level 29.
### ANEURALNETWORKS_UNIDIRECTIONAL_SEQUENCE_LSTM
A recurrent neural network specified by an LSTM cell.
Performs (fully) dynamic unrolling of input.
This Op unrolls the input along the time dimension, and implements the following operation for each element in the sequence s = 1…sequence_length:
outputs[s] = projection(state = activation(LSTMOp(inputs[s])))
Where LSTMOp is the LSTM op as in {@link ANEURALNETWORKS_LSTM},
the “projection” is an optional projection layer from state and output and the “activation” is the function passed as the
“fused_activation_function” argument (if not “NONE”).
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: 3, either time-major or batch-major.
All input and output tensors must be of the same type.
Inputs:
* 0: The input (\f$x_t\f$).
A 3-D tensor of shape:
If time-major: [max_time, batch_size, input_size]
If batch-major: [batch_size, max_time, input_size]
where “max_time” is the number of timesteps (sequence length),
“batch_size” corresponds to the batching dimension, and
“input_size” is the size of the input.
* 1: The input-to-input weights (\f$W_{xi}\f$). Optional.
A 2-D tensor of shape [num_units, input_size], where “num_units”
corresponds to the number of cell units.
* 2: The input-to-forget weights (\f$W_{xf}\f$).
A 2-D tensor of shape [num_units, input_size].
* 3: The input-to-cell weights (\f$W_{xc}\f$).
A 2-D tensor of shape [num_units, input_size].
* 4: The input-to-output weights (\f$W_{xo}\f$).
A 2-D tensor of shape [num_units, input_size].
* 5: The recurrent-to-input weights (\f$W_{hi}\f$). Optional.
A 2-D tensor of shape [num_units, output_size], where “output_size”
corresponds to either the number of cell units (i.e., “num_units”),
or the second dimension of the “projection_weights”, if defined.
* 6: The recurrent-to-forget weights (\f$W_{hf}\f$).
A 2-D tensor of shape [num_units, output_size].
* 7: The recurrent-to-cell weights (\f$W_{hc}\f$).
A 2-D tensor of shape [num_units, output_size].
* 8: The recurrent-to-output weights (\f$W_{ho}\f$).
A 2-D tensor of shape [num_units, output_size].
* 9: The cell-to-input weights (\f$W_{ci}\f$). Optional.
A 1-D tensor of shape [num_units].
* 10:The cell-to-forget weights (\f$W_{cf}\f$). Optional.
A 1-D tensor of shape [num_units].
* 11:The cell-to-output weights (\f$W_{co}\f$). Optional.
A 1-D tensor of shape [num_units].
* 12:The input gate bias (\f$b_i\f$). Optional.
A 1-D tensor of shape [num_units].
* 13:The forget gate bias (\f$b_f\f$).
A 1-D tensor of shape [num_units].
* 14:The cell bias (\f$b_c\f$).
A 1-D tensor of shape [num_units].
* 15:The output gate bias (\f$b_o\f$).
A 1-D tensor of shape [num_units].
* 16:The projection weights (\f$W_{proj}\f$). Optional.
A 2-D tensor of shape [output_size, num_units].
* 17:The projection bias (\f$b_{proj}\f$). Optional.
A 1-D tensor of shape [output_size].
* 18:The output state (in) (\f$h_{t-1}\f$).
A 2-D tensor of shape [batch_size, output_size].
* 19:The cell state (in) (\f$C_{t-1}\f$).
A 2-D tensor of shape [batch_size, num_units].
* 20:The activation function (\f$g\f$).
A value indicating the activation function:
+ 0: None;
+ 1: Relu;
+ 3: Relu6;
+ 4: Tanh;
+ 6: Sigmoid.
* 21:The clipping threshold (\f$t_{cell}\f$) for the cell state, such that values are bound within [-cell_clip, cell_clip]. If set to 0.0 then clipping is disabled.
* 22:The clipping threshold (\f$t_{proj}\f$) for the output from the projection layer, such that values are bound within
[-proj_clip, proj_clip]. If set to 0.0 then clipping is disabled.
* 23:Time-major if true, batch-major if false.
* 24:The input layer normalization weights. Optional.
A 1-D tensor of shape [num_units]. Used to rescale normalized inputs to activation at input gate.
* 25:The forget layer normalization weights. Optional.
A 1-D tensor of shape [num_units]. Used to rescale normalized inputs to activation at forget gate.
* 26:The cell layer normalization weights. Optional.
A 1-D tensor of shape [num_units]. Used to rescale normalized inputs to activation at cell gate.
* 27:The output layer normalization weights. Optional.
A 1-D tensor of shape [num_units]. Used to rescale normalized inputs to activation at output gate.
Outputs:
* 0: The output (\f$o_t\f$).
A 3-D tensor of shape:
If time-major: [max_time, batch_size, output_size]
If batch-major: [batch_size, max_time, output_size]
* 1: A tensor of shape [batch_size, output_size] containing a hidden state from the last time step in the sequence. This output is optional and can be omitted. If this output is present then output #2 must be present as well.
Available since API level 30.
* 2: A tensor of shape [batch_size, cell_size] containing a cell state from the last time step in the sequence. This output is optional and can be omitted.
Available since API level 30.
Available since API level 29.
Important: As of API level 29, there is no way to get the output state tensors out and NNAPI does not maintain internal states. This operator does not support the usage pattern in which multiple cells are chained and state tensors are propagated.
### ANEURALNETWORKS_UNIDIRECTIONAL_SEQUENCE_RNN
A recurrent neural network layer that applies a basic RNN cell to a sequence of inputs.
This layer unrolls the input along the sequence dimension, and implements the following operation for each element in the sequence s = 1…sequence_length:
outputs[s] = state = activation(inputs[s] * input_weights’ + state *
recurrent_weights’ + bias)
Where:
* “input_weights” is a weight matrix that multiplies the inputs;
* “recurrent_weights” is a weight matrix that multiplies the current
“state” which itself is the output from the previous time step computation;
* “bias” is a bias vector (added to each output vector in the batch);
* “activation” is the function passed as the “fused_activation_function”
argument (if not “NONE”).
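The per-step recurrence above can be sketched in plain float Rust, with RELU standing in for the fused activation. `rnn_step` is a hypothetical helper for illustration, not the nnapi_sys API:

```rust
// One step of the recurrence: state = activation(input * W' + state * R' + bias).
// `w` is [numUnits][inputSize]; `r` is [numUnits][numUnits].
fn rnn_step(input: &[f32], state: &[f32], w: &[Vec<f32>], r: &[Vec<f32>], bias: &[f32]) -> Vec<f32> {
    (0..bias.len())
        .map(|u| {
            let mut acc = bias[u];
            for (i, x) in input.iter().enumerate() {
                acc += x * w[u][i]; // input contribution
            }
            for (j, h) in state.iter().enumerate() {
                acc += h * r[u][j]; // recurrent contribution from previous state
            }
            acc.max(0.0) // fused RELU activation
        })
        .collect()
}
```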
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
The input tensors must all be the same type.
Inputs:
* 0: input.
A 3-D tensor. The shape is defined by the input 6 (timeMajor). If it is set to 1, then the input has a shape [maxTime, batchSize,
inputSize], otherwise the input has a shape [batchSize, maxTime,
inputSize].
* 1: weights.
A 2-D tensor of shape [numUnits, inputSize].
* 2: recurrent_weights.
A 2-D tensor of shape [numUnits, numUnits].
* 3: bias.
A 1-D tensor of shape [numUnits].
* 4: hidden state.
A 2-D tensor of shape [batchSize, numUnits]. Specifies a hidden state input for the first time step of the computation.
* 5: fusedActivationFunction.
A {@link FuseCode} value indicating the activation function. If
“NONE” is specified then it results in a linear activation.
* 6: timeMajor An {@link ANEURALNETWORKS_INT32} scalar specifying the shape format of input and output tensors. Must be set to either 0 or 1.
Outputs:
* 0: output.
A 3-D tensor. The shape is defined by the input 6 (timeMajor). If it is set to 1, then the output has a shape [maxTime, batchSize,
numUnits], otherwise the output has a shape [batchSize, maxTime,
numUnits].
* 1: A tensor of shape [batchSize, numUnits] containing hidden state from the last time step in the sequence. This output is optional and can be omitted.
Available since API level 30.
Available since API level 29.
Important: As of API level 29, there is no way to get the output state tensors out and NNAPI does not maintain internal states. This operator does not support the usage pattern in which multiple cells are chained and state tensors are propagated.
### ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR
Resizes images to the given size using nearest neighbor interpolation.
Resized images will be distorted if their output aspect ratio is not the same as the input aspect ratio. The corner pixels of the output may not be the same as the corner pixels of the input.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since API level 30)
Supported tensor rank: 4, with “NHWC” or “NCHW” data layout.
With the default data layout NHWC, the data is stored in the order of:
[batch, height, width, channels]. Alternatively, the data layout could be NCHW, the data storage order of: [batch, channels, height, width].
Both resizing by shape and resizing by scale are supported.
Inputs (resizing by shape):
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input. Zero batches is supported for this tensor.
* 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output width of the output tensor.
* 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output height of the output tensor.
* 3: An {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
* 4: Align corners. An optional {@link ANEURALNETWORKS_BOOL}
scalar, default to false. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels.
Available since API level 30.
* 5: Half pixel centers. An optional {@link ANEURALNETWORKS_BOOL}
scalar, default to false. If True, the pixel centers are assumed to be at (0.5, 0.5). This is the default behavior of image.resize in TF 2.0. If this parameter is True, then align_corners parameter must be False.
Available since API level 30.
Inputs (resizing by scale):
* 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input. Zero batches is supported for this tensor.
* 1: A scalar, specifying width_scale, the scaling factor of the width dimension from the input tensor to the output tensor. The output width is calculated as new_width = floor(width * width_scale).
The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of
{@link ANEURALNETWORKS_FLOAT32} otherwise.
* 2: A scalar, specifying height_scale, the scaling factor of the height dimension from the input tensor to the output tensor. The output height is calculated as new_height = floor(height * height_scale).
The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of
{@link ANEURALNETWORKS_FLOAT32} otherwise.
* 3: An {@link ANEURALNETWORKS_BOOL} scalar, default to false.
Set to true to specify NCHW data layout for input0 and output0.
* 4: Align corners. An optional {@link ANEURALNETWORKS_BOOL}
scalar, default to false. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels.
Available since API level 30.
* 5: Half pixel centers. An optional {@link ANEURALNETWORKS_BOOL}
scalar, default to false. If True, the pixel centers are assumed to be at (0.5, 0.5). This is the default behavior of image.resize in TF 2.0. If this parameter is True, then align_corners parameter must be False.
Available since API level 30.
Outputs:
* 0: The output 4-D tensor, of shape
[batches, new_height, new_width, depth].
For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
{@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
the scale and zeroPoint must be the same as input0.
Available since API level 29.
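The resize-by-scale geometry above can be sketched for one dimension. Both helpers are hypothetical; the output size formula is the one stated in inputs 1 and 2, while the source-index mapping is one common convention for the default case (align_corners and half_pixel_centers both false), assumed here rather than quoted from the header:

```rust
// Output size for resizing by scale: new_dim = floor(dim * scale).
fn resized_dim(dim: i32, scale: f32) -> i32 {
    (dim as f32 * scale).floor() as i32
}

// Nearest source index with the default flags:
// src = floor(dst * in_dim / out_dim).
fn nearest_src(dst: i32, in_dim: i32, out_dim: i32) -> i32 {
    (dst as f32 * in_dim as f32 / out_dim as f32).floor() as i32
}
```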
### ANEURALNETWORKS_QUANTIZED_LSTM
Quantized version of {@link ANEURALNETWORKS_LSTM}.
The input and the output use asymmetric quantized types, while the rest use symmetric ones.
Inputs:
* 0: The input to the LSTM cell.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
Shape: [batchSize, inputSize]
* 1: The input-to-input weights. Optional.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
Shape: [numUnits, inputSize]
* 2: The input-to-forget weights.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
Shape: [numUnits, inputSize]
* 3: The input-to-cell weights.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
Shape: [numUnits, inputSize]
* 4: The input-to-output weights.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
Shape: [numUnits, inputSize]
* 5: The recurrent-to-input weights. Optional.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
Shape: [numUnits, outputSize]
* 6: The recurrent-to-forget weights.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
Shape: [numUnits, outputSize]
* 7: The recurrent-to-cell weights.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
Shape: [numUnits, outputSize]
* 8: The recurrent-to-output weights.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
Shape: [numUnits, outputSize]
* 9: The cell-to-input weights (for peephole). Optional.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
Shape: [numUnits]
* 10: The cell-to-forget weights (for peephole). Optional.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
Shape: [numUnits]
* 11: The cell-to-output weights (for peephole). Optional.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
Shape: [numUnits]
* 12: The input gate bias. Quantized with scale being the product of input and weights scales and zeroPoint equal to 0.
Optional.
Type: {@link ANEURALNETWORKS_TENSOR_INT32}
Shape: [numUnits]
* 13: The forget gate bias. Quantized with scale being the product of input and weights scales and zeroPoint equal to 0.
Type: {@link ANEURALNETWORKS_TENSOR_INT32}
Shape: [numUnits]
* 14: The cell bias. Quantized with scale being the product of input and weights scales and zeroPoint equal to 0.
Type: {@link ANEURALNETWORKS_TENSOR_INT32}
Shape: [numUnits]
* 15: The output gate bias. Quantized with scale being the product of input and weights scales and zeroPoint equal to 0.
Type: {@link ANEURALNETWORKS_TENSOR_INT32}
Shape: [numUnits]
* 16: The projection weights. Optional.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
Shape: [outputSize, numUnits]
* 17: The projection bias. Quantized with scale being the product of input and weights scales and zeroPoint equal to 0.
Optional.
Type: {@link ANEURALNETWORKS_TENSOR_INT32}
Shape: [outputSize]
* 18: The output from the previous time step.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
Shape: [batchSize, outputSize]
* 19: The cell state from the previous time step.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
Shape: [batchSize, numUnits]
* 20: The input layer normalization weights. Used to rescale normalized inputs to activation at input gate. Optional.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
Shape: [numUnits]
* 21: The forget layer normalization weights. Used to rescale normalized inputs to activation at forget gate. Optional.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
Shape: [numUnits]
* 22: The cell layer normalization weights. Used to rescale normalized inputs to activation at cell gate. Optional.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
Shape: [numUnits]
* 23: The output layer normalization weights. Used to rescale normalized inputs to activation at output gate. Optional.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
Shape: [numUnits]
* 24: The cell clip. If provided the cell state is clipped by this value prior to the cell output activation. Optional.
Type: {@link ANEURALNETWORKS_FLOAT32}.
* 25: The projection clip. If provided and projection is enabled,
this is used for clipping the projected values. Optional.
Type: {@link ANEURALNETWORKS_FLOAT32}.
* 26: The scale of the intermediate result of matmul,
i.e. input to layer normalization, at input gate.
Type: {@link ANEURALNETWORKS_FLOAT32}.
* 27: The scale of the intermediate result of matmul,
i.e. input to layer normalization, at forget gate.
Type: {@link ANEURALNETWORKS_FLOAT32}.
* 28: The scale of the intermediate result of matmul,
i.e. input to layer normalization, at cell gate.
Type: {@link ANEURALNETWORKS_FLOAT32}.
* 29: The scale of the intermediate result of matmul,
i.e. input to layer normalization, at output gate.
Type: {@link ANEURALNETWORKS_FLOAT32}.
* 30: The zero point of the hidden state, i.e. input to projection.
Type: {@link ANEURALNETWORKS_INT32}.
* 31: The scale of the hidden state, i.e. input to projection.
Type: {@link ANEURALNETWORKS_FLOAT32}.
Outputs:
* 0: The output state (out).
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
Shape: [batchSize, outputSize]
* 1: The cell state (out).
Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
Shape: [batchSize, numUnits]
* 2: The output. This is effectively the same as the current
“output state (out)” value.
Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
Shape: [batchSize, outputSize]
Available since API level 30.
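Several bias inputs above (12 through 17) are quantized with scale fixed to the product of the input and weights scales, and zeroPoint 0. A minimal sketch of that quantization (`quantize_bias` is a hypothetical helper, not part of nnapi_sys):

```rust
// Quantize a real-valued bias to TENSOR_INT32 with
// bias_scale = input_scale * weights_scale and zeroPoint = 0.
fn quantize_bias(real: f32, input_scale: f32, weights_scale: f32) -> i32 {
    (real / (input_scale * weights_scale)).round() as i32
}
```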
### ANEURALNETWORKS_IF
Executes one of the two referenced models as determined by a boolean value.
The inputs and outputs of the two referenced models must agree with the signature of this operation. That is, if the operation has (3 + n) inputs and m outputs, both models must have n inputs and m outputs with the same types, ranks (if specified), dimensions (if specified), scales,
zeroPoints, and other operand parameters as the corresponding operation inputs and outputs.
Inputs:
* 0: A value of type {@link ANEURALNETWORKS_TENSOR_BOOL8} and shape [1]
that determines which of the two referenced models to execute.
The operand must have fully specified dimensions.
* 1: A {@link ANEURALNETWORKS_MODEL} reference to the model to be executed if the condition is true.
* 2: A {@link ANEURALNETWORKS_MODEL} reference to the model to be executed if the condition is false.
* 3 ~ (n + 2): Inputs to be passed to the model selected for execution.
Outputs:
* 0 ~ (m - 1): Outputs produced by the selected model.
Available since API level 30.
### ANEURALNETWORKS_WHILE
Executes the body model until the condition model outputs false.
The inputs to this operation are the condition model, the body model,
and operand values for the first iteration of the loop. The values are implicitly split into three groups of input-output, state-only, and input-only values, as described below.
The outputs of this operation are the final values of input-output operands.
Both the condition and body model receive (m + k + n) inputs.
* The first m (m >= 1) inputs are input-output operands. For the first iteration, these are initialized from the corresponding inputs of the WHILE operation. In subsequent iterations, their values come from the corresponding outputs of the body model produced during the previous iteration.
* The next k (k >= 0) inputs are state-only operands. They are similar to the input-output operands, except that their values are no longer available after the loop terminates.
* The last n (n >= 0) inputs are input-only operands. Their values come from the corresponding inputs of the WHILE operation.
The body model produces (m + k) outputs.
* The first m outputs are input-output operands. They become the outputs of the WHILE operation when a termination condition is reached.
* The last k outputs are state-only operands. Their values are no longer available after the loop terminates.
The numbers m, k, and n are inferred by the runtime as follows:
m = (WHILE operation output count)
k = (body model output count) - m
n = (body model input count) - m - k
The pseudo-code below illustrates the flow of a WHILE operation with inputs condition, body, initial_input_output, initial_state, input_only
(m = 1, k = 1, n = 1):
```
input_output = initial_input_output
state = initial_state
while condition(input_output, state, input_only):
    input_output, state = body(input_output, state, input_only)
return input_output
```
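The same control flow can be sketched in Rust, with closures standing in for the referenced condition and body models (m = 1 input-output, k = 1 state-only, n = 1 input-only operand; `nn_while` is a hypothetical helper for illustration):

```rust
// Only the m input-output operands survive as WHILE outputs; the k
// state-only operands are threaded through the loop but discarded at the end.
fn nn_while<C, B>(mut io: i32, mut state: i32, input_only: i32, cond: C, body: B) -> i32
where
    C: Fn(i32, i32, i32) -> bool,
    B: Fn(i32, i32, i32) -> (i32, i32),
{
    while cond(io, state, input_only) {
        let (next_io, next_state) = body(io, state, input_only);
        io = next_io;
        state = next_state;
    }
    io
}
```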
To prevent infinite loops, there is an implicit execution timeout associated with each loop (“loop timeout duration”). See {@link ANeuralNetworksExecution_setLoopTimeout}.
Inputs:
* 0: A {@link ANEURALNETWORKS_MODEL} reference to the condition model. The model must have (m + k + n) inputs with the same types, ranks (if specified), dimensions (if specified),
scales, zeroPoints, and other operand parameters as the corresponding inputs of the WHILE operation and exactly one output of {@link ANEURALNETWORKS_TENSOR_BOOL8} and shape [1].
The output operand must have fully specified dimensions.
* 1: A {@link ANEURALNETWORKS_MODEL} reference to the body model.
The model must have (m + k + n) inputs and (m + k) outputs with the same types, ranks (if specified), dimensions (if specified),
scales, zeroPoints, and other operand parameters as the corresponding inputs and outputs of the WHILE operation.
* (m inputs): Initial values for input-output operands.
* (k inputs): Initial values for state-only operands.
* (n inputs): Values for input-only operands.
Outputs:
* 0 ~ (m - 1): Outputs produced by the loop.
Available since API level 30.
### ANEURALNETWORKS_ELU
Computes exponential linear activation on the input tensor element-wise.
The output is calculated using the following formula:
```
ELU(x) = max(0, x) + min(0, alpha * (exp(x) - 1))
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
Supported tensor rank: from 1.
Inputs:
* 0: A tensor, specifying the input. May be zero-sized.
* 1: A scalar, specifying the alpha parameter.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16},
the alpha value must be of {@link ANEURALNETWORKS_FLOAT16}.
For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32},
the alpha value must be of {@link ANEURALNETWORKS_FLOAT32}.
Outputs:
* 0: The output tensor of same shape and type as input0.
Available since API level 30.
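The formula above reduces to a one-line element-wise function; a minimal sketch in plain Rust (not the NNAPI op itself):

```rust
// ELU(x) = max(0, x) + min(0, alpha * (exp(x) - 1)).
fn elu(x: f32, alpha: f32) -> f32 {
    x.max(0.0) + (alpha * (x.exp() - 1.0)).min(0.0)
}
```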
### ANEURALNETWORKS_HARD_SWISH
Computes hard-swish activation on the input tensor element-wise.
Hard swish activation is introduced in https://arxiv.org/pdf/1905.02244.pdf
The output is calculated using the following formula:
```
h-swish(x) = x * max(0, min(6, (x + 3))) / 6
```
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
Supported tensor rank: from 1.
Inputs:
* 0: A tensor, specifying the input. May be zero-sized.
Outputs:
* 0: The output tensor of same shape and type as input0.
Scale and zero point of this tensor may be different from the input tensor’s parameters.
Available since API level 30.
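As with ELU, the formula is a simple element-wise function; a minimal float sketch (not the quantized NNAPI op):

```rust
// h-swish(x) = x * max(0, min(6, x + 3)) / 6.
fn hard_swish(x: f32) -> f32 {
    x * (x + 3.0).clamp(0.0, 6.0) / 6.0
}
```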
### ANEURALNETWORKS_FILL
Creates a tensor filled with a scalar value.
Supported output tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
Supported tensor rank: from 1.
Inputs:
* 0: A 1-D tensor, specifying the desired output tensor shape.
* 1: A scalar, specifying the value to fill the output tensors with.
For output tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16},
the scalar must be of {@link ANEURALNETWORKS_FLOAT16}.
For output tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32},
the scalar must be of {@link ANEURALNETWORKS_FLOAT32}.
For output tensor of {@link ANEURALNETWORKS_TENSOR_INT32},
the scalar must be of {@link ANEURALNETWORKS_INT32}.
Outputs:
* 0: The output tensor.
Available since API level 30.
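The semantics above amount to: the element count is the product of the shape tensor, and every element is set to the scalar. A sketch with a hypothetical helper returning a flattened buffer:

```rust
// FILL: build a flattened output buffer from a 1-D shape tensor and a scalar.
fn fill(shape: &[i32], value: f32) -> Vec<f32> {
    let count: i32 = shape.iter().product();
    vec![value; count as usize]
}
```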
### ANEURALNETWORKS_RANK
Returns the rank of a tensor.
The rank of a tensor is the number of dimensions in it. Also known as
“order”, “degree”, “ndims”.
Supported tensor {@link OperandCode}:
* {@link ANEURALNETWORKS_TENSOR_FLOAT16}
* {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* {@link ANEURALNETWORKS_TENSOR_INT32}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
* {@link ANEURALNETWORKS_TENSOR_BOOL8}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}
* {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
* {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
Supported tensor rank: from 1.
Inputs:
* 0: The input tensor.
Outputs:
* 0: A scalar of {@link ANEURALNETWORKS_INT32}, specifying the rank of the input tensor.
Available since API level 30.
### ANEURALNETWORKS_BATCH_MATMUL
Performs multiplication of two tensors in batches.
Trait Implementations
---
### impl Debug for OperationCode
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for OperationCode
### impl Send for OperationCode
### impl Sync for OperationCode
### impl Unpin for OperationCode
### impl UnwindSafe for OperationCode
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum nnapi_sys::ResultCode
===
```
#[repr(C)]pub enum ResultCode {
ANEURALNETWORKS_NO_ERROR,
ANEURALNETWORKS_OUT_OF_MEMORY,
ANEURALNETWORKS_INCOMPLETE,
ANEURALNETWORKS_UNEXPECTED_NULL,
ANEURALNETWORKS_BAD_DATA,
ANEURALNETWORKS_OP_FAILED,
ANEURALNETWORKS_BAD_STATE,
ANEURALNETWORKS_UNMAPPABLE,
ANEURALNETWORKS_OUTPUT_INSUFFICIENT_SIZE,
ANEURALNETWORKS_UNAVAILABLE_DEVICE,
ANEURALNETWORKS_MISSED_DEADLINE_TRANSIENT,
ANEURALNETWORKS_MISSED_DEADLINE_PERSISTENT,
ANEURALNETWORKS_RESOURCE_EXHAUSTED_TRANSIENT,
ANEURALNETWORKS_RESOURCE_EXHAUSTED_PERSISTENT,
ANEURALNETWORKS_DEAD_OBJECT,
INVALID_ERROR,
}
```
Result codes.
Any NNAPI function can return any result code, including result codes not currently documented. Any value other than {@link ANEURALNETWORKS_NO_ERROR}
indicates a failure of some kind.
Additional information about the nature of a failure can be obtained from the device log after enabling NNAPI debugging by setting the debug.nn.vlog property to 1, e.g., by calling "adb shell setprop debug.nn.vlog 1".
Available since API level 27.
Variants
---
### ANEURALNETWORKS_NO_ERROR
Operation was successful.
### ANEURALNETWORKS_OUT_OF_MEMORY
Failure caused by not enough available memory.
### ANEURALNETWORKS_INCOMPLETE
Failure caused by the operation being incomplete.
### ANEURALNETWORKS_UNEXPECTED_NULL
Failure caused by unexpected null argument.
### ANEURALNETWORKS_BAD_DATA
Failure caused by invalid function arguments, invalid model definition,
invalid execution definition or invalid data at execution time.
### ANEURALNETWORKS_OP_FAILED
Failure caused by failed model execution.
### ANEURALNETWORKS_BAD_STATE
Failure caused by object being in the wrong state.
### ANEURALNETWORKS_UNMAPPABLE
Failure caused by not being able to map a file into memory.
This may be caused by a file descriptor not being mappable, or an AHardwareBuffer not supported by the device.
Mitigate by reading its content into memory.
### ANEURALNETWORKS_OUTPUT_INSUFFICIENT_SIZE
Failure caused by insufficient buffer size provided to a model output.
### ANEURALNETWORKS_UNAVAILABLE_DEVICE
Failure caused by a device not being available.
### ANEURALNETWORKS_MISSED_DEADLINE_TRANSIENT
Failure because a deadline could not be met for a task, but future deadlines may still be met for the same task after a short delay.
Available since API level 30.
### ANEURALNETWORKS_MISSED_DEADLINE_PERSISTENT
Failure because a deadline could not be met for a task, and future deadlines will likely also not be met for the same task even after a short delay.
Available since API level 30.
### ANEURALNETWORKS_RESOURCE_EXHAUSTED_TRANSIENT
Failure because of a resource limitation within the driver, but future calls for the same task may still succeed after a short delay.
Available since API level 30.
### ANEURALNETWORKS_RESOURCE_EXHAUSTED_PERSISTENT
Failure because of a resource limitation within the driver, and future calls for the same task will likely also fail even after a short delay.
Available since API level 30.
### ANEURALNETWORKS_DEAD_OBJECT
Failure indicating an object is in a dead state.
Available since API level 30.
### INVALID_ERROR
This error code is not defined in NNAPI.
Implementations
---
### impl ResultCode
#### pub fn as_str(&self) -> &'static str
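Since any NNAPI call can return any of these codes, bindings typically funnel every raw `c_int` through a single checking helper. A standalone sketch of that pattern, using a local stand-in enum rather than `nnapi_sys::ResultCode` so it runs without an Android toolchain:

```rust
// Stand-in for nnapi_sys::ResultCode: 0 (ANEURALNETWORKS_NO_ERROR) is the
// only success value; every other code signals some kind of failure.
#[derive(Debug, PartialEq)]
enum Status {
    NoError,
    Failed(i32), // the raw NNAPI result code
}

// With the real crate you would convert the raw c_int via ResultCode::from
// and match on ANEURALNETWORKS_NO_ERROR instead.
fn check(raw: i32) -> Status {
    if raw == 0 {
        Status::NoError
    } else {
        Status::Failed(raw)
    }
}

fn main() {
    assert_eq!(check(0), Status::NoError);
    assert_eq!(check(4), Status::Failed(4)); // 4 = ANEURALNETWORKS_BAD_DATA
}
```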
Trait Implementations
---
### impl Debug for ResultCode
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ResultCode
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ResultCode
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
1.0.0 · #### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string() Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, request: &mut Request<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type-based access to context intended for error reports.
### impl From<i32> for ResultCode
#### fn from(value: i32) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl RefUnwindSafe for ResultCode
### impl Send for ResultCode
### impl Sync for ResultCode
### impl Unpin for ResultCode
### impl UnwindSafe for ResultCode
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToString for T where
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Constant nnapi_sys::ANEURALNETWORKS_DEVICE_ACCELERATOR
===
```
pub const ANEURALNETWORKS_DEVICE_ACCELERATOR: DeviceTypeCode = 4;
```
Dedicated accelerator for Machine Learning workloads.
Constant nnapi_sys::ANEURALNETWORKS_DEVICE_CPU
===
```
pub const ANEURALNETWORKS_DEVICE_CPU: DeviceTypeCode = 2;
```
The device runs NNAPI models on single or multi-core CPU.
Constant nnapi_sys::ANEURALNETWORKS_DEVICE_GPU
===
```
pub const ANEURALNETWORKS_DEVICE_GPU: DeviceTypeCode = 3;
```
The device can run NNAPI models and also accelerate graphics APIs such as OpenGL ES and Vulkan.
Constant nnapi_sys::ANEURALNETWORKS_DEVICE_OTHER
===
```
pub const ANEURALNETWORKS_DEVICE_OTHER: DeviceTypeCode = 1;
```
The device does not fall into any category below.
Constant nnapi_sys::ANEURALNETWORKS_DEVICE_UNKNOWN
===
```
pub const ANEURALNETWORKS_DEVICE_UNKNOWN: DeviceTypeCode = 0;
```
The device type cannot be provided.
Constant nnapi_sys::ANEURALNETWORKS_FUSED_NONE
===
```
pub const ANEURALNETWORKS_FUSED_NONE: FuseCode = 0;
```
NO fused activation function.
Constant nnapi_sys::ANEURALNETWORKS_FUSED_RELU
===
```
pub const ANEURALNETWORKS_FUSED_RELU: FuseCode = 1;
```
Fused ReLU activation function.
Constant nnapi_sys::ANEURALNETWORKS_FUSED_RELU1
===
```
pub const ANEURALNETWORKS_FUSED_RELU1: FuseCode = 2;
```
Fused ReLU1 activation function.
Constant nnapi_sys::ANEURALNETWORKS_FUSED_RELU6
===
```
pub const ANEURALNETWORKS_FUSED_RELU6: FuseCode = 3;
```
Fused ReLU6 activation function.
Constant nnapi_sys::ANEURALNETWORKS_PADDING_SAME
===
```
pub const ANEURALNETWORKS_PADDING_SAME: PaddingCode = 1;
```
SAME padding.
Padding on both ends is the “same”:
padding_to_beginning = total_padding / 2
padding_to_end = (total_padding + 1) / 2
i.e., for an even amount of total padding, the padding on both ends is exactly the same; for an odd amount, the padding at the end is bigger than the padding at the beginning by 1.
total_padding is a function of input, stride, dilation and filter size.
It could be computed as follows:
out_size = (input + stride - 1) / stride
effective_filter_size = (filter_size - 1) * dilation + 1
needed_input = (out_size - 1) * stride + effective_filter_size
total_padding = max(0, needed_input - input_size)
The computation is the same for the horizontal and vertical directions.
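These formulas translate directly into integer arithmetic; a small sketch for one spatial dimension (the function name is illustrative, not part of the crate):

```rust
// SAME padding for one spatial dimension, following the NNAPI definition:
// the padding at the end exceeds the padding at the beginning by at most 1.
fn same_padding(input_size: u32, stride: u32, dilation: u32, filter_size: u32) -> (u32, u32) {
    let out_size = (input_size + stride - 1) / stride;
    let effective_filter_size = (filter_size - 1) * dilation + 1;
    let needed_input = (out_size - 1) * stride + effective_filter_size;
    let total_padding = needed_input.saturating_sub(input_size); // max(0, ...)
    (total_padding / 2, (total_padding + 1) / 2)
}

fn main() {
    // 3-wide filter over 5 inputs, stride 1: one element of padding per side.
    assert_eq!(same_padding(5, 1, 1, 3), (1, 1));
    // 4-wide filter: odd total padding, so the end gets the extra element.
    assert_eq!(same_padding(5, 1, 1, 4), (1, 2));
}
```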
Constant nnapi_sys::ANEURALNETWORKS_PADDING_VALID
===
```
pub const ANEURALNETWORKS_PADDING_VALID: PaddingCode = 2;
```
VALID padding.
No padding. When the input size is not evenly divisible by the filter size, the input at the end that could not fill the whole filter tile will simply be ignored.
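The resulting output size can be sketched the same way (helper name is illustrative):

```rust
// Output size under VALID padding: only positions where the (dilated) filter
// fully fits count; trailing input that cannot fill a whole tile is ignored.
fn valid_out_size(input_size: u32, stride: u32, dilation: u32, filter_size: u32) -> u32 {
    let effective_filter_size = (filter_size - 1) * dilation + 1;
    (input_size - effective_filter_size) / stride + 1
}

fn main() {
    assert_eq!(valid_out_size(5, 1, 1, 3), 3);
    assert_eq!(valid_out_size(5, 2, 1, 3), 2); // the last input column is dropped
}
```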
Constant nnapi_sys::ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER
===
```
pub const ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER: PreferenceCode = 1;
```
Prefer returning a single answer as fast as possible, even if this causes more power consumption.
Constant nnapi_sys::ANEURALNETWORKS_PREFER_LOW_POWER
===
```
pub const ANEURALNETWORKS_PREFER_LOW_POWER: PreferenceCode = 0;
```
Prefer executing in a way that minimizes battery drain.
This is desirable for compilations that will be executed often.
Constant nnapi_sys::ANEURALNETWORKS_PREFER_SUSTAINED_SPEED
===
```
pub const ANEURALNETWORKS_PREFER_SUSTAINED_SPEED: PreferenceCode = 2;
```
Prefer maximizing the throughput of successive frames, for example when processing successive frames coming from the camera.
Function nnapi_sys::ANeuralNetworksBurst_create
===
```
pub unsafe extern "C" fn ANeuralNetworksBurst_create(
compilation: *mutANeuralNetworksCompilation,
burst: *mut*mutANeuralNetworksBurst
) -> c_int
```
Create a {@link ANeuralNetworksBurst} to apply the given compilation.
This only creates the burst object. Computation is only performed once
{@link ANeuralNetworksExecution_burstCompute} is invoked with a valid
{@link ANeuralNetworksExecution} and {@link ANeuralNetworksBurst}.
The provided compilation must outlive the burst object.
Available since API level 29.
@param compilation The {@link ANeuralNetworksCompilation} to be evaluated.
@param burst The newly created object or NULL if unsuccessful.
@return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the compilation is invalid.
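The "compilation must outlive the burst" rule is exactly what a Rust lifetime parameter can encode in a safe wrapper. A sketch with stand-in types (no NDK needed; a real wrapper would hold the raw pointers and call the FFI functions):

```rust
use std::marker::PhantomData;

struct Compilation; // stand-in for a finished ANeuralNetworksCompilation

// Borrowing the compilation for 'c means the borrow checker rejects any
// program where the compilation is dropped while the burst is still alive.
struct Burst<'c>(PhantomData<&'c Compilation>);

impl<'c> Burst<'c> {
    fn create(_compilation: &'c Compilation) -> Burst<'c> {
        // a real wrapper would call ANeuralNetworksBurst_create here
        Burst(PhantomData)
    }
}

fn main() {
    let compilation = Compilation;
    let burst = Burst::create(&compilation);
    drop(burst); // must be dropped before (or together with) the compilation
}
```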
Function nnapi_sys::ANeuralNetworksBurst_free
===
```
pub unsafe extern "C" fn ANeuralNetworksBurst_free(
burst: *mutANeuralNetworksBurst
)
```
Destroys the burst object.
Available since API level 29.
@param burst The burst object to be destroyed. Passing NULL is acceptable and results in no operation.
Function nnapi_sys::ANeuralNetworksCompilation_create
===
```
pub unsafe extern "C" fn ANeuralNetworksCompilation_create(
model: *mutANeuralNetworksModel,
compilation: *mut*mutANeuralNetworksCompilation
) -> c_int
```
Create a {@link ANeuralNetworksCompilation} to compile the given model.
The model passed to this function is termed the “main model” of the compilation, to distinguish it from other models referred to by an Operand of type {@link ANEURALNETWORKS_MODEL} within this compilation.
This function only creates the object. Compilation is only performed once
{@link ANeuralNetworksCompilation_finish} is invoked.
{@link ANeuralNetworksCompilation_finish} should be called once all desired properties have been set on the compilation.
{@link ANeuralNetworksModel_free} should be called once the compilation is no longer needed.
The provided model must outlive the compilation.
The model must already have been finished by a call to
{@link ANeuralNetworksModel_finish}.
See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
Available since API level 27.
@param model The {@link ANeuralNetworksModel} to be compiled.
@param compilation The newly created object or NULL if unsuccessful.
@return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the model is invalid.
Function nnapi_sys::ANeuralNetworksCompilation_createForDevices
===
```
pub unsafe extern "C" fn ANeuralNetworksCompilation_createForDevices(
model: *mutANeuralNetworksModel,
devices: *const*constANeuralNetworksDevice,
numDevices: u32,
compilation: *mut*mutANeuralNetworksCompilation
) -> c_int
```
Create a {@link ANeuralNetworksCompilation} to compile the given model for a specified set of devices. If more than one device is specified, the compilation will distribute the workload automatically across the devices. The model must be fully supported by the specified set of devices. This means that ANeuralNetworksModel_getSupportedOperationsForDevices() must have returned true for every operation for that model/devices pair.
The user must handle all compilation and execution failures from the specified set of devices. This is in contrast to a use of {@link ANeuralNetworksCompilation_create}, where the runtime will attempt to recover from such failures.
The model passed to this function is termed the “main model” of the compilation, to distinguish it from other models referred to by an Operand of type {@link ANEURALNETWORKS_MODEL} within this compilation.
@param model The {@link ANeuralNetworksModel} to be compiled.
@param devices The set of devices. Must not contain duplicates.
@param numDevices The number of devices in the set.
@param compilation The newly created object or NULL if unsuccessful.
@return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the model is invalid.
Available since API level 29.
Function nnapi_sys::ANeuralNetworksCompilation_finish
===
```
pub unsafe extern "C" fn ANeuralNetworksCompilation_finish(
compilation: *mutANeuralNetworksCompilation
) -> c_int
```
Indicate that we have finished modifying a compilation. Required before calling {@link ANeuralNetworksBurst_create} or
{@link ANeuralNetworksExecution_create}.
An application must ensure that no other thread uses the compilation at the same time.
This function must only be called once for a given compilation.
If {@link ANeuralNetworksCompilation_setTimeout} was called on this compilation, and the compilation is not able to be finished before the timeout duration is exceeded, then compilation may be aborted, in which case
{@link ANEURALNETWORKS_MISSED_DEADLINE_*} will be returned.
See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
Available since API level 27.
@param compilation The compilation to be finished.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksCompilation_free
===
```
pub unsafe extern "C" fn ANeuralNetworksCompilation_free(
compilation: *mutANeuralNetworksCompilation
)
```
Destroy a compilation.
The compilation need not have been finished by a call to
{@link ANeuralNetworksCompilation_finish}.
See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
Available since API level 27.
@param compilation The compilation to be destroyed. Passing NULL is acceptable and results in no operation.
Function nnapi_sys::ANeuralNetworksCompilation_setCaching
===
```
pub unsafe extern "C" fn ANeuralNetworksCompilation_setCaching(
compilation: *mutANeuralNetworksCompilation,
cacheDir: *constc_char,
token: *constu8
) -> c_int
```
Sets the compilation caching signature and the cache directory.
Provides optional caching information to the runtime for faster repeated compilation.
See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
@param compilation The compilation to be modified.
@param cacheDir The cache directory for the runtime to store and retrieve caching data. It is recommended to use the code cache directory provided by the Android runtime. If not using the code cache directory, the user should choose a directory local to the application, and is responsible for managing the cache entries.
@param token The token provided by the user to specify a model must be of length ANEURALNETWORKS_BYTE_SIZE_OF_CACHE_TOKEN. The user should ensure that the token is unique to a model within the application. The NNAPI runtime cannot detect token collisions; a collision will result in a failed execution or in a successful execution that produces incorrect output values.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 29.
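In the NNAPI headers the required token length, ANEURALNETWORKS_BYTE_SIZE_OF_CACHE_TOKEN, is 32 bytes (check the crate's constant before relying on the literal below). A sketch of validating a token before passing it to this function:

```rust
// Mirror of ANEURALNETWORKS_BYTE_SIZE_OF_CACHE_TOKEN from the NNAPI headers
// (assumed value: 32).
const BYTE_SIZE_OF_CACHE_TOKEN: usize = 32;

// The runtime cannot detect token collisions, so the caller must both size
// the token correctly and keep it unique per model within the application.
fn valid_cache_token(token: &[u8]) -> bool {
    token.len() == BYTE_SIZE_OF_CACHE_TOKEN
}

fn main() {
    assert!(valid_cache_token(&[0u8; 32]));
    assert!(!valid_cache_token(b"too-short"));
}
```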
Function nnapi_sys::ANeuralNetworksCompilation_setPreference
===
```
pub unsafe extern "C" fn ANeuralNetworksCompilation_setPreference(
compilation: *mutANeuralNetworksCompilation,
preference: i32
) -> c_int
```
Sets the execution preference.
Provides guidance to the runtime when trade-offs are possible. By default the runtime uses PREFER_FAST_SINGLE_ANSWER.
See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
Available since API level 27.
@param compilation The compilation to be modified.
@param preference Either {@link PREFER_LOW_POWER},
{@link PREFER_FAST_SINGLE_ANSWER}, or
{@link PREFER_SUSTAINED_SPEED}.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksCompilation_setPriority
===
```
pub unsafe extern "C" fn ANeuralNetworksCompilation_setPriority(
compilation: *mutANeuralNetworksCompilation,
priority: c_int
) -> c_int
```
Set the execution priority.
Execution priorities are relative to other executions created by the same application (specifically same uid) for the same device. Specifically,
priorities of executions from one application will not affect executions from another application. Similarly, priorities of executions on one device will not affect executions on another device.
Higher priority executions may use more compute resources than lower priority executions, and may preempt or starve lower priority executions.
See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
Available since API level 30.
@param compilation The compilation to be modified.
@param priority The relative priority of the execution compared to other executions created by the application. Must be one of ANEURALNETWORKS_PRIORITY_*.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksCompilation_setTimeout
===
```
pub unsafe extern "C" fn ANeuralNetworksCompilation_setTimeout(
compilation: *mutANeuralNetworksCompilation,
duration: u64
) -> c_int
```
Set the maximum expected duration for compiling the model.
If the device is not able to complete the compilation within the specified duration, the compilation may be aborted. The timeout duration begins at the call to {@link ANeuralNetworksCompilation_finish}.
This timeout duration acts as a hint to drivers, and can be used to both free up compute resources within the driver and return control back to the application quicker than is possible without the hint. It enables drivers that are able to estimate how long a compilation will take to abort the compilation before it has even started if the driver believes the compilation cannot be completed within the timeout duration. Similarly, it enables drivers to abort an ongoing compilation if it is taking too long. However,
this call does not guarantee that the compilation will complete or abort within the timeout duration.
By default (i.e., unless ANeuralNetworksCompilation_setTimeout is called),
the timeout duration for compiling the model is considered infinite.
The {@link ANeuralNetworksCompilation} must have been created with
{@link ANeuralNetworksCompilation_createForDevices} with numDevices = 1,
otherwise this function will fail with ANEURALNETWORKS_BAD_DATA. If the device has a feature level reported by
{@link ANeuralNetworksDevice_getFeatureLevel} that is lower than 30, then the timeout duration hint will be ignored.
See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
@param compilation The compilation to be modified.
@param duration The maximum amount of time in nanoseconds that is expected to be spent finishing a compilation. If this duration is exceeded, the compilation may be aborted. If set to 0, the timeout duration is considered infinite.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 30.
Function nnapi_sys::ANeuralNetworksDevice_getFeatureLevel
===
```
pub unsafe extern "C" fn ANeuralNetworksDevice_getFeatureLevel(
device: *constANeuralNetworksDevice,
featureLevel: *muti64
) -> c_int
```
Get the supported NNAPI version of the specified device.
Each device has a supported feature level, which is the most advanced feature this driver implements. For example, if the driver implements the features introduced in Android P,
but does not implement the features introduced after Android P, the value would be 28.
Developers could decide whether or not the specified device should be used for a Model that has certain feature requirements.
@param device The representation of the specified device.
@param featureLevel The API level of the most advanced feature this driver implements.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 29.
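A typical use is filtering candidate devices by the feature level a model requires. A sketch over already-queried (index, feature level) pairs (the values below are illustrative):

```rust
// Keep only devices whose reported feature level meets the requirement.
fn select_devices(levels: &[(usize, i64)], required: i64) -> Vec<usize> {
    levels
        .iter()
        .filter(|&&(_, level)| level >= required)
        .map(|&(index, _)| index)
        .collect()
}

fn main() {
    // Device 0 reports Android P (28); device 1 reports Android R (30).
    assert_eq!(select_devices(&[(0, 28), (1, 30)], 29), vec![1]);
}
```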
Function nnapi_sys::ANeuralNetworksDevice_getName
===
```
pub unsafe extern "C" fn ANeuralNetworksDevice_getName(
device: *constANeuralNetworksDevice,
name: *mut*constc_char
) -> c_int
```
Get the name of the specified device.
@param device The representation of the specified device.
@param name The returned name of the specified device. The name will be in UTF-8 and will be null-terminated. It will be recognizable as a known device name rather than a cryptic string. For devices with feature level reported by
{@link ANeuralNetworksDevice_getFeatureLevel} that is 29 and above, the format of the name is {VENDOR}-{DEVICE}. For devices with feature level 28 or lower, the format of the name is undefined.
The name will remain valid for the duration of the application.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 29.
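For feature level 29 and above the name follows {VENDOR}-{DEVICE}, so splitting on the first `-` recovers the two parts (assuming the vendor part itself contains no `-`; the names below are made up):

```rust
// Split a feature-level-29+ device name into (vendor, device).
fn split_device_name(name: &str) -> Option<(&str, &str)> {
    name.split_once('-')
}

fn main() {
    assert_eq!(split_device_name("acme-npu"), Some(("acme", "npu")));
    assert_eq!(split_device_name("legacy"), None); // pre-29 formats are undefined
}
```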
Function nnapi_sys::ANeuralNetworksDevice_getType
===
```
pub unsafe extern "C" fn ANeuralNetworksDevice_getType(
device: *constANeuralNetworksDevice,
type_: *muti32
) -> c_int
```
Get the type of a given device.
The device type can be used to help application developers to distribute Machine Learning workloads and other workloads such as graphical rendering.
E.g., for an app which renders AR scenes based on real time object detection results,
the developer could choose an ACCELERATOR type device for ML workloads, and reserve GPU for graphical rendering.
@param device The representation of the specified device.
@param type The returned {@link DeviceTypeCode} of the specified device.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 29.
Function nnapi_sys::ANeuralNetworksDevice_getVersion
===
```
pub unsafe extern "C" fn ANeuralNetworksDevice_getVersion(
device: *constANeuralNetworksDevice,
version: *mut*constc_char
) -> c_int
```
Get the version of the driver implementation of the specified device.
It’s the responsibility of the driver implementor to ensure that this version string uniquely distinguishes this implementation from all previous implementations.
This version string must not be confused with the feature level which is solely defined by {@link ANeuralNetworksDevice_getFeatureLevel}. There is no implicit ordering of the versions.
For example, it is not possible to filter all drivers older than a certain version.
Application developers may use this version string to avoid or prefer specific driver implementations. For example, an application may want to do so because:
- A specific version of the driver does not provide the required performance,
perhaps because of a performance regression.
- A specific version of the driver has a bug or returns results that don’t match the minimum precision requirement for the application.
@param device The representation of the specified device.
@param version The returned version string of the driver for the specified device. The string will be in UTF-8 and will be null-terminated. For devices with feature level 28 or lower, “UNKNOWN” will be returned. The version string will remain valid for the duration of the application.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 29.
Function nnapi_sys::ANeuralNetworksDevice_wait
===
```
pub unsafe extern "C" fn ANeuralNetworksDevice_wait(
device: *constANeuralNetworksDevice
) -> c_int
```
Wait until the device is in a live state.
A device may encounter internal errors and temporarily enter a dead state. A call that uses a device in such a state will return with the error
{@link ANEURALNETWORKS_DEAD_OBJECT}. ANeuralNetworksDevice_wait will block until the device is in a live state.
@param device The representation of the specified device.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 30.
Function nnapi_sys::ANeuralNetworksEvent_createFromSyncFenceFd
===
```
pub unsafe extern "C" fn ANeuralNetworksEvent_createFromSyncFenceFd(
sync_fence_fd: c_int,
event: *mut*mutANeuralNetworksEvent
) -> c_int
```
Create a {@link ANeuralNetworksEvent} from a sync_fence file descriptor.
The newly created ANeuralNetworksEvent does not take ownership of the provided sync_fence_fd,
it will instead dup the provided sync_fence_fd and own the duplicate.
@param sync_fence_fd The sync_fence file descriptor.
@param event The newly created object or NULL if unsuccessful.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 30.
Function nnapi_sys::ANeuralNetworksEvent_free
===
```
pub unsafe extern "C" fn ANeuralNetworksEvent_free(
event: *mutANeuralNetworksEvent
)
```
Destroys the event.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
Available since API level 27.
@param event The event object to be destroyed. Passing NULL is acceptable and results in no operation.
Function nnapi_sys::ANeuralNetworksEvent_getSyncFenceFd
===
```
pub unsafe extern "C" fn ANeuralNetworksEvent_getSyncFenceFd(
event: *constANeuralNetworksEvent,
sync_fence_fd: *mutc_int
) -> c_int
```
Get sync_fence file descriptor from the event.
If the ANeuralNetworksEvent is not backed by a sync fence, the sync_fence_fd will be set to -1, and ANEURALNETWORKS_BAD_DATA will be returned.
See {@link ANeuralNetworksEvent_createFromSyncFenceFd} and
{@link ANeuralNetworksExecution_startComputeWithDependencies} to see how to create an event backed by a sync fence.
The user takes ownership of the returned fd, and must close the returned file descriptor when it is no longer needed.
@param event An event that is backed by a sync fence.
@param sync_fence_fd The sync_fence file descriptor. The file descriptor will be set to -1 if there is an error.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 30.
Function nnapi_sys::ANeuralNetworksEvent_wait
===
```
pub unsafe extern "C" fn ANeuralNetworksEvent_wait(
event: *mutANeuralNetworksEvent
) -> c_int
```
Waits until the execution completes.
More than one thread can wait on an event. When the execution completes,
all threads will be released.
If {@link ANeuralNetworksExecution_setTimeout} was called on the execution corresponding to this event, and the execution is not able to complete before the duration is exceeded, the execution may be aborted, in which case
{@link ANEURALNETWORKS_MISSED_DEADLINE_*} will be returned here.
If the execution contains a {@link ANEURALNETWORKS_WHILE} operation, and the condition model does not output false within the loop timeout duration,
the execution will be aborted, and {@link ANEURALNETWORKS_MISSED_DEADLINE_*}
will be returned here.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
Available since API level 27.
@param event The event that will be signaled on completion.
@return ANEURALNETWORKS_NO_ERROR if the execution completed normally.
ANEURALNETWORKS_UNMAPPABLE if the execution input or output memory cannot be properly mapped.
Function nnapi_sys::ANeuralNetworksExecution_burstCompute
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_burstCompute(
execution: *mutANeuralNetworksExecution,
burst: *mutANeuralNetworksBurst
) -> c_int
```
Schedule synchronous evaluation of the execution on a burst object.
Schedules synchronous evaluation of the execution. Returns once the execution has completed and the outputs are ready to be consumed.
If {@link ANeuralNetworksExecution_setTimeout} was called on the execution,
and the execution is not able to complete before the timeout duration is exceeded, then execution may be aborted, in which case
{@link ANEURALNETWORKS_MISSED_DEADLINE_*} will be returned.
If the execution contains a {@link ANEURALNETWORKS_WHILE} operation, and the condition model does not output false within the loop timeout duration,
then execution will be aborted and {@link ANEURALNETWORKS_MISSED_DEADLINE_*}
will be returned. If the device has a feature level reported by
{@link ANeuralNetworksDevice_getFeatureLevel} that is lower than 30, then the timeout duration hint will be ignored.
There must be at most one {@link ANeuralNetworksExecution} processing at any given time for any given burst object. Any
{@link ANeuralNetworksExecution} launched before the previous has finished will result in ANEURALNETWORKS_BAD_STATE.
See {@link ANeuralNetworksExecution_compute} for synchronous execution.
See {@link ANeuralNetworksExecution_startCompute} for regular asynchronous execution.
See {@link ANeuralNetworksExecution_startComputeWithDependencies} for asynchronous execution with dependencies.
Available since API level 29.
@param burst The burst object to execute on.
@param execution The execution to be scheduled and executed. The execution must be created from the same {@link ANeuralNetworksCompilation} as the burst object.
@return ANEURALNETWORKS_NO_ERROR if the execution completed normally.
Function nnapi_sys::ANeuralNetworksExecution_compute
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_compute(
execution: *mutANeuralNetworksExecution
) -> c_int
```
Schedule synchronous evaluation of the execution.
Schedules synchronous evaluation of the execution. Returns once the execution has completed and the outputs are ready to be consumed.
If {@link ANeuralNetworksExecution_setTimeout} was called on this execution,
and the execution is not able to complete before the timeout duration is exceeded, then execution may be aborted, in which case
{@link ANEURALNETWORKS_MISSED_DEADLINE_*} will be returned. If the device has a feature level reported by {@link ANeuralNetworksDevice_getFeatureLevel}
that is lower than 30, then the timeout duration hint will be ignored.
If this execution contains a {@link ANEURALNETWORKS_WHILE} operation, and the condition model does not output false within the loop timeout duration,
then execution will be aborted and {@link ANEURALNETWORKS_MISSED_DEADLINE_*}
will be returned.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
See {@link ANeuralNetworksExecution_burstCompute} for burst synchronous execution.
See {@link ANeuralNetworksExecution_startCompute} for regular asynchronous execution.
See {@link ANeuralNetworksExecution_startComputeWithDependencies} for asynchronous execution with dependencies.
Available since API level 29.
@param execution The execution to be scheduled and executed.
@return ANEURALNETWORKS_NO_ERROR if the execution completed normally.
ANEURALNETWORKS_UNMAPPABLE if the execution input or output memory cannot be properly mapped.
Function nnapi_sys::ANeuralNetworksExecution_create
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_create(
compilation: *mutANeuralNetworksCompilation,
execution: *mut*mutANeuralNetworksExecution
) -> c_int
```
Create a {@link ANeuralNetworksExecution} to apply the given compilation.
This only creates the object. Computation is only performed once
{@link ANeuralNetworksExecution_burstCompute},
{@link ANeuralNetworksExecution_compute},
{@link ANeuralNetworksExecution_startCompute} or
{@link ANeuralNetworksExecution_startComputeWithDependencies} is invoked.
The provided compilation must outlive the execution.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
Available since API level 27.
@param compilation The {@link ANeuralNetworksCompilation} to be evaluated.
@param execution The newly created object or NULL if unsuccessful.
@return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the compilation is invalid.
Function nnapi_sys::ANeuralNetworksExecution_free
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_free(
execution: *mutANeuralNetworksExecution
)
```
Destroy an execution.
The execution need not have been scheduled by a call to
{@link ANeuralNetworksExecution_burstCompute},
{@link ANeuralNetworksExecution_compute},
{@link ANeuralNetworksExecution_startCompute} or
{@link ANeuralNetworksExecution_startComputeWithDependencies}; but if it has been scheduled,
then the application must not call {@link ANeuralNetworksExecution_free}
until the execution has completed (i.e.,
{@link ANeuralNetworksExecution_burstCompute},
{@link ANeuralNetworksExecution_compute}, or
{@link ANeuralNetworksEvent_wait} has returned).
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
Available since API level 27.
@param execution The execution to be destroyed. Passing NULL is acceptable and results in no operation.
Function nnapi_sys::ANeuralNetworksExecution_getDuration
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_getDuration(
execution: *constANeuralNetworksExecution,
durationCode: i32,
duration: *mutu64
) -> c_int
```
Get the time spent in the specified {@link ANeuralNetworksExecution}, in nanoseconds.
The execution must have completed. On asynchronous execution initiated by
{@link ANeuralNetworksExecution_startCompute} or
{@link ANeuralNetworksExecution_startComputeWithDependencies},
{@link ANeuralNetworksEvent_wait} must be called prior to this function.
@param execution The execution to be queried.
@param durationCode The measurement to be queried, specified by {@link DurationCode}.
@param duration The returned duration. If no measurement was requested by
{@link ANeuralNetworksExecution_setMeasureTiming}, if the device has a feature level reported by
{@link ANeuralNetworksDevice_getFeatureLevel} that is lower than 29, or for some other reason the duration is not available, UINT64_MAX will be returned. A particular device need not support any given measurement.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 29.
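Since UINT64_MAX is the sentinel for "duration unavailable", a thin wrapper can map the raw value to an Option (a sketch, not part of the crate):

```rust
// u64::MAX (UINT64_MAX) means the measurement was not requested, is not
// supported by the device, or is otherwise unavailable.
fn duration_ns(raw: u64) -> Option<u64> {
    if raw == u64::MAX {
        None
    } else {
        Some(raw)
    }
}

fn main() {
    assert_eq!(duration_ns(u64::MAX), None);
    assert_eq!(duration_ns(1_250_000), Some(1_250_000)); // 1.25 ms
}
```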
Function nnapi_sys::ANeuralNetworksExecution_getOutputOperandDimensions
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_getOutputOperandDimensions(
    execution: *mut ANeuralNetworksExecution,
    index: i32,
    dimensions: *mut u32
) -> c_int
```
Get the dimensional information of the specified output operand of the model of the
{@link ANeuralNetworksExecution}. The target output operand cannot be a scalar.
The execution must have completed. On asynchronous execution initiated by
{@link ANeuralNetworksExecution_startCompute} or
{@link ANeuralNetworksExecution_startComputeWithDependencies},
{@link ANeuralNetworksEvent_wait} must be called prior to this function.
@param execution The execution to be queried.
@param index The index of the output argument we are querying. It is an index into the lists passed to {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not the index associated with {@link ANeuralNetworksModel_addOperand}.
@param dimensions The dimension array to be filled. The size of the array must be exactly as large as the rank of the output operand to be queried in the model.
@return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_OUTPUT_INSUFFICIENT_SIZE if the target output is provided an insufficient buffer at execution time,
ANEURALNETWORKS_BAD_DATA if the index is invalid or if the target is a scalar.
Available since API level 29.
Function nnapi_sys::ANeuralNetworksExecution_getOutputOperandRank
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_getOutputOperandRank(
    execution: *mut ANeuralNetworksExecution,
    index: i32,
    rank: *mut u32
) -> c_int
```
Get the dimensional information of the specified output operand of the model of the
{@link ANeuralNetworksExecution}.
The execution must have completed. On asynchronous execution initiated by
{@link ANeuralNetworksExecution_startCompute} or
{@link ANeuralNetworksExecution_startComputeWithDependencies},
{@link ANeuralNetworksEvent_wait} must be called prior to this function.
@param execution The execution to be queried.
@param index The index of the output argument we are querying. It is an index into the lists passed to
{@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not the index associated with {@link ANeuralNetworksModel_addOperand}.
@param rank The rank of the output operand.
@return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_OUTPUT_INSUFFICIENT_SIZE if the target output is provided an insufficient buffer at execution time,
ANEURALNETWORKS_BAD_DATA if the index is invalid.
Available since API level 29.
Function nnapi_sys::ANeuralNetworksExecution_setInput
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_setInput(
    execution: *mut ANeuralNetworksExecution,
    index: i32,
    type_: *const ANeuralNetworksOperandType,
    buffer: *const c_void,
    length: usize
) -> c_int
```
Associate a user buffer with an input of the model of the
{@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the buffer until the execution has completed. Evaluation of the execution will not change the content of the buffer.
The provided buffer must outlive the execution.
If the input is optional, you can indicate that it is omitted by passing nullptr for buffer and 0 for length.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
Available since API level 27.
@param execution The execution to be modified.
@param index The index of the input argument we are setting. It is an index into the lists passed to
{@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not the index associated with
{@link ANeuralNetworksModel_addOperand}.
@param type The {@link ANeuralNetworksOperandType} of the operand. Unless the input is omitted, this should be used to specify the dimensions that were left unspecified when the operand was added to the model. All other properties of the type must be the same as specified in the model. If the type is the same as specified when the model was built, NULL can be passed. Neither the {@link ANeuralNetworksOperandType}
nor the dimensions it points to need to outlive the call to {@link ANeuralNetworksExecution_setInput}.
@param buffer The buffer containing the data.
@param length The length in bytes of the buffer.
@return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the name is not recognized or the buffer is too small for the input.
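The `length` argument is the byte size of a tightly packed buffer for the operand. A sketch of computing it from the operand's dimensions, using a hypothetical helper that is not part of nnapi_sys:

```rust
/// Hypothetical helper (not in nnapi_sys): compute the `length` argument
/// for ANeuralNetworksExecution_setInput for a tightly packed tensor,
/// given its dimensions and the byte size of one element.
fn tensor_byte_len(dims: &[u32], elem_size: usize) -> usize {
    dims.iter().map(|&d| d as usize).product::<usize>() * elem_size
}

fn main() {
    // A 1x224x224x3 float32 tensor: 150528 elements of 4 bytes each.
    assert_eq!(tensor_byte_len(&[1, 224, 224, 3], 4), 602_112);
    println!("ok");
}
```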
Function nnapi_sys::ANeuralNetworksExecution_setInputFromMemory
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_setInputFromMemory(
    execution: *mut ANeuralNetworksExecution,
    index: i32,
    type_: *const ANeuralNetworksOperandType,
    memory: *const ANeuralNetworksMemory,
    offset: usize,
    length: usize
) -> c_int
```
Associate a region of a memory object with an input of the model of the
{@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the region until the execution has completed. Evaluation of the execution will not change the content of the region.
The provided memory must outlive the execution.
If the input is optional, you can indicate that it is omitted by using {@link ANeuralNetworksExecution_setInput} instead, passing nullptr for buffer and 0 for length.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
See {@link ANeuralNetworksMemory_createFromAHardwareBuffer} for information on AHardwareBuffer usage.
See {@link ANeuralNetworksMemory_createFromDesc} for information on usage of memory objects created from memory descriptors.
Available since API level 27.
@param execution The execution to be modified.
@param index The index of the input argument we are setting. It is an index into the lists passed to
{@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not the index associated with {@link ANeuralNetworksModel_addOperand}.
@param type The {@link ANeuralNetworksOperandType} of the operand. This should be used to specify the dimensions that were left unspecified when the operand was added to the model. All other properties of the type must be the same as specified in the model. If the type is the same as specified when the model was built, NULL can be passed. Neither the {@link ANeuralNetworksOperandType}
nor the dimensions it points to need to outlive the call to {@link ANeuralNetworksExecution_setInputFromMemory}.
@param memory The memory containing the data.
@param offset This specifies the location of the data within the memory.
The offset is in bytes from the start of memory.
@param length The size in bytes of the data value.
@return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the name is not recognized or the buffer is too small for the input.
Function nnapi_sys::ANeuralNetworksExecution_setLoopTimeout
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_setLoopTimeout(
    execution: *mut ANeuralNetworksExecution,
    duration: u64
) -> c_int
```
Set the maximum duration of WHILE loops in the specified execution.
This is a fuzzy per-loop timeout intended to prevent infinite loops.
If a WHILE loop condition model does not output false within the specified duration, the execution will be aborted.
See {@link ANeuralNetworks_getDefaultLoopTimeout} and
{@link ANeuralNetworks_getMaximumLoopTimeout} for the default and maximum timeout values.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
@param execution The execution to be modified.
@param duration The maximum amount of time in nanoseconds that can be spent executing a WHILE loop. If the specified duration value exceeds the value produced by {@link ANeuralNetworks_getMaximumLoopTimeout}, it will be overridden by that value.
@return ANEURALNETWORKS_NO_ERROR if successful.
ANEURALNETWORKS_BAD_STATE if execution has started.
ANEURALNETWORKS_UNEXPECTED_NULL if execution is NULL.
Available since API level 30.
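The override rule for out-of-range requests can be sketched as a simple clamp, where `max_timeout` stands in for the value that {@link ANeuralNetworks_getMaximumLoopTimeout} would return on the device:

```rust
/// Sketch of the documented clamping behaviour: a requested loop timeout
/// above the platform maximum is overridden by that maximum.
/// `max_timeout` is a stand-in for ANeuralNetworks_getMaximumLoopTimeout.
fn effective_loop_timeout(requested: u64, max_timeout: u64) -> u64 {
    requested.min(max_timeout)
}

fn main() {
    assert_eq!(effective_loop_timeout(5, 10), 5);
    assert_eq!(effective_loop_timeout(50, 10), 10);
    println!("ok");
}
```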
Function nnapi_sys::ANeuralNetworksExecution_setMeasureTiming
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_setMeasureTiming(
    execution: *mut ANeuralNetworksExecution,
    measure: bool
) -> c_int
```
Specifies whether duration of the {@link ANeuralNetworksExecution} is to be measured. Evaluation of the execution must not have been scheduled.
By default, duration is not measured.
The {@link ANeuralNetworksExecution} must have been created from an
{@link ANeuralNetworksCompilation} which in turn was created from
{@link ANeuralNetworksCompilation_createForDevices} with numDevices = 1.
If the device has a feature level reported by
{@link ANeuralNetworksDevice_getFeatureLevel} that is lower than 29, then the duration will not be measured.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
Available since API level 29.
@param execution The execution to be modified.
@param measure ‘true’ if duration is to be measured, ‘false’ if not.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksExecution_setOutput
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_setOutput(
    execution: *mut ANeuralNetworksExecution,
    index: i32,
    type_: *const ANeuralNetworksOperandType,
    buffer: *mut c_void,
    length: usize
) -> c_int
```
Associate a user buffer with an output of the model of the
{@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the buffer until the execution has completed.
If the output is optional, you can indicate that it is omitted by passing nullptr for buffer and 0 for length.
The provided buffer must outlive the execution.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
Available since API level 27.
@param execution The execution to be modified.
@param index The index of the output argument we are setting. It is an index into the lists passed to
{@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not the index associated with {@link ANeuralNetworksModel_addOperand}.
@param type The {@link ANeuralNetworksOperandType} of the operand. Unless the output is omitted, this should be used to specify the dimensions that were left unspecified when the operand was added to the model. All other properties of the type must be the same as specified in the model. If the type is the same as specified when the model was built, NULL can be passed. Neither the {@link ANeuralNetworksOperandType}
nor the dimensions it points to need to outlive the call to {@link ANeuralNetworksExecution_setOutput}.
Since API level 29, the output operand can have unspecified dimensions or rank to be deduced dynamically during the execution.
However, the user must provide a large enough buffer. The user can retrieve the output dimensional information after the execution by {@link ANeuralNetworksExecution_getOutputOperandRank} and
{@link ANeuralNetworksExecution_getOutputOperandDimensions}.
@param buffer The buffer where the data is to be written.
@param length The length in bytes of the buffer.
@return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the name is not recognized or the buffer is too small for the output.
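The "buffer is too small" failure can be checked up front when the output dimensions are known. A hypothetical pre-flight check, not part of nnapi_sys, for a tightly packed output:

```rust
/// Hypothetical pre-flight check mirroring the documented error case:
/// setOutput fails when the buffer is smaller than the tightly packed
/// output described by `dims` and `elem_size`.
fn buffer_is_sufficient(dims: &[u32], elem_size: usize, buf_len: usize) -> bool {
    let needed: usize = dims.iter().map(|&d| d as usize).product::<usize>() * elem_size;
    buf_len >= needed
}

fn main() {
    // 10 float32 values need 40 bytes.
    assert!(buffer_is_sufficient(&[10], 4, 40));
    assert!(!buffer_is_sufficient(&[10], 4, 39));
    println!("ok");
}
```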
Function nnapi_sys::ANeuralNetworksExecution_setOutputFromMemory
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_setOutputFromMemory(
    execution: *mut ANeuralNetworksExecution,
    index: i32,
    type_: *const ANeuralNetworksOperandType,
    memory: *const ANeuralNetworksMemory,
    offset: usize,
    length: usize
) -> c_int
```
Associate a region of a memory object with an output of the model of the
{@link ANeuralNetworksExecution}. Evaluation of the execution must not have been scheduled. Once evaluation of the execution has been scheduled, the application must not change the content of the region until the execution has completed.
If the output is optional, you can indicate that it is omitted by using {@link ANeuralNetworksExecution_setOutput} instead, passing nullptr for buffer and 0 for length.
The provided memory must outlive the execution.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
See {@link ANeuralNetworksMemory_createFromAHardwareBuffer} for information on AHardwareBuffer usage.
See {@link ANeuralNetworksMemory_createFromDesc} for information on usage of memory objects created from memory descriptors.
Available since API level 27.
@param execution The execution to be modified.
@param index The index of the output argument we are setting. It is an index into the lists passed to
{@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not the index associated with {@link ANeuralNetworksModel_addOperand}.
@param type The {@link ANeuralNetworksOperandType} of the operand. This should be used to specify the dimensions that were left unspecified when the operand was added to the model. All other properties of the type must be the same as specified in the model. If the type is the same as specified when the model was built, NULL can be passed. Neither the {@link ANeuralNetworksOperandType}
nor the dimensions it points to need to outlive the call to {@link ANeuralNetworksExecution_setOutputFromMemory}.
Since API level 29, the output operand can have unspecified dimensions or rank to be deduced dynamically during the execution.
However, the user must provide a large enough memory. The user can retrieve the output dimensional information after the execution by {@link ANeuralNetworksExecution_getOutputOperandRank} and
{@link ANeuralNetworksExecution_getOutputOperandDimensions}.
@param memory The memory where the data is to be stored.
@param offset This specifies the location of the data within the memory.
The offset is in bytes from the start of memory.
@param length The length in bytes of the data value.
@return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the name is not recognized or the buffer is too small for the output.
Function nnapi_sys::ANeuralNetworksExecution_setTimeout
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_setTimeout(
    execution: *mut ANeuralNetworksExecution,
    duration: u64
) -> c_int
```
Set the maximum expected duration of the specified execution.
If the device is not able to complete the execution within the specified duration, the execution may be aborted. The timeout duration begins at a call to one of:
* {@link ANeuralNetworksExecution_burstCompute}
* {@link ANeuralNetworksExecution_compute}
* {@link ANeuralNetworksExecution_startCompute}
* {@link ANeuralNetworksExecution_startComputeWithDependencies}
This timeout duration acts as a hint to drivers, and can be used to both free up compute resources within the driver and return control back to the application quicker than is possible without the hint. It enables drivers that are able to estimate how long an execution will take to abort the execution before it has even started if the driver believes the execution cannot be completed within the timeout duration. Similarly, it enables drivers to abort an ongoing execution if it is taking too long. However, this call does not guarantee that the execution will complete or abort within the timeout duration.
By default (i.e., unless ANeuralNetworksExecution_setTimeout is called),
the timeout duration for execution is considered infinite.
The {@link ANeuralNetworksExecution} must have been created from an
{@link ANeuralNetworksCompilation} which in turn was created from
{@link ANeuralNetworksCompilation_createForDevices} with numDevices = 1,
otherwise this function will fail with ANEURALNETWORKS_BAD_DATA. If the device has a feature level reported by
{@link ANeuralNetworksDevice_getFeatureLevel} that is lower than 30, then the timeout duration hint will be ignored.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
@param execution The execution to be modified.
@param duration The maximum amount of time in nanoseconds that is expected to be spent executing a model. If this duration is exceeded, the execution may be aborted. If set to 0, the timeout duration is considered infinite.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 30.
Function nnapi_sys::ANeuralNetworksExecution_startCompute
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_startCompute(
    execution: *mut ANeuralNetworksExecution,
    event: *mut *mut ANeuralNetworksEvent
) -> c_int
```
Schedule asynchronous evaluation of the execution.
Schedules asynchronous evaluation of the execution. Once the execution has completed and the outputs are ready to be consumed, the returned event will be signaled. Use {@link ANeuralNetworksEvent_wait} to wait for that event.
ANeuralNetworksEvent_wait must be called to recuperate the resources used by the execution.
If {@link ANeuralNetworksExecution_setTimeout} was called on this execution,
and the execution is not able to complete before the timeout duration is exceeded, then execution may be aborted, in which case
{@link ANEURALNETWORKS_MISSED_DEADLINE_*} will be returned through
{@link ANeuralNetworksExecution_startCompute} or
{@link ANeuralNetworksEvent_wait} on the event object. If the device has a feature level reported by {@link ANeuralNetworksDevice_getFeatureLevel} that is lower than 30, then the timeout duration hint will be ignored.
If this execution contains a {@link ANEURALNETWORKS_WHILE} operation, and the condition model does not output false within the loop timeout duration,
then execution will be aborted and {@link ANEURALNETWORKS_MISSED_DEADLINE_*}
will be returned through {@link ANeuralNetworksEvent_wait} on the event object.
If the device can detect before the execution has started that the execution will not complete within the timeout duration, the device may choose to skip the execution and instead return {@link ANEURALNETWORKS_MISSED_DEADLINE_*}.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
See {@link ANeuralNetworksExecution_compute} for synchronous execution.
See {@link ANeuralNetworksExecution_burstCompute} for burst synchronous execution.
See {@link ANeuralNetworksExecution_startComputeWithDependencies} for asynchronous execution with dependencies.
Available since API level 27.
@param execution The execution to be scheduled and executed.
@param event The event that will be signaled on completion. event is set to NULL if there’s an error.
@return ANEURALNETWORKS_NO_ERROR if the evaluation is successfully scheduled.
Function nnapi_sys::ANeuralNetworksExecution_startComputeWithDependencies
===
```
pub unsafe extern "C" fn ANeuralNetworksExecution_startComputeWithDependencies(
    execution: *mut ANeuralNetworksExecution,
    dependencies: *const *const ANeuralNetworksEvent,
    num_dependencies: u32,
    duration: u64,
    event: *mut *mut ANeuralNetworksEvent
) -> c_int
```
Schedule asynchronous evaluation of the execution with dependencies.
The execution will wait for all the depending events to be signaled before starting the evaluation. Once the execution has completed and the outputs are ready to be consumed, the returned event will be signaled. Depending on which devices are handling the execution, the event could be backed by a sync fence.
Use {@link ANeuralNetworksEvent_wait} to wait for that event.
ANeuralNetworksEvent_wait must be called to recuperate the resources used by the execution.
If parts of the execution are scheduled on devices that do not support fenced execution,
the function call may wait for such parts to finish before returning.
The function will return an error if any of the events in dependencies is already in a bad state. After the execution is scheduled, if any of the events in dependencies does not complete normally, the execution will fail, and {@link ANeuralNetworksEvent_wait} on the returned event will return an error.
The function will return an error if any of the execution outputs has a tensor operand type that is not fully specified.
The function can be passed a timeout duration in nanoseconds. This timeout duration acts as a hint to drivers in the same way that the timeout durations in {@link ANeuralNetworksCompilation_setTimeout} and {@link ANeuralNetworksExecution_setTimeout} act as hints to drivers. The duration begins when all waitFor sync fences have been signaled, and can be used together with {@link ANeuralNetworksExecution_setTimeout} which specifies the maximum timeout duration beginning at the call to
{@link ANeuralNetworksExecution_startComputeWithDependencies}.
If the duration is non-zero, the {@link ANeuralNetworksExecution} must have been created from an {@link ANeuralNetworksCompilation} which in turn was created from
{@link ANeuralNetworksCompilation_createForDevices} with numDevices = 1,
otherwise this function will fail with ANEURALNETWORKS_BAD_DATA. If either the timeout duration from {@link ANeuralNetworksExecution_setTimeout} or the timeout duration passed to this call is exceeded, the execution may be aborted, in which case {@link ANEURALNETWORKS_MISSED_DEADLINE_*} will be returned through {@link ANeuralNetworksExecution_startComputeWithDependencies}
or {@link ANeuralNetworksEvent_wait} on the event object. If the device has a feature level reported by {@link ANeuralNetworksDevice_getFeatureLevel} that is lower than 30, then the timeout duration hints will be ignored.
If this execution contains a {@link ANEURALNETWORKS_WHILE} operation, and the condition model does not output false within the loop timeout duration,
then execution will be aborted and {@link ANEURALNETWORKS_MISSED_DEADLINE_*}
will be returned through {@link ANeuralNetworksEvent_wait} on the event object.
See {@link ANeuralNetworksExecution} for information on multithreaded usage.
See {@link ANeuralNetworksExecution_compute} for synchronous execution.
See {@link ANeuralNetworksExecution_burstCompute} for burst synchronous execution.
See {@link ANeuralNetworksExecution_startCompute} for regular asynchronous execution.
@param execution The execution to be scheduled and executed.
@param dependencies A set of depending events. The actual evaluation will not start until all the events are signaled.
@param num_dependencies The number of events in the dependencies set.
@param duration The maximum amount of time in nanoseconds that is expected to be spent executing the model after all dependencies are signaled. If set to 0, the timeout duration is considered infinite.
@param event The event that will be signaled on completion. event is set to NULL if there’s an error.
@return ANEURALNETWORKS_NO_ERROR if the evaluation is successfully scheduled.
Available since API level 30.
Function nnapi_sys::ANeuralNetworksMemoryDesc_addInputRole
===
```
pub unsafe extern "C" fn ANeuralNetworksMemoryDesc_addInputRole(
    desc: *mut ANeuralNetworksMemoryDesc,
    compilation: *const ANeuralNetworksCompilation,
    index: u32,
    frequency: f32
) -> c_int
```
Specify that a memory object will be playing the role of an input to an execution created from a particular compilation.
The compilation and the input index fully specify an input operand. This function may be invoked multiple times on the same memory descriptor with different input operands,
and the same input operand may be specified on multiple memory descriptors. However,
specifying the same input operand on the same memory descriptor more than once will return an error.
The dimensions of the corresponding model operands of all the roles specified by
{@link ANeuralNetworksMemoryDesc_addInputRole} and
{@link ANeuralNetworksMemoryDesc_addOutputRole} must be compatible with each other. Two dimensions are incompatible if both ranks are fully specified but have different values, or if there is at least one axis that is fully specified in both but has different values.
At least one of {@link ANeuralNetworksMemoryDesc_addInputRole} and
{@link ANeuralNetworksMemoryDesc_addOutputRole} must be called on a memory descriptor before invoking {@link ANeuralNetworksMemoryDesc_finish}.
Attempting to modify a memory descriptor once {@link ANeuralNetworksMemoryDesc_finish} has been called will return an error.
See {@link ANeuralNetworksMemoryDesc} for information on multithreaded usage.
Available since API level 30.
@param desc The memory descriptor to be modified.
@param compilation The compilation object. It must already have been finished by calling
{@link ANeuralNetworksCompilation_finish}, and must outlive the memory descriptor.
@param index The index of the input argument we are referencing from the compilation. It is an index into the inputs list passed to
{@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not the index associated with {@link ANeuralNetworksModel_addOperand}.
@param frequency A floating-point value within the range (0.0, 1.0]. Describes how likely the memory is to be used in the specified role. This is provided as a hint to optimize the case when different roles prefer different memory locations or data layouts.
@return ANEURALNETWORKS_NO_ERROR if successful.
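The `frequency` constraint, a value in the half-open range (0.0, 1.0], can be validated before the FFI call. A minimal sketch with a hypothetical helper not found in nnapi_sys:

```rust
/// Sketch of the documented constraint on `frequency` for
/// ANeuralNetworksMemoryDesc_addInputRole / addOutputRole:
/// the value must lie in the half-open range (0.0, 1.0].
fn frequency_is_valid(f: f32) -> bool {
    f > 0.0 && f <= 1.0
}

fn main() {
    assert!(frequency_is_valid(1.0));
    assert!(frequency_is_valid(0.5));
    assert!(!frequency_is_valid(0.0));
    assert!(!frequency_is_valid(1.5));
    println!("ok");
}
```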
Function nnapi_sys::ANeuralNetworksMemoryDesc_addOutputRole
===
```
pub unsafe extern "C" fn ANeuralNetworksMemoryDesc_addOutputRole(
    desc: *mut ANeuralNetworksMemoryDesc,
    compilation: *const ANeuralNetworksCompilation,
    index: u32,
    frequency: f32
) -> c_int
```
Specify that a memory object will be playing the role of an output to an execution created from a particular compilation.
The compilation and the output index fully specify an output operand. This function may be invoked multiple times on the same memory descriptor with different output operands,
and the same output operand may be specified on multiple memory descriptors. However,
specifying the same output operand on the same memory descriptor object more than once will return an error.
The dimensions of the corresponding model operands of all the roles specified by
{@link ANeuralNetworksMemoryDesc_addInputRole} and
{@link ANeuralNetworksMemoryDesc_addOutputRole} must be compatible with each other. Two dimensions are incompatible if both ranks are fully specified but have different values, or if there is at least one axis that is fully specified in both but has different values.
At least one of {@link ANeuralNetworksMemoryDesc_addInputRole} and
{@link ANeuralNetworksMemoryDesc_addOutputRole} must be called on the memory descriptor before invoking {@link ANeuralNetworksMemoryDesc_finish}.
Attempting to modify a memory descriptor once {@link ANeuralNetworksMemoryDesc_finish} has been called will return an error.
See {@link ANeuralNetworksMemoryDesc} for information on multithreaded usage.
Available since API level 30.
@param desc The memory descriptor to be modified.
@param compilation The compilation object. It must already have been finished by calling
{@link ANeuralNetworksCompilation_finish}, and must outlive the memory descriptor.
@param index The index of the output argument we are referencing from the compilation. It is an index into the outputs list passed to
{@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not the index associated with {@link ANeuralNetworksModel_addOperand}.
@param frequency A floating-point value within the range (0.0, 1.0]. Describes how likely the memory is to be used in the specified role. This is provided as a hint to optimize the case when multiple roles prefer different memory locations or data layouts.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksMemoryDesc_create
===
```
pub unsafe extern "C" fn ANeuralNetworksMemoryDesc_create(
    desc: *mut *mut ANeuralNetworksMemoryDesc
) -> c_int
```
Create a {@link ANeuralNetworksMemoryDesc} with no properties.
This only creates the memory descriptor. Its properties should be set with calls to
{@link ANeuralNetworksMemoryDesc_addInputRole},
{@link ANeuralNetworksMemoryDesc_addOutputRole}, and
{@link ANeuralNetworksMemoryDesc_setDimensions}.
{@link ANeuralNetworksMemoryDesc_finish} must be called once all properties have been set.
{@link ANeuralNetworksMemoryDesc_free} must be called once the memory descriptor is no longer needed.
Available since API level 30.
@param desc The {@link ANeuralNetworksMemoryDesc} to be created.
Set to NULL if unsuccessful.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksMemoryDesc_finish
===
```
pub unsafe extern "C" fn ANeuralNetworksMemoryDesc_finish(
    desc: *mut ANeuralNetworksMemoryDesc
) -> c_int
```
Indicate that we have finished modifying a memory descriptor. Required before calling
{@link ANeuralNetworksMemory_createFromDesc}.
This function must only be called once for a given memory descriptor.
See {@link ANeuralNetworksMemoryDesc} for information on multithreaded usage.
Available since API level 30.
@param desc The memory descriptor to be finished.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksMemoryDesc_free
===
```
pub unsafe extern "C" fn ANeuralNetworksMemoryDesc_free(
    desc: *mut ANeuralNetworksMemoryDesc
)
```
Destroy a memory descriptor.
The memory descriptor need not have been finished by a call to
{@link ANeuralNetworksMemoryDesc_finish}.
See {@link ANeuralNetworksMemoryDesc} for information on multithreaded usage.
Available since API level 30.
@param desc The memory descriptor to be destroyed. Passing NULL is acceptable and results in no operation.
Function nnapi_sys::ANeuralNetworksMemoryDesc_setDimensions
===
```
pub unsafe extern "C" fn ANeuralNetworksMemoryDesc_setDimensions(
    desc: *mut ANeuralNetworksMemoryDesc,
    rank: u32,
    dimensions: *const u32
) -> c_int
```
Set the dimensional information of the memory descriptor.
The specified dimensions must be compatible with the dimensions of the corresponding model operands of all the roles specified by {@link ANeuralNetworksMemoryDesc_addInputRole} and
{@link ANeuralNetworksMemoryDesc_addOutputRole}. Two dimensions are incompatible if both ranks are fully specified but have different values, or if there is at least one axis that is fully specified in both but has different values.
Attempting to modify a memory descriptor once {@link ANeuralNetworksMemoryDesc_finish} has been called will return an error.
See {@link ANeuralNetworksMemoryDesc} for information on multithreaded usage.
Available since API level 30.
@param desc The memory descriptor to be modified.
@param rank The number of dimensions. Must be 0 for scalars.
@param dimensions An array of dimensions. An entry with the value 0 indicates that the corresponding axis has an unknown size.
@return ANEURALNETWORKS_NO_ERROR if successful.
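The compatibility rule quoted above can be sketched as a pure function. In this hypothetical helper (not part of nnapi_sys), an empty slice stands for an unspecified rank and a 0 entry for an unspecified axis size, matching the descriptor API's convention:

```rust
/// Sketch of the dimension-compatibility rule: two dimension lists are
/// incompatible if both ranks are fully specified but differ, or if some
/// axis is specified (non-zero) in both but has different values.
fn dims_compatible(a: &[u32], b: &[u32]) -> bool {
    if a.is_empty() || b.is_empty() {
        return true; // at least one rank is unspecified
    }
    if a.len() != b.len() {
        return false; // both ranks specified but different
    }
    a.iter().zip(b).all(|(&x, &y)| x == 0 || y == 0 || x == y)
}

fn main() {
    assert!(dims_compatible(&[2, 3], &[2, 3]));
    assert!(dims_compatible(&[2, 0], &[2, 5])); // unspecified axis matches anything
    assert!(dims_compatible(&[], &[1, 2]));    // unspecified rank matches anything
    assert!(!dims_compatible(&[2, 3], &[2, 4]));
    assert!(!dims_compatible(&[1], &[1, 2]));
    println!("ok");
}
```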
Function nnapi_sys::ANeuralNetworksMemory_copy
===
```
pub unsafe extern "C" fn ANeuralNetworksMemory_copy(
    src: *const ANeuralNetworksMemory,
    dst: *const ANeuralNetworksMemory
) -> c_int
```
Copies data from one memory object to another.
If at most one of the src and dst is created from {@link ANeuralNetworksMemory_createFromDesc},
the src and dst must have the same logical size:
* If the memory is created from {@link ANeuralNetworksMemory_createFromFd}, or if it is created from {@link ANeuralNetworksMemory_createFromAHardwareBuffer} with format of AHARDWAREBUFFER_FORMAT_BLOB, the logical size equals the size of the memory.
* If the memory is created from {@link ANeuralNetworksMemory_createFromAHardwareBuffer} with a format other than AHARDWAREBUFFER_FORMAT_BLOB, the logical size equals the size when there is no padding and the data is tightly packed. This function may fail if the AHardwareBuffer cannot be accessed.
* If the memory is created from {@link ANeuralNetworksMemory_createFromDesc}, the logical size equals the size indicated by the {@link OperandCode} multiplied by the number of elements. This function will fail if the number of elements is unknown.
If both src and dst are created from {@link ANeuralNetworksMemory_createFromDesc}, they must have compatible dimensions. Two dimensions are incompatible if both ranks are fully specified but have different values, or if there is at least one axis that is fully specified in both but has different values. The dst may have unspecified dimensions or rank. In such a case, the dimensions of dst will get updated according to the dimensions of the src.
In both cases, if the src is created from {@link ANeuralNetworksMemory_createFromDesc}, it must have been used as an output in a successful execution, or used as the destination memory in a successful {@link ANeuralNetworksMemory_copy}.
The src and dst may have different data layout, in which case the data copying is performed logically with data layout transformation.
Available since API level 30.
@param src The source memory object.
@param dst The destination memory object.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksMemory_createFromAHardwareBuffer
===
```
pub unsafe extern "C" fn ANeuralNetworksMemory_createFromAHardwareBuffer(
ahwb: *const AHardwareBuffer,
memory: *mut *mut ANeuralNetworksMemory
) -> c_int
```
Creates a shared memory object from an AHardwareBuffer handle.
If the shared memory is backed by an AHardwareBuffer of AHARDWAREBUFFER_FORMAT_BLOB format, it can be used the same way as shared memory created from a file handle. See
{@link ANeuralNetworksMemory} for a description on how to use this shared memory.
If the shared memory is backed by an AHardwareBuffer of a format other than AHARDWAREBUFFER_FORMAT_BLOB, it can only be used for Model inputs and outputs.
When calling {@link ANeuralNetworksExecution_setInputFromMemory} or
{@link ANeuralNetworksExecution_setOutputFromMemory} with the shared memory, both offset and length must be set to zero and the entire memory region will be associated with the specified input or output operand. There is no guarantee that an arbitrary AHardwareBuffer_Format and AHardwareBuffer_UsageFlags combination can be used by arbitrary devices. The execution will fail if the selected set of devices cannot consume the buffer.
Calling {@link ANeuralNetworksModel_setOperandValueFromMemory} with shared memory backed by an AHardwareBuffer of a format other than AHARDWAREBUFFER_FORMAT_BLOB is disallowed.
The provided AHardwareBuffer must outlive the ANeuralNetworksMemory object.
Available since API level 29.
@param ahwb The AHardwareBuffer handle.
@param memory The memory object to be created.
Set to NULL if unsuccessful.
@return ANEURALNETWORKS_NO_ERROR if the request completed normally.
@see AHardwareBuffer
Function nnapi_sys::ANeuralNetworksMemory_createFromDesc
===
```
pub unsafe extern "C" fn ANeuralNetworksMemory_createFromDesc(
desc: *const ANeuralNetworksMemoryDesc,
memory: *mut *mut ANeuralNetworksMemory
) -> c_int
```
Creates a memory object from a memory descriptor.
The memory object is created with an uninitialized buffer. A memory object with an uninitialized buffer may only be used according to the roles specified by {@link ANeuralNetworksMemoryDesc_addOutputRole}, or as the destination memory in {@link ANeuralNetworksMemory_copy}. The buffer of a memory object is initialized after the memory object is used as an output in a successful execution, or used as the destination memory in a successful
{@link ANeuralNetworksMemory_copy}. A memory object with an initialized buffer may be used according to all roles specified in {@link ANeuralNetworksMemoryDesc}, or as the source or destination memory in {@link ANeuralNetworksMemory_copy}. The buffer of a memory object will return to the uninitialized state if the memory object is used as an output in a failed execution, or used as the destination memory in a failed {@link ANeuralNetworksMemory_copy}.
The dimensions of the memory descriptor are deduced from the dimensions of the corresponding model operands of all the roles specified by {@link ANeuralNetworksMemoryDesc_addInputRole} and
{@link ANeuralNetworksMemoryDesc_addOutputRole}, as well as the dimensions set by the call to
{@link ANeuralNetworksMemoryDesc_setDimensions}, if any. The memory descriptor may have unspecified dimensions or rank. In such a case, the same memory object may be used with different shapes of outputs in different executions. When the memory is used as an input, the input shape must be the same as the output shape from the last execution using this memory object as an output, or the last {@link ANeuralNetworksMemory_copy} using this memory object as the destination memory. Creating a memory object with unspecified dimensions or rank may fail for certain sets of roles.
Using the memory in roles or shapes that are not compatible with the rules specified above will return an error.
When calling {@link ANeuralNetworksExecution_setInputFromMemory} or
{@link ANeuralNetworksExecution_setOutputFromMemory} with the memory object,
both offset and length must be set to zero and the entire memory region will be associated with the specified input or output operand.
Calling {@link ANeuralNetworksModel_setOperandValueFromMemory} with the memory created from this function will return an error.
{@link ANeuralNetworksMemory_free} must be called once the memory is no longer needed.
Attempting to create memory from an unfinished memory descriptor will return an error.
The provided {@link ANeuralNetworksMemoryDesc} need not outlive the {@link ANeuralNetworksMemory}
object.
Available since API level 30.
@param desc The memory descriptor.
@param memory The memory object to be created.
Set to NULL if unsuccessful.
@return ANEURALNETWORKS_NO_ERROR if successful; ANEURALNETWORKS_OP_FAILED if the memory is created with unspecified dimensions or rank and it is not supported for this set of roles.
Function nnapi_sys::ANeuralNetworksMemory_createFromFd
===
```
pub unsafe extern "C" fn ANeuralNetworksMemory_createFromFd(
size: usize,
protect: c_int,
fd: c_int,
offset: usize,
memory: *mut *mut ANeuralNetworksMemory
) -> c_int
```
Creates a shared memory object from a file descriptor.
The shared memory is backed by a file descriptor via mmap.
See {@link ANeuralNetworksMemory} for a description on how to use this shared memory.
Available since API level 27.
@param size The requested size in bytes.
Must not be larger than the file size.
@param protect The desired memory protection for the mapping.
It is either PROT_NONE or the bitwise OR of one or more of the following flags: PROT_READ, PROT_WRITE.
@param fd The requested file descriptor.
The file descriptor has to be mmap-able. The file descriptor will be duplicated.
@param offset The offset to the beginning of the file of the area to map.
The offset has to be aligned to a page size.
@param memory The memory object to be created.
Set to NULL if unsuccessful.
@return ANEURALNETWORKS_NO_ERROR if the request completed normally.
Function nnapi_sys::ANeuralNetworksMemory_free
===
```
pub unsafe extern "C" fn ANeuralNetworksMemory_free(
memory: *mut ANeuralNetworksMemory
)
```
Delete a memory object.
Destroys the object used by the run time to keep track of the memory.
This will free the underlying actual memory if no other code has open handles to this memory.
Available since API level 27.
@param memory The memory object to be freed. Passing NULL is acceptable and results in no operation.
Function nnapi_sys::ANeuralNetworksModel_addOperand
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_addOperand(
model: *mut ANeuralNetworksModel,
type_: *const ANeuralNetworksOperandType
) -> c_int
```
Add an operand to a model.
The order in which the operands are added is important. The first one added to a model will have the index value 0, the second 1, etc. These indexes are used as operand identifiers in
{@link ANeuralNetworksModel_addOperation},
{@link ANeuralNetworksModel_identifyInputsAndOutputs},
{@link ANeuralNetworksModel_setOperandValue},
{@link ANeuralNetworksModel_setOperandValueFromMemory},
{@link ANeuralNetworksExecution_setInput},
{@link ANeuralNetworksExecution_setInputFromMemory},
{@link ANeuralNetworksExecution_setOutput} and
{@link ANeuralNetworksExecution_setOutputFromMemory}.
Every operand must be referenced in exactly one of the following ways:
* It is identified as a model input with
{@link ANeuralNetworksModel_identifyInputsAndOutputs}.
* It is identified as a constant with
{@link ANeuralNetworksModel_setOperandValue} or
{@link ANeuralNetworksModel_setOperandValueFromMemory}.
* It is identified as an output of exactly one operation with
{@link ANeuralNetworksModel_addOperation}.
An operand that is identified as a model input or as a constant must not also be identified as a model output with
{@link ANeuralNetworksModel_identifyInputsAndOutputs}.
To build a model that can accommodate inputs of various sizes, as you may want to do for a CNN, leave unspecified the dimensions that will vary at run time. If you do so, fully specify dimensions when calling {@link ANeuralNetworksExecution_setInput} or
{@link ANeuralNetworksExecution_setInputFromMemory}.
Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been called will return an error.
See {@link ANeuralNetworksModel} for information on multithreaded usage.
Available since API level 27.
@param model The model to be modified.
@param type The {@link ANeuralNetworksOperandType} that describes the shape of the operand. Neither the {@link ANeuralNetworksOperandType}
nor the dimensions it points to need to outlive the call to
{@link ANeuralNetworksModel_addOperand}.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksModel_addOperation
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_addOperation(
model: *mut ANeuralNetworksModel,
type_: ANeuralNetworksOperationType,
inputCount: u32,
inputs: *const u32,
outputCount: u32,
outputs: *const u32
) -> c_int
```
Add an operation to a model.
@param model The model to be modified.
@param type The {@link ANeuralNetworksOperationType} of the operation.
@param inputCount The number of entries in the inputs array.
@param inputs An array of indexes identifying each operand.
@param outputCount The number of entries in the outputs array.
@param outputs An array of indexes identifying each operand.
The operands specified by inputs and outputs must have been previously added by calls to {@link ANeuralNetworksModel_addOperand}.
Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been called will return an error.
See {@link ANeuralNetworksModel} for information on multithreaded usage.
Available since API level 27.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksModel_create
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_create(
model: *mut *mut ANeuralNetworksModel
) -> c_int
```
Create an empty {@link ANeuralNetworksModel}.
This only creates the object. Computation is performed once
{@link ANeuralNetworksExecution_burstCompute},
{@link ANeuralNetworksExecution_compute},
{@link ANeuralNetworksExecution_startCompute} or
{@link ANeuralNetworksExecution_startComputeWithDependencies} is invoked.
The model should be constructed with calls to
{@link ANeuralNetworksModel_addOperation} and
{@link ANeuralNetworksModel_addOperand}.
{@link ANeuralNetworksModel_finish} should be called once the model has been fully constructed.
{@link ANeuralNetworksModel_free} should be called once the model is no longer needed.
Available since API level 27.
@param model The {@link ANeuralNetworksModel} to be created.
Set to NULL if unsuccessful.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksModel_finish
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_finish(
model: *mut ANeuralNetworksModel
) -> c_int
```
Indicate that we have finished modifying a model. Required before calling {@link ANeuralNetworksCompilation_create} and
{@link ANeuralNetworksCompilation_createForDevices}.
An application must ensure that no other thread uses the model at the same time.
This function must only be called once for a given model.
See {@link ANeuralNetworksModel} for information on multithreaded usage.
Available since API level 27.
@param model The model to be finished.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksModel_free
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_free(
model: *mut ANeuralNetworksModel
)
```
Destroy a model.
The model need not have been finished by a call to
{@link ANeuralNetworksModel_finish}.
See {@link ANeuralNetworksModel} for information on multithreaded usage.
Available since API level 27.
@param model The model to be destroyed. Passing NULL is acceptable and results in no operation.
Function nnapi_sys::ANeuralNetworksModel_getSupportedOperationsForDevices
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_getSupportedOperationsForDevices(
model: *const ANeuralNetworksModel,
devices: *const *const ANeuralNetworksDevice,
numDevices: u32,
supportedOps: *mut bool
) -> c_int
```
Get the supported operations for a specified set of devices. If multiple devices are selected, the supported operation list is a union of supported operations of all selected devices.
@param model The model to be queried.
@param devices The set of devices. Must not contain duplicates.
@param numDevices The number of devices in the set.
@param supportedOps The boolean array to be filled. True means supported. The size of the boolean array must be at least as large as the number of operations in the model. The order of elements in the supportedOps array matches the order in which the corresponding operations were added to the model.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 29.
Function nnapi_sys::ANeuralNetworksModel_identifyInputsAndOutputs
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_identifyInputsAndOutputs(
model: *mut ANeuralNetworksModel,
inputCount: u32,
inputs: *const u32,
outputCount: u32,
outputs: *const u32
) -> c_int
```
Specifies which operands will be the model’s inputs and outputs. Every model must have at least one input and one output.
An operand cannot be used for both input and output. Doing so will return an error.
@param model The model to be modified.
@param inputCount The number of entries in the inputs array.
@param inputs An array of indexes identifying the input operands.
@param outputCount The number of entries in the outputs array.
@param outputs An array of indexes identifying the output operands.
The operands specified by inputs and outputs must have been previously added by calls to {@link ANeuralNetworksModel_addOperand}.
Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been called will return an error.
See {@link ANeuralNetworksModel} for information on multithreaded usage.
Available since API level 27.
Function nnapi_sys::ANeuralNetworksModel_relaxComputationFloat32toFloat16
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_relaxComputationFloat32toFloat16(
model: *mut ANeuralNetworksModel,
allow: bool
) -> c_int
```
Specifies whether {@link ANEURALNETWORKS_TENSOR_FLOAT32} is allowed to be calculated with range and/or precision as low as that of the IEEE 754 16-bit floating-point format. By default, {@link ANEURALNETWORKS_TENSOR_FLOAT32}
must be calculated using at least the range and precision of the IEEE 754 32-bit floating-point format.
The relaxComputationFloat32toFloat16 setting of the main model of a compilation overrides the values of the referenced models.
@param model The model to be modified.
@param allow ‘true’ indicates {@link ANEURALNETWORKS_TENSOR_FLOAT32} may be calculated with range and/or precision as low as that of the IEEE 754 16-bit floating point format. ‘false’ indicates
{@link ANEURALNETWORKS_TENSOR_FLOAT32} must be calculated using at least the range and precision of the IEEE 754 32-bit floating point format.
Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been called will return an error.
Available since API level 28.
See {@link ANeuralNetworksModel} for information on multithreaded usage.
Function nnapi_sys::ANeuralNetworksModel_setOperandSymmPerChannelQuantParams
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_setOperandSymmPerChannelQuantParams(
model: *mut ANeuralNetworksModel,
index: i32,
channelQuant: *const ANeuralNetworksSymmPerChannelQuantParams
) -> c_int
```
Sets an operand’s per channel quantization parameters.
Sets parameters required by a tensor of type
{@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}.
This function must be called for every tensor of type
{@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} before calling {@link ANeuralNetworksModel_finish}.
Available since API level 29.
@param model The model to be modified.
@param index The index of the model operand we’re setting.
@param channelQuant The per channel quantization parameters for the operand.
No memory in this struct needs to outlive the call to this function.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksModel_setOperandValue
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_setOperandValue(
model: *mut ANeuralNetworksModel,
index: i32,
buffer: *const c_void,
length: usize
) -> c_int
```
Sets an operand to a constant value.
Values of length smaller or equal to
{@link ANEURALNETWORKS_MAX_SIZE_OF_IMMEDIATELY_COPIED_VALUES}
are immediately copied into the model.
For values of length greater than
{@link ANEURALNETWORKS_MAX_SIZE_OF_IMMEDIATELY_COPIED_VALUES}, a pointer to the buffer is stored within the model. The application must not change the content of this region until all executions using this model have completed. As the data may be copied during processing, modifying the data after this call yields undefined results. The provided buffer must outlive this model.
For large tensors, using {@link ANeuralNetworksModel_setOperandValueFromMemory}
is likely to be more efficient.
To indicate that an optional operand should be considered missing,
pass nullptr for buffer and 0 for length.
Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been called will return an error.
See {@link ANeuralNetworksModel} for information on multithreaded usage.
Available since API level 27.
@param model The model to be modified.
@param index The index of the model operand we’re setting.
@param buffer A pointer to the data to use.
@param length The size in bytes of the data value.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksModel_setOperandValueFromMemory
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_setOperandValueFromMemory(
model: *mut ANeuralNetworksModel,
index: i32,
memory: *const ANeuralNetworksMemory,
offset: usize,
length: usize
) -> c_int
```
Sets an operand to a value stored in a memory object.
The content of the memory is not copied. A reference to that memory is stored inside the model. The application must not change the content of the memory region until all executions using this model have completed. As the data may be copied during processing, modifying the data after this call yields undefined results.
The provided memory must outlive this model.
To indicate that an optional operand should be considered missing,
use {@link ANeuralNetworksModel_setOperandValue} instead, passing nullptr for buffer.
It is disallowed to set an operand value with shared memory backed by an AHardwareBuffer of a format other than AHARDWAREBUFFER_FORMAT_BLOB.
It is disallowed to set an operand value with memory created from
{@link ANeuralNetworksMemory_createFromDesc}.
Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been called will return an error.
See {@link ANeuralNetworksModel} for information on multithreaded usage.
See {@link ANeuralNetworksMemory_createFromAHardwareBuffer} for information on AHardwareBuffer usage.
Available since API level 27.
@param model The model to be modified.
@param index The index of the model operand we’re setting.
@param memory The memory containing the data.
@param offset This specifies the location of the data within the memory.
The offset is in bytes from the start of memory.
@param length The size in bytes of the data value.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworksModel_setOperandValueFromModel
===
```
pub unsafe extern "C" fn ANeuralNetworksModel_setOperandValueFromModel(
model: *mut ANeuralNetworksModel,
index: i32,
value: *const ANeuralNetworksModel
) -> c_int
```
Sets an operand to a value that is a reference to another NNAPI model.
The referenced model must already have been finished by a call to
{@link ANeuralNetworksModel_finish}.
The {@link ANeuralNetworksModel_relaxComputationFloat32toFloat16} setting of referenced models is overridden by that setting of the main model of a compilation.
The referenced model must outlive the model referring to it.
Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been called will return an error.
See {@link ANeuralNetworksModel} for information on multithreaded usage.
Available since API level 30.
@param model The model to be modified.
@param index The index of the model operand we’re setting.
@param value The model to be referenced.
@return ANEURALNETWORKS_NO_ERROR if successful.
Function nnapi_sys::ANeuralNetworks_getDefaultLoopTimeout
===
```
pub unsafe extern "C" fn ANeuralNetworks_getDefaultLoopTimeout(
) -> u64
```
Get the default timeout value for WHILE loops.
@return The default timeout value in nanoseconds.
Available since API level 30.
Function nnapi_sys::ANeuralNetworks_getDevice
===
```
pub unsafe extern "C" fn ANeuralNetworks_getDevice(
devIndex: u32,
device: *mut *mut ANeuralNetworksDevice
) -> c_int
```
Get the representation of the specified device.
@param devIndex The index of the specified device. Must be less than the number of available devices.
@param device The representation of the specified device.
The same representation will always be returned for the specified device.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 29.
Function nnapi_sys::ANeuralNetworks_getDeviceCount
===
```
pub unsafe extern "C" fn ANeuralNetworks_getDeviceCount(
numDevices: *mut u32
) -> c_int
```
Get the number of available devices.
@param numDevices Used to return the number of devices.
@return ANEURALNETWORKS_NO_ERROR if successful.
Available since API level 29.
Function nnapi_sys::ANeuralNetworks_getMaximumLoopTimeout
===
```
pub unsafe extern "C" fn ANeuralNetworks_getMaximumLoopTimeout(
) -> u64
```
Get the maximum timeout value for WHILE loops.
@return The maximum timeout value in nanoseconds.
Available since API level 30.
Type Alias nnapi_sys::DeviceTypeCode
===
```
pub type DeviceTypeCode = c_int;
```
Device types.
The type of NNAPI device.
Type Alias nnapi_sys::DurationCode
===
```
pub type DurationCode = c_uint;
```
Different duration measurements.
Durations are measured in nanoseconds.
Available since API level 29.
Type Alias nnapi_sys::FuseCode
===
```
pub type FuseCode = c_uint;
```
Fused activation function types.
Available since API level 27.
Type Alias nnapi_sys::PaddingCode
===
```
pub type PaddingCode = c_int;
```
Implicit padding algorithms.
Available since API level 27.
Type Alias nnapi_sys::PreferenceCode
===
```
pub type PreferenceCode = c_int;
```
Execution preferences.
Available since API level 27.
Type Alias nnapi_sys::PriorityCode
===
```
pub type PriorityCode = c_uint;
```
Relative execution priority.
Available since API level 30.
Type Alias nnapi_sys::__darwin_mbstate_t
===
```
pub type __darwin_mbstate_t = __mbstate_t;
```
Aliased Type
---
```
union __darwin_mbstate_t {
pub __mbstate8: [i8; 128],
pub _mbstateL: i64,
}
```
Fields
---
`__mbstate8: [i8; 128]`
`_mbstateL: i64`
Trait Implementations
---
### impl Clone for __mbstate_t
#### fn clone(&self) -> __mbstate_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Type Alias nnapi_sys::__darwin_pthread_attr_t
===
```
pub type __darwin_pthread_attr_t = _opaque_pthread_attr_t;
```
Aliased Type
---
```
struct __darwin_pthread_attr_t {
pub __sig: i64,
pub __opaque: [i8; 56],
}
```
Fields
---
`__sig: i64`
`__opaque: [i8; 56]`
Trait Implementations
---
### impl Clone for _opaque_pthread_attr_t
#### fn clone(&self) -> _opaque_pthread_attr_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_attr_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Type Alias nnapi_sys::__darwin_pthread_cond_t
===
```
pub type __darwin_pthread_cond_t = _opaque_pthread_cond_t;
```
Aliased Type
---
```
struct __darwin_pthread_cond_t {
pub __sig: i64,
pub __opaque: [i8; 40],
}
```
Fields
---
`__sig: i64`
`__opaque: [i8; 40]`
Trait Implementations
---
### impl Clone for _opaque_pthread_cond_t
#### fn clone(&self) -> _opaque_pthread_cond_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_cond_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Type Alias nnapi_sys::__darwin_pthread_condattr_t
===
```
pub type __darwin_pthread_condattr_t = _opaque_pthread_condattr_t;
```
Aliased Type
---
```
struct __darwin_pthread_condattr_t {
pub __sig: i64,
pub __opaque: [i8; 8],
}
```
Fields
---
`__sig: i64`
`__opaque: [i8; 8]`
Trait Implementations
---
### impl Clone for _opaque_pthread_condattr_t
#### fn clone(&self) -> _opaque_pthread_condattr_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_condattr_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Type Alias nnapi_sys::__darwin_pthread_mutex_t
===
```
pub type __darwin_pthread_mutex_t = _opaque_pthread_mutex_t;
```
Aliased Type
---
```
struct __darwin_pthread_mutex_t {
pub __sig: i64,
pub __opaque: [i8; 56],
}
```
Fields
---
`__sig: i64`
`__opaque: [i8; 56]`
Trait Implementations
---
### impl Clone for _opaque_pthread_mutex_t
#### fn clone(&self) -> _opaque_pthread_mutex_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_mutex_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Type Alias nnapi_sys::__darwin_pthread_mutexattr_t
===
```
pub type __darwin_pthread_mutexattr_t = _opaque_pthread_mutexattr_t;
```
Aliased Type
---
```
struct __darwin_pthread_mutexattr_t {
pub __sig: i64,
pub __opaque: [i8; 8],
}
```
Fields
---
`__sig: i64`
`__opaque: [i8; 8]`
Trait Implementations
---
### impl Clone for _opaque_pthread_mutexattr_t
#### fn clone(&self) -> _opaque_pthread_mutexattr_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_mutexattr_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Type Alias nnapi_sys::__darwin_pthread_once_t
===
```
pub type __darwin_pthread_once_t = _opaque_pthread_once_t;
```
Aliased Type
---
```
struct __darwin_pthread_once_t {
pub __sig: i64,
pub __opaque: [i8; 8],
}
```
Fields
---
`__sig: i64`
`__opaque: [i8; 8]`
Trait Implementations
---
### impl Clone for _opaque_pthread_once_t
#### fn clone(&self) -> _opaque_pthread_once_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_once_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Type Alias nnapi_sys::__darwin_pthread_rwlock_t
===
```
pub type __darwin_pthread_rwlock_t = _opaque_pthread_rwlock_t;
```
Aliased Type
---
```
struct __darwin_pthread_rwlock_t {
pub __sig: i64,
pub __opaque: [i8; 192],
}
```
Fields
---
`__sig: i64`
`__opaque: [i8; 192]`
Trait Implementations
---
### impl Clone for _opaque_pthread_rwlock_t
#### fn clone(&self) -> _opaque_pthread_rwlock_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_rwlock_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Type Alias nnapi_sys::__darwin_pthread_rwlockattr_t
===
```
pub type __darwin_pthread_rwlockattr_t = _opaque_pthread_rwlockattr_t;
```
Aliased Type
---
```
struct __darwin_pthread_rwlockattr_t {
pub __sig: i64,
pub __opaque: [i8; 16],
}
```
Fields
---
`__sig: i64`
`__opaque: [i8; 16]`
Trait Implementations
---
### impl Clone for _opaque_pthread_rwlockattr_t
#### fn clone(&self) -> _opaque_pthread_rwlockattr_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for _opaque_pthread_rwlockattr_t
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Type Alias nnapi_sys::_bindgen_ty_1
===
```
pub type _bindgen_ty_1 = c_uint;
```
For {@link ANeuralNetworksModel_setOperandValue}, values with a length smaller or equal to this will be immediately copied into the model. The size is in bytes.
Available since API level 27.
Type Alias nnapi_sys::_bindgen_ty_2
===
```
pub type _bindgen_ty_2 = c_uint;
```
For {@link ANeuralNetworksCompilation_setCaching}, specify the size of the cache token required from the application. The size is in bytes.
Available since API level 29.
Union nnapi_sys::__mbstate_t
===
```
#[repr(C)]
pub union __mbstate_t {
pub __mbstate8: [c_char; 128],
pub _mbstateL: c_longlong,
}
```
Fields
---
`__mbstate8: [c_char; 128]`
`_mbstateL: c_longlong`
Trait Implementations
---
### impl Clone for __mbstate_t
#### fn clone(&self) -> __mbstate_t
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Auto Trait Implementations
---
### impl RefUnwindSafe for __mbstate_t
### impl Send for __mbstate_t
### impl Sync for __mbstate_t
### impl Unpin for __mbstate_t
### impl UnwindSafe for __mbstate_t
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Package ‘SDaA’
October 12, 2022
Type Package
Title Sampling: Design and Analysis
Version 0.1-5
Date 2022-04-11
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Functions and Datasets from Lohr, S. (1999), Sampling:
Design and Analysis, Duxbury.
Suggests survey, ggplot2 (>= 0.8.2)
License GPL-3
LazyData Yes
Collate 'agpop.R' 'agsrs.R' 'agstrat.R' 'anthrop.R' 'anthsrs.R'
'anthuneq.R' 'audit.R' 'books.R' 'certify.R' 'coots.R'
'counties.R' 'divorce.R' 'golfsrs.R' 'htpop.R' 'htsrs.R'
'htstrat.R' 'journal.R' 'lahiri.design.R' 'measles.R' 'ncvs.R'
'nybight.R' 'otters.R' 'ozone.R' 'samples.R' 'seals.R'
'selectrs.R' 'statepop.R' 'statepps.R' 'syc.R' 'teachers.R'
'teachmi.R' 'teachnr.R' 'winter.R'
Encoding UTF-8
Repository CRAN
RoxygenNote 7.1.2
NeedsCompilation no
Date/Publication 2022-04-11 16:02:30 UTC
R topics documented:
agpop . . . 2
agsrs . . . 3
agstrat . . . 4
anthrop . . . 5
anthsrs . . . 6
anthuneq . . . 7
audit . . . 7
books . . . 8
certify . . . 9
coots . . . 10
counties . . . 11
divorce . . . 12
golfsrs . . . 13
htpop . . . 14
htsrs . . . 14
htstrat . . . 15
journal . . . 15
lahiri.design . . . 16
measles . . . 16
ncvs . . . 17
nybight . . . 18
otters . . . 19
ozone . . . 20
samples . . . 21
seals . . . 22
selectrs . . . 22
statepop . . . 23
statepps . . . 24
syc . . . 24
teachers . . . 26
teachmi . . . 27
teachnr . . . 27
winter . . . 28
agpop Data from the U.S. 1992 Census of Agriculture
Description
Data from the U.S. 1992 Census of Agriculture
Usage
agpop
Format
Data frame with the following 15 variables:
county county name
state state abbreviation
acres92 number of acres devoted to farms, 1992
acres87 number of acres devoted to farms, 1987
acres82 number of acres devoted to farms, 1982
farms92 number of farms, 1992
farms87 number of farms, 1987
farms82 number of farms, 1982
largef92 number of farms with 1000 acres or more, 1992
largef87 number of farms with 1000 acres or more, 1987
largef82 number of farms with 1000 acres or more, 1982
smallf92 number of farms with 9 acres or fewer, 1992
smallf87 number of farms with 9 acres or fewer, 1987
smallf82 number of farms with 9 acres or fewer, 1982
region factor with levels S (south), W (west), NC (north central), NE (northeast)
Source
U.S. 1992 Census of Agriculture
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 437.
agsrs Data from a SRS of size 300 from the U.S. 1992 Census of Agriculture
Description
Data from a SRS of size 300 from the U.S. 1992 Census of Agriculture
Usage
agsrs
Format
Data frame with the following 14 variables:
county county name
state state abbreviation
acres92 number of acres devoted to farms, 1992
acres87 number of acres devoted to farms, 1987
acres82 number of acres devoted to farms, 1982
farms92 number of farms, 1992
farms87 number of farms, 1987
farms82 number of farms, 1982
largef92 number of farms with 1000 acres or more, 1992
largef87 number of farms with 1000 acres or more, 1987
largef82 number of farms with 1000 acres or more, 1982
smallf92 number of farms with 9 acres or fewer, 1992
smallf87 number of farms with 9 acres or fewer, 1987
smallf82 number of farms with 9 acres or fewer, 1982
Source
U.S. 1992 Census of Agriculture
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 437.
agstrat Data from a stratified random sample of size 300 from the U.S. 1992
Census of Agriculture.
Description
Data from a stratified random sample of size 300 from the U.S. 1992 Census of Agriculture.
Usage
agstrat
Format
Data frame with the following 17 variables:
county county name
state state abbreviation
acres92 number of acres devoted to farms, 1992
acres87 number of acres devoted to farms, 1987
acres82 number of acres devoted to farms, 1982
farms92 number of farms, 1992
farms87 number of farms, 1987
farms82 number of farms, 1982
largef92 number of farms with 1000 acres or more, 1992
largef87 number of farms with 1000 acres or more, 1987
largef82 number of farms with 1000 acres or more, 1982
smallf92 number of farms with 9 acres or fewer, 1992
smallf87 number of farms with 9 acres or fewer, 1987
smallf82 number of farms with 9 acres or fewer, 1982
region factor with levels S (south), W (west), NC (north central), NE (northeast)
rn random numbers used to select sample in each stratum
weight sampling weight for each county in sample
Source
U.S. 1992 Census of Agriculture
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 437.
anthrop Length of Left Middle Finger and Height for 3000 Criminals
Description
Length of left middle finger and height for 3000 criminals
Usage
anthrop
Format
Data frame with the following 2 variables:
finger length of left middle finger (cm)
height height (inches)
Source
Macdonell, <NAME>. (1901). On criminal anthropometry and the identification of criminals, Biometrika,
1: 177–227.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 438.
anthsrs Length of Left Middle Finger and Height for an SRS of Size 200
Description
Length of left middle finger and height for an SRS of 200 criminals from the anthrop dataset
Usage
anthsrs
Format
Data frame with the following 2 variables:
finger length of left middle finger (cm)
height height (inches)
Source
<NAME>. (1901). On criminal anthropometry and the identification of criminals, Biometrika,
1: 177–227.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 438.
anthuneq Length of Left Middle Finger and Height for an Unequal-Probability
Sample of Size 200
Description
Length of left middle finger and height for an unequal-probability sample of criminals of size 200
from the anthrop dataset. The probability of selection, psi[i], was proportional to 24 for y < 65, 12
for y = 65, 2 for y = 66 or 67, and 1 for y > 67.
Usage
anthuneq
Format
Data frame with the following 3 variables:
finger length of left middle finger (cm)
height height (inches)
prob probability of selection
Source
<NAME>. (1901). On criminal anthropometry and the identification of criminals, Biometrika,
1: 177–227.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 438.
audit Selection of Accounts for Audit in Example 6.11
Description
Selection of Accounts for Audit in Example 6.11
Usage
audit
Format
Data frame with the following 6 variables:
account audit unit
bookval book value of account
cumbv cumulative book value
rn1 random number 1 selecting account
rn2 random number 2 selecting account
rn3 random number 3 selecting account
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 439.
books Data from Home Owner’s Survey on Total Number of Books
Description
Data from home owner’s survey on total number of books
Usage
books
Format
Data frame with the following 4 variables:
shelf shelf number
number number of the book selected
purchase purchase cost of the book
replace replacement cost of book
Note
Used in Exercise 6 of Chapter 5.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 439.
certify Data from the 1994 Survey of ASA Membership on Certification
Description
Data from the 1994 Survey of ASA Membership on Certification
Usage
certify
Format
Data frame with the following 11 variables:
certify should the ASA develop some form of certification? factor with levels yes, possibly,
noopinion, unlikely and no
approve would you approve of a certification program similar to that described in the July 1993
issue of Amstat News? factor with levels yes, possibly, noopinion, unlikely and no
speccert Should there be specific certification programs for statistics subdisciplines? factor with
levels yes, possibly, noopinion, unlikely and no
wouldyou If the ASA developed a certification program, would you attempt to become certified?
factor with levels yes, possibly, noopinion, unlikely and no
recert If the ASA offered certification, should recertification be required every several years? factor with levels yes, possibly, noopinion, unlikely and no
subdisc Major subdiscipline; factor with levels BA (Bayesian), BE (business and economic), BI
(biometrics), BP (biopharmaceutical), CM (computing), EN (environment), EP (epidemiology),
GV (government), MR (marketing), PE (physical and engineering), QP (quality and productivity),
SE (statistical education), SG (statistical graphics), SP (sports), SR (survey research), SS (social
statistics), TH (teaching statistics in health sciences), O (other)
college Highest collegiate degree; factor with levels B (BS or BA), M (MS), N (none), P (PhD) and
O (other)
employ Employment status; factor with levels E (employed), I (in school), R (retired), S (self-employed), U (unemployed) and O (other)
workenv Primary work environment; factor with levels A (academia), G (government), I (industry),
O (other)
workact Primary work activity; factor with levels C (consultant), E (educator), P (practitioner), R
(researcher), S (student) and O (other)
yearsmem For how many years have you been a member of ASA?
Note
The full dataset is available on StatLib.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 439. http://lib.stat.cmu.edu/asacert/certsurvey
coots Egg Size from Coots
Description
Selected information on egg size from coots, from a study by Arnold (1991). Data courtesy of Todd
Arnold.
Usage
coots
Format
Data frame with the following 6 variables:
clutch clutch number from which eggs were subsampled
csize number of eggs in clutch (Mi)
length length of egg (mm)
breadth maximum breadth of egg (mm)
volume calculated as 0.00507 x length x breadth^2
tmt received supplemental feeding? factor with levels no and yes
Note
Not all observations are used for this data set, so results may not agree with those in Arnold (1991)
Source
Arnold, T.W. (1991). Intraclutch variation in egg size of American Coots, The Condor, 93: 19–27
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 440.
counties Data from an SRS of 100 of the 3141 Counties in the U.S.
Description
Data from an SRS of 100 of the 3141 Counties in the U.S.
Usage
counties
Format
Data frame with the following 18 variables:
RN random number used to select the county
state state (two-letter abbreviation)
county county
landarea land area, 1990 (square miles)
totpop total population, 1992
physician active nonfederal physicians on Jan. 1, 1990
enroll school enrollment in elementary or high school, 1990
percpub percent of school enrollment in public schools
civlabor civilian labor force, 1991
unemp number unemployed, 1991
farmpop farm population, 1990
numfarm number of farms, 1987
farmacre acreage in farms, 1987
fedgrant total expenditures in federal funds and grants, 1992 (millions of dollars)
fedciv civilians employed by federal government, 1990
milit military personnel, 1990
veterans number of veterans, 1990
percviet percentage of veterans from Vietnam era, 1990
Source
U.S. Bureau of Census, 1994
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 440.
divorce Data from a Sample of Divorce Records
Description
Data from a sample of divorce records for states in the Divorce Registration Area (National Center
for Health Statistics 1987)
Usage
divorce
Format
Data frame with the following 20 variables:
state state name
abbrev state abbreviation
samprate sampling rate for state
numrecs number of records sampled in state
hsblt20 number of records in sample with husband’s age < 20
hsb2024 number of records with 20 <= husband’s age <= 24
hsb2529 number of records with 25 <= husband’s age <= 29
hsb3034 number of records with 30 <= husband’s age <= 34
hsb3539 number of records with 35 <= husband’s age <= 39
hsb4044 number of records with 40 <= husband’s age <= 44
hsb4549 number of records with 45 <= husband’s age <= 49
hsbge50 number of records with husband’s age >= 50
wflt20 number of records in sample with wife’s age < 20
wf2024 number of records with 20 <= wife’s age <= 24
wf2529 number of records with 25 <= wife’s age <= 29
wf3034 number of records with 30 <= wife’s age <= 34
wf3539 number of records with 35 <= wife’s age <= 39
wf4044 number of records with 40 <= wife’s age <= 44
wf4549 number of records with 45 <= wife’s age <= 49
wfge50 number of records with wife’s age >= 50
Source
National Center of Health Statistics (1987). TODO
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 440.
golfsrs Simple Random Sample of Golf Courses
Description
Simple Random Sample (SRS) of 120 golf courses taken from the population of the (now defunct)
Website www.golfcourse.com
Usage
golfsrs
Format
Data frame with the following 16 variables:
RN random number used to select golf course for sample
state state name
holes number of holes
type type of course; factor with levels priv (private), semi (semi-private), pub (public), mili
(military) and res (resort)
yearblt year the course was built
wkday18 greens fee for 18 holes during week
wkday9 greens fee for 9 holes during week
wkend18 greens fee for 18 holes on weekend
wkend9 greens fee for 9 holes on weekend
backtee back-tee yardage
rating course rating
par par for course
cart18 golf cart rental fee for 18 holes
cart9 golf cart rental fee for 9 holes
caddy Are caddies available? factor with levels yes and no
pro Is a golf pro available? factor with levels yes and no
Source
The now defunct website golfcourse.com (https://web.archive.org/web/19991108203827/http://golfcourse.com/)
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and TODO.
htpop Height and gender of 2000 persons in an artificial population
Description
Height and gender of 2000 persons in an artificial population
Usage
htpop
Format
height height of person, cm
gender factor with levels F (female) and M (male)
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. 230–234 and 441.
htsrs Height and gender for an SRS of 200 persons, taken from htpop
Description
Height and gender for an SRS of 200 persons, taken from htpop
Usage
htsrs
Format
rn random number used to select the unit
height height of person, cm
gender factor with levels F (female) and M (male)
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. 230–234 and 442.
htstrat Height and gender for a stratified random sample from htpop
Description
Height and gender for a stratified random sample of 160 women and 40 men taken from the htpop
population
Usage
htstrat
Format
rn random number used to select the unit
height height of person, cm
gender factor with levels F (female) and M (male)
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. 230–234 and 442.
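A stratified estimate can be sketched with the survey package (listed in Suggests); this is an illustrative example, with the stratum population sizes taken from htpop, so both datasets are assumed to be loaded from SDaA:

```r
library(SDaA)    # provides htpop and htstrat
library(survey)

Nh <- table(htpop$gender)     # population size per stratum
nh <- table(htstrat$gender)   # sample size per stratum
g  <- as.character(htstrat$gender)
htstrat$fpc <- as.numeric(Nh[g])                     # finite population correction
htstrat$wt  <- as.numeric(Nh[g]) / as.numeric(nh[g]) # sampling weight N_h / n_h

des <- svydesign(ids = ~1, strata = ~gender, weights = ~wt,
                 fpc = ~fpc, data = htstrat)
svymean(~height, des)   # stratified estimate of mean height with its SE
```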
journal Types of Sampling Used for Articles in a Sample of Journals
Description
Types of Sampling Used for Articles in a Sample of Journals
Usage
journal
Format
Data frame with the following 3 variables:
numemp number of articles in 1988 that used sampling
prob number of articles that used probability sampling
nonprob number of articles that used nonprobability sampling
Source
Jacoby and Handlin (1991). TODO
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 442.
lahiri.design Draw Samples Using Lahiri’s Method
Description
Draw Samples Using Lahiri’s Method
Usage
lahiri.design(relsize, n, clnames = seq(along = relsize))
Arguments
relsize vector of relative sizes of population PSUs
n desired sample size
clnames vector of PSU names for population
Value
clusters vector of n PSUs selected with replacement and with probability proportional to relsize
Note
Original code from Lohr (1999), p. 452 – 453.
Author(s)
<NAME>, slightly modified by <NAME>
References
<NAME>. (1951). A method of sample selection providing unbiased ratio estimates, Bulletin of
the International Statistical Institute, 33: 133 – 140.
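The acceptance/rejection logic behind Lahiri's method can be sketched in base R as follows (an illustrative reimplementation, not the package's code): draw a candidate PSU uniformly, then accept it with probability proportional to its relative size.

```r
# Illustrative sketch of Lahiri's method (PPS sampling with replacement).
lahiri_sketch <- function(relsize, n, clnames = seq_along(relsize)) {
  N <- length(relsize)
  maxsize <- max(relsize)
  picked <- clnames[0]
  while (length(picked) < n) {
    i <- sample.int(N, 1)        # candidate PSU, drawn uniformly
    u <- runif(1, 0, maxsize)    # uniform draw on (0, max relative size)
    if (u <= relsize[i]) {       # accept with probability relsize[i]/maxsize
      picked <- c(picked, clnames[i])
    }
  }
  picked
}

set.seed(1)
lahiri_sketch(relsize = c(5, 1, 4), n = 4, clnames = c("A", "B", "C"))
```

Each accepted PSU is selected with probability proportional to its entry in relsize, which is what sampling with probability proportional to size requires.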
measles Survey of Parents of Children Non-Immunized against Measles
Description
Roberts et al. (1995) report on the results of a survey of parents whose children had not been
immunized against measles during a recent campaign to immunize all children in the first five years
of secondary school.
Usage
measles
Format
Data frame with 11 variables. A parent who refused consent (variable 4) was asked why, with
responses in variables 5-10. A parent could give more than one reason for not having the child
immunized.
school school attended by child
form parent received consent form
returnf parent returned consent form
consent parent gave consent for measles immunization
hadmeas child had already had measles
previmm child had been immunized against measles
sideeff parent concerned about side effects
gp parent wanted GP (general practitioner) to give vaccine
noshot child did not want injection
notser parent thought measles not serious illness
gpadv GP advised that vaccine was not needed
Note
The original data were unavailable; univariate and multivariate summary statistics from these artificial data, however, are consistent with those in the paper.
Source
<NAME>. et al. (1995). Reasons for non-uptake of measles, mumps, and rubella catch up
immunisation in a measles epidemic and side effects of the vaccine, British Medical Journal, 310,
1629–1632.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 442.
ncvs Victimization Incidents in the July-December 1989 NCVS
Description
Selected variables for victimization incidents in the July-December 1989 NCVS. Note that some
variables were recoded from the original data file.
Usage
ncvs
Format
Data frame with the following seven variables:
wt incident weight
sex factor with levels male and female
violent violent crime? factor with levels no and yes
injury did the victim have injuries? factor with levels no and yes
medcare factor with levels yes if the victim received medical care and no otherwise
reppol was the incident reported to the police? factor with levels yes and no
numoff number of offenders involved in crime; factor with levels one, more (more than one) and
dontknow
Source
Incident-level concatenated file, NCS8864I, in NCJ-130915, U.S. Department of Justice 1991.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 443.
nybight Data Collected in the New York Bight
Description
Data collected in the New York Bight for June 1974 and June 1975 (Wilk et al. 1977)
Usage
nybight
Format
Data frame with the following 7 variables:
year year
stratum stratum membership, based on depth
catchnum number of fish caught during trawl
catchwt total weight (kg) of fish caught during trawl
numspp number of species of fish caught during trawl
depth depth of station (m)
temp surface temperature (degrees Celsius)
Note
Two of the original strata were combined because of insufficient sample sizes.
Source
Wilk, S.J. et al. (1977). Fishes and associated environmental data collected in New York bight,
June 1974 - June 1975. NOAA Technical Report NMFS SSRF-716. Washington, D.C: Government
Printing Office.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 443.
otters Otters Data
Description
Data on number of holts (dens) in Shetland, United Kingdom used in Kruuk et al. (1989). (Data
courtesy of Hans Kruuk).
Usage
otters
Format
Data frame with the following three variables:
section coastline section
habitat type of habitat (stratum)
holts number of holts
Source
Kruuk, H.A. et al. (1989). An estimate of numbers and habitat preferences of otters Lutra lutra in
Shetland, UK., Biological Conservation, 49: 241–254.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 443.
ozone Ozone Readings from Eskdalemuir, for 1994 and 1995
Description
Hourly ozone readings in parts per billion (ppb) from Eskdalemuir, Scotland, for 1994 and 1995
Usage
ozone
Format
Data frame with the following 25 variables:
date date (day/month/year)
GMT1 ozone reading at 1:00 GMT
GMT2 ozone reading at 2:00 GMT
GMT3 ozone reading at 3:00 GMT
GMT4 ozone reading at 4:00 GMT
GMT5 ozone reading at 5:00 GMT
GMT6 ozone reading at 6:00 GMT
GMT7 ozone reading at 7:00 GMT
GMT8 ozone reading at 8:00 GMT
GMT9 ozone reading at 9:00 GMT
GMT10 ozone reading at 10:00 GMT
GMT11 ozone reading at 11:00 GMT
GMT12 ozone reading at 12:00 GMT
GMT13 ozone reading at 13:00 GMT
GMT14 ozone reading at 14:00 GMT
GMT15 ozone reading at 15:00 GMT
GMT16 ozone reading at 16:00 GMT
GMT17 ozone reading at 17:00 GMT
GMT18 ozone reading at 18:00 GMT
GMT19 ozone reading at 19:00 GMT
GMT20 ozone reading at 20:00 GMT
GMT21 ozone reading at 21:00 GMT
GMT22 ozone reading at 22:00 GMT
GMT23 ozone reading at 23:00 GMT
GMT24 ozone reading at 24:00 GMT
Source
Air Quality Information Centre: retrieved from a now defunct URL (http://www.aeat.co.uk)
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 443.
samples Samples Dataset
Description
All possible SRSs that can be generated from the population in Example 2.1 of Lohr(1999).
Usage
samples
Format
Data frame with the following 10 variables:
snum sample number
unit1 first unit in sample
unit2 second unit in sample
unit3 third unit in sample
unit4 fourth unit in sample
value1 value for first unit in sample
value2 value for second unit in sample
value3 value for third unit in sample
value4 value for fourth unit in sample
that t hat, i.e. estimate of the population total based on the given sample
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. 26–27 and 444.
seals Breathing Holes of Seals
Description
Data on number of breathing holes found in sampled areas of Svalbard fjords, reconstructed from
summary statistics given in Lydersen and Ryg (1991)
Usage
seals
Format
Data frame with the following 2 variables:
zone zone number for sampled area
holes number of breathing holes Imjak found in area
Note
The data are used in Chapter 4, Exercise 11.
Source
<NAME>. and <NAME>. (1991). Evaluating breeding habitat and populations of ringed seals
Phoca hispida in Svalbard fjords, Polar Record, 27: 223–228.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 444.
selectrs Steps used in Selecting an SRS
Description
Steps used in selecting the simple random sample (SRS) in Example 2.4 of Lohr(1999).
Usage
selectrs
Format
Data frame with the following 5 variables:
a random number generated between 0 and 1
b ceiling(3048*RN), with RN the random number in column a
c distinct values in column b
d new values generated to replace duplicates in b
e final set of distinct values to be used in sample
Note
The set of indices in column e was used to select observations from agpop into dataset agsrs.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. 31–34 and 444.
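The steps in columns a through e can be reproduced in base R roughly as follows (a sketch assuming a population of 3048 units and a target sample size of 300; modern code would simply call sample(3048, 300)):

```r
set.seed(42)
N <- 3048; n <- 300
rn  <- runif(n)           # column a: random numbers between 0 and 1
idx <- ceiling(N * rn)    # column b: ceiling(3048 * RN)
idx <- unique(idx)        # column c: keep distinct values only
while (length(idx) < n) { # column d: draw replacements for duplicates
  idx <- unique(c(idx, ceiling(N * runif(n - length(idx)))))
}
idx                       # column e: final set of n distinct indices
```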
statepop Unequal-Probability Sample of Counties in the US
Description
counties selected with probability proportional to 1992 population
Usage
statepop
Format
state state abbreviation
county county
landarea land area of county, 1990 (square miles)
popn population of county, 1992
phys number of physicians, 1990
farmpop farm population, 1990
numfarm number of farms, 1987
farmacre number of acres devoted to farming, 1987
veterans number of veterans, 1990
percviet percent of veterans from Vietnam era, 1990
Source
City and Counties Book, 1994
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. 190 – 192 and 444.
statepps Information on States
Description
Number of counties, land area, and population for the 50 states plus the District of Columbia
Usage
statepps
Format
Data frame with the following 7 variables:
state state name
counties number of counties in state
cumcount cumulative number of counties
landarea land area of state, 1990 (square miles)
cumland cumulative land area
popn population of state, 1992
cumpopn cumulative population
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 445.
syc Survey of Youth in Custody, 1987
Description
The 1987 Survey of Youth in Custody sampled juveniles and young adults in long-term, state-operated juvenile institutions. Residents of facilities at the end of 1987 were interviewed about family background, previous criminal history, and drug and alcohol use. Selected variables from the survey are contained in the syc data frame.
Usage
syc
Format
stratum stratum number
psu psu (facility) number
psusize number of eligible residents in psu
initwt initial weight
finalwt final weight
randgrp random group number
age age of resident
race race of resident: factor with levels 1 (white), 2 (black), 3 (Asian/Pacific Islander), 4 (American
Indian, Aleut, Eskimo), 5 (other)
ethnicty ethnicity; factor with levels hispanic and notHispanic
educ highest grade before sent to correctional institution; factor with levels 0 (never attended),
1-12 (highest grade attended), 13 (GED), 14 (other)
sex factor with levels male and female
livewith factor with levels 1 (mother only), 2 (father only), 3 (both mother and father), 4 (grandparents), 5 (other relatives), 6 (friends), 7 (foster home), 8 (agency or institution), 9 (someone else)
famtime Has anyone in your family, such as your mother, father, brother, sister, ever served time
in jail or prison? factor with levels yes and no
crimtype most serious crime in current offense; one of violent (e.g. murder, rape, robbery, assault), property (e.g. burglary, larceny, arson, fraud, motor vehicle theft), drug (drug possession or trafficking), publicorder (weapons violation, perjury, failure to appear in court), juvenile (juvenile-status offense, e.g. truancy, running away, incorrigible behavior)
everviol Ever put on probation or sent to correctional institution for violent offense? factor with
levels no and yes
numarr number of times arrested (integer)
probtn number of times on probation
corrinst number of times previously committed to correctional institution
evertime Prior to being sent here, did you ever serve time in a correctional institution? factor with
levels yes and no
prviol previously arrested for violent offense; factor with levels no and yes
prprop previously arrested for property offense; factor with levels no and yes
prdrug previously arrested for drug offense; factor with levels no and yes
prpub previously arrested for public-order offense; factor with levels no and yes
prjuv previously arrested for juvenile-status offense; factor with levels no and yes
agefirst age first arrested (integer)
usewepn Did you use a weapon... for this incident? factor with levels yes and no
alcuse Did you drink alcohol at all during the year before being sent here this time? factor with
levels yes, noduringyear, noatall
everdrug Ever used illegal drugs? factor with levels no, yes
Source
Inter-University Consortium on Political and Social Research, NCJ-130915, U.S. Department of
Justice 1989.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. 235–239 and 445.
teachers Elementary School Teacher Workload Data
Description
Selected variables from a study on elementary school teacher workload in Maricopa County, Arizona.
Usage
teachers
Format
data frame with the following 6 variables:
dist school district size; factor with levels large and me/sm (medium/small)
school school identifier
hrwork number of hours required to work at school per week
size class size
preprmin minutes spent per week in school on preparation
assist minutes per week that a teacher’s aide works with the teacher in the classroom
Note
The study is described in Exercise 16 of Chapter 15. The psu sizes are given in teachmi. The large
stratum had 245 schools; the small/medium stratum had 66 schools.
Source
Data courtesy of <NAME> (1995).
References
<NAME>. (1995). Teacher load in Arizona elementary school districts in Maricopa County. Ph.D.
diss., Arizona State University
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 446.
teachmi Cluster Sizes for Elementary School Teacher Workload Data
Description
Cluster sizes for the study on elementary school teacher workload in Maricopa County, Arizona.
Usage
teachmi
Format
data frame with the following 4 variables:
dist school district size; factor with levels large and me/sm (medium/small)
school school identifier
popteach number of teachers in that school
ssteach number of surveys returned from that school
Note
The study is described in Exercise 16 of Chapter 15. The actual data are given in teachers.
Source
Data courtesy of <NAME> (1995).
References
<NAME>. (1995). Teacher load in Arizona elementary school districts in Maricopa County. Ph.D.
diss., Arizona State University
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 446.
teachnr Follow-Up Study of Nonrespondents from Gnap (1995)
Description
Follow-up study of nonrespondents from the Gnap (1995) study on the workload of elementary
school teachers in Maricopa County, Arizona.
Usage
teachnr
Format
data frame with the following 4 variables:
hrwork number of hours required to work at school per week
size class size
preprmin minutes spent per week in school on preparation
assist minutes per week that a teacher’s aide works with the teacher in the classroom
Note
The study is described in Exercise 16 of Chapter 15. The actual data are given in teachers. Cluster size data for the original study are given in teachmi.
Source
Data courtesy of <NAME> (1995).
References
<NAME>. (1995). Teacher load in Arizona elementary school districts in Maricopa County. Ph.D.
diss., Arizona State University
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 446.
winter ASU Winter Closure Survey
Description
Selected variables from the Arizona State University Winter Closure Survey, taken in January 1995.
This survey was taken to investigate the attitudes and opinions of university employees toward the
closing of the university between December 25 and January 1.
Usage
winter
Format
data frame with the following 18 variables:
class stratum number; factor with levels faculty, classstaff (classified staff), admstaff (administrative staff) and acprof (academic professional)
yearasu factor with levels 1 (1-2 years), 2 (3-4 years), 3 (5-9 years), 4 (10-14 years) and 5 (15 or
more years)
vacation In the past, have you usually taken vacation days in the entire period between December
25 and January 1? factor with levels no and yes
work Did you work on campus during Winter Break Closure? factor with levels no and yes
havediff Did the Winter Break Closure cause you any difficulty/concerns? factor with levels no
and yes
negaeffe Did the Winter Break Closure negatively affect your work productivity? factor with levels
no and yes
ownsupp I was unable to obtain staff support in my department/office. factor with levels yes and
no
othersup I was unable to obtain staff support in other departments/offices. factor with levels yes
and no
utility I was unable to access computers, copy machine, etc. in my department/office. factor with
levels yes and no
environ I was unable to endure environmental conditions - e.g., not properly climatized. factor
with levels yes and no
uniserve I was unable to access university services necessary to my work; factor with levels yes
and no
workelse I was unable to work on my assignments because I work in another department/office;
factor with levels yes and no
offclose I was unable to work on my assignments because my office was closed; factor with levels
yes and no
treatsta compared to other departments/offices, I feel staff in my department/office were treated
fairly; factor with levels strongagr (strongly agree), agree, undecided, disagree, strdisagr
(strongly disagree)
treatme compared to other people working in my department/office, I feel I was treated fairly;
factor with levels strongagr (strongly agree), agree, undecided, disagree, strdisagr
(strongly disagree)
process How satisfied are you with the process used to inform staff about Winter Closure? factor
with levels verysat (very satisfied), satisfied, undecided, dissatisfied and verydissat
(very dissatisfied)
satbreak How satisfied are you with the fact that ASU had a Winter Break Closure this year? factor
with levels verysat (very satisfied), satisfied, undecided, dissatisfied and verydissat
(very dissatisfied)
breakaga Would you want to have Winter Break Closure again? factor with levels no and yes
Source
Courtesy of the ASU Office of University Evaluation.
References
Lohr (1999). Sampling: Design and Analysis, Duxbury, p. TODO and 447–448. |
r2d3 | cran | R | Package ‘r2d3’
October 14, 2022
Type Package
Title Interface to 'D3' Visualizations
Version 0.2.6
Description Suite of tools for using 'D3', a library for producing dynamic, interactive data
visualizations. Supports translating objects into 'D3' friendly data structures, rendering
'D3' scripts, publishing 'D3' visualizations, incorporating 'D3' in R Markdown, creating
interactive 'D3' applications with Shiny, and distributing 'D3' based 'htmlwidgets' in R
packages.
License BSD_3_clause + file LICENSE
Encoding UTF-8
Depends R (>= 3.1.2)
Imports htmlwidgets (>= 1.2), htmltools, jsonlite, rstudioapi
Suggests knitr, rmarkdown, R6, shiny, shinytest, testthat, webshot
RoxygenNote 7.1.2
URL https://rstudio.github.io/r2d3/, https://github.com/rstudio/r2d3
BugReports https://github.com/rstudio/r2d3/issues
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [aut],
<NAME> [aut],
<NAME> [ctb, cph] (d3.js library, https://d3js.org),
RStudio [cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-02-28 16:50:02 UTC
R topics documented:
as_d3_data
d3-shiny
html_dependencies_d3
r2d3
save_d3_html
save_d3_png
as_d3_data Convert object to D3 data
Description
Generic method to transform R objects into D3 friendly data.
Usage
as_d3_data(x, ...)
## Default S3 method:
as_d3_data(x, ...)
Arguments
x data
... Additional arguments for generic methods
Details
The value returned from as_d3_data() should be one of:
• An R data frame. In this case the HTMLWidgets.dataframeToD3() JavaScript function will
be called on the client to transform the data into D3 friendly (row-oriented) data; or
• A JSON object created using jsonlite::toJSON; or
• Any other R object which can be converted to JSON using jsonlite::toJSON.
d3-shiny Shiny bindings for d3
Description
Output and render functions for using d3 within Shiny applications and interactive Rmd documents.
Usage
d3Output(outputId, width = "100%", height = "400px")
renderD3(expr, env = parent.frame(), quoted = FALSE)
Arguments
outputId output variable to read from
width, height Must be a valid CSS unit (like '100%', '400px', 'auto') or a number, which
will be coerced to a string and have 'px' appended.
expr An expression that generates a d3
env The environment in which to evaluate expr.
quoted Is expr a quoted expression (with quote())? This is useful if you want to save
an expression in a variable.
html_dependencies_d3 D3 HTML dependencies
Description
Create HTML dependencies for D3 and optional extensions
Usage
html_dependencies_d3(version = c("6", "5", "4", "3"), extensions = NULL)
Arguments
version Major version of D3
extensions D3 extensions to include. Currently the only supported extension is "jetpack"
(https://github.com/gka/d3-jetpack).
Details
Create a list of HTML dependencies for D3. Each version has a distinct root D3 object, so it’s possible
to combine multiple versions of D3 on a single page. For example, D3 v5 is accessed via d3v5 and
D3 v4 via d3v4. Note, however, that D3 v3 is accessed simply via d3 (for compatibility
with existing htmlwidgets that use this form).
Note
This function is exported for use by htmlwidgets. If you are using the r2d3() function to include
D3 code within a document or application this dependency is included automatically so calling this
function is unnecessary.
Examples
library(r2d3)
r2d3(
data = c(0.3, 0.6, 0.8, 0.95, 0.40, 0.20),
script = system.file("examples/barchart.js", package = "r2d3"),
dependencies = "d3-jetpack"
)
r2d3 D3 visualization
Description
Visualize data using a custom D3 visualization script
Usage
r2d3(
data,
script,
css = "auto",
dependencies = NULL,
options = NULL,
d3_version = c("6", "5", "4", "3"),
container = "svg",
elementId = NULL,
width = NULL,
height = NULL,
sizing = default_sizing(),
viewer = c("internal", "external", "browser")
)
Arguments
data Data to be passed to D3 script.
script JavaScript file containing the D3 script.
css CSS file containing styles. The default value "auto" will use any CSS file located
alongside the script file with the same stem (e.g. "barplot.css" would be used for
"barplot.js") as well as any CSS file with the name "styles.css".
dependencies Additional HTML dependencies. These can take the form of paths to JavaScript
or CSS files, or alternatively can be fully specified dependencies created with
htmltools::htmlDependency.
options Options to be passed to D3 script.
d3_version Major D3 version to use, the latest minor version is automatically picked.
container The ’HTML’ container of the D3 output.
elementId Use an explicit element ID for the widget (rather than an automatically generated
one). Useful if you have other JavaScript that needs to explicitly discover and
interact with a specific widget instance.
width Desired width for output widget.
height Desired height for output widget.
sizing Widget sizing policy (see htmlwidgets::sizingPolicy).
viewer "internal" to use the RStudio internal viewer pane for output; "external" to display
in an external RStudio window; "browser" to display in an external browser.
Details
In order to scope CSS styles when multiple widgets are rendered, the Shadow DOM and the
webcomponents polyfill are used; this feature can be turned off by setting the r2d3.shadow option to
FALSE.
Examples
library(r2d3)
r2d3(
data = c(0.3, 0.6, 0.8, 0.95, 0.40, 0.20),
script = system.file("examples/barchart.js", package = "r2d3")
)
save_d3_html Save a D3 visualization as HTML
Description
Save a D3 visualization to an HTML file (e.g. for sharing with others).
Usage
save_d3_html(
d3,
file,
selfcontained = TRUE,
libdir = NULL,
background = "white",
title = "D3 Visualization",
knitrOptions = list()
)
Arguments
d3 D3 visualization to save
file File to save HTML into
selfcontained Whether to save the HTML as a single self-contained file (with external
resources base64 encoded) or a file with external resources placed in an adjacent
directory.
libdir Directory to copy HTML dependencies into (defaults to filename_files).
background Text string giving the html background color of the widget. Defaults to white.
title Text to use as the title of the generated page.
knitrOptions A list of knitr chunk options.
Details
Using selfcontained set to TRUE requires pandoc to be installed.
See Also
save_d3_png()
Examples
library(r2d3)
viz <- r2d3(
data = c(0.3, 0.6, 0.8, 0.95, 0.40, 0.20),
script = system.file("examples/barchart.js", package = "r2d3")
)
save_d3_html(
viz,
file = tempfile(fileext = ".html"),
selfcontained = FALSE
)
save_d3_png Save a D3 visualization as a PNG image
Description
Save a D3 visualization to PNG (e.g. for including in another document).
Usage
save_d3_png(
d3,
file,
background = "white",
width = 992,
height = 744,
delay = 0.2,
zoom = 1
)
Arguments
d3 D3 visualization to save
file File to save HTML into
background Text string giving the html background color of the widget. Defaults to white.
width Image width
height Image height
delay Time to wait before taking screenshot, in seconds. Sometimes a longer delay is
needed for all assets to display properly.
zoom A number specifying the zoom factor. A zoom factor of 2 will result in twice as
many pixels vertically and horizontally. Note that using 2 is not exactly the same
as taking a screenshot on a HiDPI (Retina) device: it is like increasing the zoom
to 200%, doubling the height and width of the browser window. This differs from
using a HiDPI device because some web pages load different, higher-resolution
images when they know they will be displayed on a HiDPI device (but using
zoom will not report that there is a HiDPI device).
Details
PNG versions of D3 visualizations are created by displaying them in an offscreen web browser and
taking a screenshot of the rendered web page.
Using the save_d3_png() function requires that you install the webshot package, as well as the
phantom.js headless browser (which you can install using the function webshot::install_phantomjs()).
See Also
save_d3_html() |
plug\_and\_play v0.7.0
API Reference
===
Modules
---
[Mix.Tasks.Server](Mix.Tasks.Server.html)
[PlugAndPlay](PlugAndPlay.html)
[PlugAndPlay.Application](PlugAndPlay.Application.html)
[PlugAndPlay.Router](PlugAndPlay.Router.html)
[PlugAndPlay.Supervisor](PlugAndPlay.Supervisor.html)
Mix.Tasks.Server
===
Summary
===
[Functions](#functions)
---
[run(args)](#run/1)
A task needs to implement `run` which receives a list of command line args
Functions
===
run(args)
A task needs to implement `run` which receives a list of command line args.
Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1).
PlugAndPlay
===
PlugAndPlay.Application
===
PlugAndPlay.Router
===
PlugAndPlay.Supervisor
===
Summary
===
[Functions](#functions)
---
[child\_spec(router, port \\ nil)](#child_spec/2)
[start\_link(router, port \\ nil)](#start_link/2)
Functions
===
child\_spec(router, port \\ nil)
start\_link(router, port \\ nil) |
## PHP Guide for Beginners
### What is this guide?
This guide is an introductory manual for web programming in the PHP language. Its goal is to provide complete information for anyone who wants to learn to build web applications with PHP. The manual is entirely in Romanian and covers PHP language concepts, explanations, practical examples and applications, spanning the entire learning cycle from the elementary level to the advanced one.
This guide is therefore a complete resource for learning PHP. By working through this course you will acquire the knowledge needed to understand and write PHP scripts of any level of complexity.
### Who is this guide for?
This guide is aimed at people who want to get started with web programming. It is important to note that this manual deals strictly with programming topics. Design elements (HTML, CSS, etc.), databases, and other web technologies are covered only where explaining them is necessary to understand the programming concepts presented. This guide will therefore be useful only to people directly interested in programming and in building dynamic web applications.
### How is the guide structured?
This guide takes the form of a web tutorial and consists of several lessons grouped by difficulty:
* Part I contains the introductory lessons, explaining in detail what PHP is and how it works
* Part II covers the basic elements of the PHP language (syntax, language constructs, common statements, etc.) as well as its core functionality (working with forms, dates and times, etc.)
* Part III covers the advanced features of the PHP language
* Part IV contains more complex practical applications, each consisting of a suite of PHP scripts
* Part V contains a few pages of additional information, not necessarily related to PHP programming, but useful to any web developer
As a rule, each lesson contains, alongside the concepts presented, explanations and source-code examples.
### Where do I start? And then what?
Ideally, with the first lesson. If you have never worked with web programming before, it is very important to understand how it works: how pages are served and all the details you will run into constantly as a web programmer. Do not skip any lesson in Part I. If you have already worked with another web technology and just want to get familiar with PHP, you can start with Part II.
You do not need to work through the entire tutorial — only as much as you need to understand how things fit together and to feel ready to start a small project on your own. That is, in fact, the purpose of this site: to give you the basics and small examples so that you can start working on a project of your own, picking up as much knowledge as possible along the way.
For that matter, reading this whole site is not sufficient either. Nor is any specialist book. Until you actually start working on something and writing PHP code, you will not truly learn.
So read the explanations, read the comments added by other people, try all the examples yourself, but do not linger too long. Think of a good starter project to work on by yourself: a link manager, a music/photo library, a guest book. Something simple that you can actually finish. Do not worry that these things have already been built by other people; it is important that you build them too, so you can acquire a taste for PHP programming.
### What do I need to know in order to learn PHP?
To work through this guide comfortably and understand the concepts presented, you need the following:
* minimal programming knowledge. If you do not understand terms like variable, statement, or compilation, you should probably first read an introductory book on the basics of programming.
* elementary HTML (what HTML means, the structure of an HTML document, how the browser output relates to what is written in the source, the common tags)
* elementary notions about the internet (how it works) and internet services (www, file transfer, etc.)
You do not need web development experience or detailed knowledge of web technologies. Everything you need to know will be explained step by step in this tutorial.
### What do I need in order to work with PHP?
To start programming in the PHP language, you need the following:
* a web server with the PHP interpreter available. The simplest option is to install a web server yourself, on your own computer. Details in the "Installing PHP" lesson.
* a text editor. Details on the "PHP Editors" page.
* to save every script you create in the special location called the Document Root. This is just a warning, to draw your attention to the fact that the files must be placed in a specific directory from which the web server can pick them up (more details in the "Installing PHP" lesson).
* a web browser (Firefox, Internet Explorer, Chrome, Opera, Safari, etc.)
* a computer (laptop or desktop)
### Where can I find other PHP information, courses or tutorials?
Other PHP resources are available at the following locations:
Thank you for this course! It's absolutely excellent! You got me out of a jam :) Do you have anything on MySQL and databases? I can't manage the configuration and connection at all.
Could you also add pictures to the tutorial?
The purpose of this site is to get people to try programming in PHP themselves right from the first lessons. Only that way do you run into situations that push you to persevere and dig deeper into the subjects. A picture that hands you everything on a plate is not for those who want to learn (perhaps only for those who just want to "read" about PHP). In any case, the site has links labeled "Test" pointing to a site that lets you view the result of running the PHP code in real time. Better yet, you can try variations of the source code there. I like how it's structured, and the explanations are well made! Congratulations on the effort! Don't add pictures — some people can read from any book and will just tell you what's written after the pictures!
The tutorial is complete.
Date: 2010-04-25
## Basic Concepts
### What does "server" mean?
Everyone probably knows what the Internet is: that huge collection of devices linked together for the purpose of exchanging information or providing services. Greatly simplified, the internet can be seen as a network of computers in which each node (computer) hosts information or services that can be accessed by the general public. These computers on the internet are called servers.
Broadly speaking, a server is a device (a combination of hardware and software) that provides services and/or information to users (clients).
The notion of a server brings with it the notion of client-server architecture, which refers to an arrangement of a server device (information provider) and a client device (computer), connected over a network and exchanging information. The most common example of this arrangement is network file sharing. For instance, one computer contains a shared folder and another computer copies that folder over the network. In this situation, the computer offering the folder acts as the server and the one retrieving it as the client. The client/server role is not fixed in this scenario — either computer can take on either role at any time. The notion of server therefore applies whenever a device on the network (a computer) provides information or services.
### Web server
A particular type of server is the web server. A web server is a system that hosts and serves web pages over a network. Most of the time, the term "web server" refers to an application — a program running on the server computer that is responsible for receiving requests from users and transferring web pages to them.
### What does "site" mean?
The pages stored on a server computer and offered to the general public are grouped under the generic name of a site. A site (also called a website) is therefore a collection of interconnected web pages stored on a web server.
### Serving pages
When a web server receives a request from a user for a page, it first checks that the page exists. If the page physically exists on the server, it is sent to the user. The pages returned by a web server are usually in HTML format. Web browsers can interpret HTML code and display the information in an easily readable form.
The image below depicts a typical exchange between a web server and a client.
In the real world, the pages served by a web server are most often modified before being sent to clients. There are cases where the requested pages do not even physically exist on the server computer, and they are nevertheless built and served on demand.
This is possible thanks to additional modules or applications that work together with the web server application. One of these modules is PHP.
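The request/response cycle described above can even be observed from PHP itself, by fetching a page programmatically. This is only a sketch: the URL is a placeholder for any page served by a running web server, and it assumes the default allow_url_fopen setting is enabled.

```php
<?php
// Request a page from a local web server and inspect the response.
// http://localhost/test.php is a placeholder; any reachable URL works.
$html = file_get_contents('http://localhost/test.php');

if ($html === false) {
    // No connection: the server is down or the URL is wrong.
    print "The request failed (is the web server running?)\n";
} else {
    // The server answered with HTML, exactly what a browser would receive.
    print $html;
}
?>
```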
I've read a bit of this tutorial. My question is: why not a course on programming itself? Maybe even flowcharts and pseudocode. It would be welcome. I looked you up online and saw that around 2005 you were admitted to university. :) This tutorial is useful and has big ambitions, but some essential matters are treated superficially and/or incorrectly.
I'd like to know where to create a database in EasyPHP — the server works, but I don't know where to start a database in this program, and does a database have a specific extension or does it also end in .php? Excuse such questions; I'm not dim, just new to this kind of thing, and everything I do, I do with the help of the net. Maybe you know an EasyPHP course specifically for databases. Thanks in advance. Ionut, Suceava
Until now I've only played around with HTML, and I decided to learn as much as I can about PHP; I came across your tutorial and I can say it's very useful! Keep it up!
"tutancamo", tell me too which university he attended... since it was clearly worth it ))))... I want to go there too :)
I can't open it in Explorer; I got a "server busy" window. Can you explain? Thank you.
I want to play a game and it won't let me; it keeps saying the web page has expired. Can you tell me what to do?
Date: 2009-08-30
## PHP and server-side programming
In short, what is PHP? As a general idea, PHP is a programming language that allows web pages to be modified before they are sent by the server to users' browsers.
PHP can generate HTML content based on existing files or from scratch, can output an image or any other web-accessible content, or can redirect the user to other pages. As part of this process, PHP can query databases, external files or other resources, send emails, or execute operating system commands. Because the processing takes place on the web server, before the pages reach the browser, PHP is considered a server-side programming language.
The way PHP generates content for a page to be displayed by the browser is through instructions delimited by the <?php and ?> tags. Anything between these tags is treated as source code: it is executed by the PHP interpreter and replaced with the result of that execution. Anything outside the tags is left unchanged and sent to the browser. For example, suppose we have a page like the one below.
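Such a page might look like the following — a minimal sketch of the kind of page described, with one short PHP snippet embedded in plain HTML:

```php
<html><body>
Today is <?php print date('d.m.Y'); ?>
</body></html>
```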
The result will be an HTML page containing the words "Today is" followed by the result of the PHP code (in this case, the current date). The final HTML code sent by the server after PHP processing is the following: `Today is 12.11.2023`
The PHP code does not necessarily have to be interleaved with the HTML page. The sequence below produces a page similar to the one above (in this case PHP builds an HTML page from scratch).
```
<?php
print '<html><body>';
print 'Today is ' . date('d.m.Y');
print '</body></html>';
?>
```
Note: the PHP interpreter is not installed by default on every computer. For all the examples on this site to work, you must install an interpreter (manually). See the "Installing PHP" section for details.
I don't understand a thing.
In the first case, <html><body><?php..., it doesn't work; in the second case, <?php print..., it works. What's wrong? Alex, both work fine for me. Check that you copied the code correctly. It's possible an error occurs but isn't displayed. As far as I know, EasyPHP is configured to hide PHP errors. You can right-click the EasyPHP icon next to the clock and choose Configuration -> PHP. A Notepad window opens, and somewhere around line 520 there should be a line of the form error_reporting = [something]. That something must be E_ALL (set it if it isn't), so that you have error_reporting = E_ALL. Careful: the line must NOT start with a semicolon (;), because that marks it as a comment and it will be ignored. A little further down there is another line to check: display_errors = On. Likewise, the line must not be commented out with a ; and the value must be "On". If it worked for you in the end, there's no point digging into the configuration at this stage. If you're curious, though, you can read the comments in that file (php.ini) — it has explanations for every directive in there.
It won't work because the file must have the .php extension — check that — or you don't have the right PHP version :D get XAMPP.
Note that the files must be placed in the htdocs directory if you use XAMPP, or www if you use AppServ (same for WAMP).
Is it normal that when I open localhost and "run" the .php file it also shows the rest of the written code with all the functions, not just the final result?
I'm 14 and I read about PHP because I'm interested and want to learn, and I got the "Today is..." example working on my site :) Thank you...
## Web programming
PHP is an interpreted programming language. This means that files containing PHP source code are interpreted as such at execution time, by PHP. So when a piece of PHP code is executed, the source code is used exactly as written and is not first transformed into an intermediate form (binary or machine code) as happens with Java or C/C++. This offers flexibility, since any change to the source files is applied immediately on the next run, with no intermediate steps. This way of working also has disadvantages, such as longer execution times, but in certain situations the advantages can outweigh the disadvantages. Because the language is interpreted, PHP is also called a scripting language.
In a broader sense, PHP is a universal (or general-purpose) programming language, offering all the facilities of any advanced language. Code written in PHP can do almost the same things as C/C++ or Java code. Even so, PHP has established itself in the web space as a server-side language that extends the functionality of web servers. For this reason, programming in PHP is also referred to as web programming or server-side web programming.
In this guide we will focus on programming in PHP as a server-side language. Although the concepts presented are not tied to a particular mode of use, and most examples can also be run from the command line, we will assume that PHP is used only for web programming, as a web server module.
In what follows, the workings of the PHP interpreter will be explained, along with how it fits into the process of delivering web pages.
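As noted above, most examples can also be run from the command line, with no web server involved. A minimal sketch (assuming the php binary is on your PATH; save as hello.php and run `php hello.php`):

```php
<?php
// A script run from the command line: the interpreter reads the
// source file and executes it directly, printing to the terminal.
print 'Hello from the PHP interpreter' . PHP_EOL;
print '2 + 3 = ' . (2 + 3) . PHP_EOL; // prints: 2 + 3 = 5
?>
```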
Congratulations on the methodical presentation!
## PHP and dynamic pages
The PHP interpreter acts as an additional component, an extension of the web server, invoked whenever a PHP page is accessed. This component processes the source code in the page and then sends the result back to the web server, finally reaching the users' browsers. This process is shown in the image below.
Serving a dynamic page, modified by PHP at request time
### Static and dynamic
The images above show that when there is no PHP interpreter, pages are sent to users exactly as stored on disk, without modification. Updating their content requires editing them directly and saving the changes on the server. Such pages are called "static pages".
For example, suppose we have a static page listing the members of a community. Every time a new person signed up, the page would have to be modified by hand by someone with access to the web server. Things get more complicated if that list is personalized, with links to further information (such as contact details for each member) or an elaborate design. All these problems can be solved with the help of PHP.
Using a piece of PHP code we could pull the member list from a database, eliminating the update problem: there is no longer any need to edit the page for each new member, the PHP script automatically displays the new people added to the database. The problems of personalized links and design are solved too — all the elements specific to a person can be generated automatically.
These pages are therefore modified by PHP at the moment users access them. Depending on the parameters received and the code written by the programmer, the same page can have different content. This property is called dynamism, and such a page is considered a dynamic page.
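The member-list scenario above can be sketched in a few lines. This is a simplified illustration: the member data is hard-coded here in place of a real database query, and profile.php is a hypothetical target page.

```php
<?php
// In a real application this array would come from a database query.
$members = ['Ana', 'Ion', 'Maria'];

print '<html><body><ul>';
foreach ($members as $member) {
    // Each member gets a personalized link, generated automatically;
    // urlencode/htmlspecialchars keep the output safe for URLs and HTML.
    print '<li><a href="profile.php?name=' . urlencode($member) . '">'
        . htmlspecialchars($member) . '</a></li>';
}
print '</ul></body></html>';
?>
```

Adding a new member now only means adding a row of data; the page itself never has to be edited.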
Hello, can you please help me with a situation: I built a site with a few HTML pages and now I'd like to add PHP too. Do I have to redo all the pages with the .php extension, or can I have separate .php files that link to the existing HTML ones? Thanks! Yes, a site can contain both HTML and PHP files. The HTML ones will be served as they are, static, and the PHP ones will be interpreted and their result sent.
I'm new to PHP, but from what I've learned, you can turn HTML into PHP; PHP into HTML you cannot. So you can convert all your pages to .php and step into the HTML lines wherever you need to. Correct me if I'm wrong. By the way, great site, keep up the good work, Alex!
Yes, you're right: a file with the .php extension can contain HTML code with PHP sequences here and there. It depends on the needs and style of each web programmer. Thanks for the kind words! I hope the lessons on the site will be useful to you.
Very useful lessons for those starting to learn PHP
Learn PHP on your own
Congratulations on this site! I've wanted to learn PHP5 for a long time
Date: 2009-03-05
## Preparing your computer for working with PHP. Installing PHP
The simplest way to work with PHP is to have it installed on your own computer. Your computer must therefore first become a web server. This is done by installing an application capable of accepting requests and sending back web pages in response. One such application is Apache HTTP Server. To this application we then "attach" the PHP interpreter, which steps into the page-serving process.
Note: if the way the PHP interpreter works is not clear, see the page "What is PHP?", which describes how it fits into the process of delivering web pages.
### Installing the PHP interpreter on GNU/Linux
If you use a GNU/Linux operating system, you may already have both the web server application (Apache HTTP Server, or httpd) and the PHP module installed, depending on the configuration chosen when the operating system was installed. If they are not already installed, you can easily add them either from the command line or from the application manager of the graphical environment you use (where they usually appear under the "development" category). Since there are notable differences between distributions, I will not detail the PHP installation process on Linux.
### Installing the PHP interpreter on Windows
To begin with, the simplest way to have everything ready for working with PHP under Windows is an "all in one" package. I will briefly describe the steps needed to install EasyPHP, an application that bundles the Apache web server, the PHP interpreter, the MySQL database management system, and the phpMyAdmin database administration tool.
* Download EasyPHP. Go to http://www.easyphp.org/ -> Download EasyPHP (or directly to http://www.easyphp.org/save-easyphp-latest.php)
* Install EasyPHP (keeping the default settings). Note: you must uninstall Apache, PHP or MySQL from your computer (if you already have them) before installing EasyPHP.
* Start EasyPHP (most of the time it starts automatically). Note: at the firewall warnings (the Windows XP/Vista/7 one or any other security application) you must choose "Allow" or "Unblock", otherwise the web server may not work.
* In the EasyPHP window press F8 (or right-click the EasyPHP icon next to the clock and choose Explore). A folder opens; this is the location files are served from when displayed in the browser (usually C:\Program Files\EasyPHP-12.0\www). This location is called the Document Root, and it is where all the .php files you write must go.
* Save a test file in the location above. Give it a suggestive name, e.g. test.php. Edit the file so that it contains the following sequence:
* In a browser go to http://localhost/ (or http://127.0.0.1/). A page is displayed with links to the accessible files/folders in the Document Root. Click test.php. Another way to access a file is to go directly to http://localhost/[path]/[name].php, for example http://localhost/test.php.
* Accessing http://localhost/test.php should display a blank page with the message above.
Remember! All the PHP files you write (including the test ones taken from this site) *must* be saved in the Document Root folder (the default location is C:\Program Files\EasyPHP-12.0\www). The local web server installed by EasyPHP looks for files only in this location. If the files are saved elsewhere, they will not be available.
Notes:
* On some Windows systems, EasyPHP configures the local web server differently, so that scripts are reachable at http://localhost:8888/, in which case that address must be used. In any case, regardless of configuration, the local server can be opened from the EasyPHP menu (right-click the EasyPHP icon next to the clock and choose the Local Web option).
* On most Windows systems, PHP files get saved with a double extension (file.php.txt) when Notepad is used. This is wrong, and the files must be renamed so that they have only the .php extension. To check file extensions, untick the "Hide extensions for known file types" option in Folder Options (in Control Panel).
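The test file mentioned in the steps above just needs to print a simple message. A minimal sketch of such a test.php (the message text is arbitrary):

```php
<?php
// Minimal test script: if PHP is wired into the web server correctly,
// the browser shows only the message below, not the source code.
print 'PHP works!';
?>
```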
I have Vista. I saved the script with the .php extension in the www folder and then tried to open http://localhost/, but I get the following error: "Failed to Connect - Firefox can't establish a connection to the server at localhost. Though the site seems valid, the browser was unable to establish a connection." (followed by the usual suggestions about the site being unavailable, the network connection, and firewall/proxy settings). I checked whether the traffic light is on, and it is. I really don't know what else to do. A little help, please.
Hi. On Vista you may need to run EasyPHP as administrator. Try that and see if it works. Also, EasyPHP must not be blocked by any firewall or antivirus.
You could also try accessing http://127.0.0.1/ instead of http://localhost/ (the two are equivalent, but you may not have the Windows configuration needed for localhost to work).
I tried 127.0.0.1 and it worked. But I don't understand why it only works by IP.
Normally, 127.0.0.1 is the actual address that points to the local server. The name "localhost" is just a mapping to this IP (the same way a URL like google.ro is mapped to a particular IP). On Windows, this mapping between the local IP 127.0.0.1 and the hostname "localhost" is made in the file C:\Windows\System32\drivers\etc\hosts. Besides the commented lines (which start with #), it must contain the following line: 127.0.0.1 localhost. That line may be missing on your machine, which is why your browser doesn't "know" where to go when it encounters "localhost". You just have to add it on a new line below everything else written in there. Note: you can use any name instead of localhost, if you want, and then access it in the browser, e.g. http://serverulmeu/. Another note: on Vista you must run Notepad as administrator to be able to modify the "hosts" file.
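As a sketch, the relevant lines of C:\Windows\System32\drivers\etc\hosts would look like this (the extra alias on the last line is only an illustration of the "any name" idea):

```
# lines starting with # are comments
127.0.0.1       localhost
# optional custom alias, reachable as http://serverulmeu/
127.0.0.1       serverulmeu
```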
When I try to Explore (F8), nothing happens; no window or application opens, just the Apache and MySQL window. PS: I have Win XP.
The EasyPHP window should be active (focused) when you press F8. If it still won't work, right-click the icon next to the clock in the System Tray (the black E) and choose "Explore F8" from the menu.
I don't understand how to save as php. You said to save as test.php but I don't get it. Y!M pah_bog, help me please.
Bogdan, you write the PHP code in a text editor (such as Notepad or Notepad++, or Dreamweaver, or any other). Then you save it with the ".php" extension (choose "File" -> "Save As", type the file name, test.php, choose "All files" as the Save as type and click the "Save" button). The file must be placed in the www folder where you installed EasyPHP.
Thanks for the guidance; I was going around in circles and couldn't understand why it wasn't working: the files weren't saved where they should be. Well written.
I like the very good introduction you wrote. You really saved me some research time. Keep walking!
I tried Explore or F8 in every way: with the window open, right-click in the System Tray and then Explore F8, from the application by clicking the black E and then Explore F8, and still nothing happens. I need some help pls. I have Win 7 installed.
I installed a newer version and everything worked, thanks; with the version you linked it doesn't work ;)
Hi, I'd need a bit of help. So, I install EasyPHP, create a text document, rename it test.php, and it doesn't work the way it should. Here are some screenshots: 1. http://img407.imageshack.us/img407/9248/64475952.jpg 2. http://img225.imageshack.us/img225/3759/88771263.jpg 3. http://img181.imageshack.us/img181/2849/13526371.jpg 4. http://img220.imageshack.us/img220/7553/93963155.jpg Where did I go wrong and what can I do? Thanks in advance.
I did as you said, and when I open localhost in the browser it shows: "Internet Explorer cannot display the webpage" (followed by the usual suggestions about connectivity, DNS, and working offline). When I right-click the icon and choose Local Web it works; the address is http://127.0.0.1:8888/.
Edit the httpd.conf file (C:\Program Files\EasyPHP-5.3.5.0\conf_files\httpd.conf) at the following place, in the section commented "Listen: Allows you to bind Apache to specific IP addresses and/or ports": below the example line "#Listen 12.34.56.78:80" you will find "Listen 127.0.0.1:8888"; there, replace 8888 with 80. You can open the configuration directly from the EasyPHP window: click the "e" icon, a menu opens -> configuration -> apache.
I tried accessing both http://localhost/ and http://127.0.0.1/ and neither works. I get this error: Oops! Google Chrome could not connect to 127.0.0.1. Can anyone help me with an answer? The program is running, I saved the test.php file correctly, the internet connection is fine. I don't understand what's wrong. A little help, please :) I really want to learn PHP :) I know some programming, but in C++.
EasyPHP must be running and, more than that, "Apache" must be running too. EasyPHP has a button where you can choose "Start", "Stop", and so on. Make sure Apache is running, otherwise http://localhost/ won't work.
EasyPHP is running when I try to open http://localhost/; both Apache and MySQL are started.
As Viorel also said: right-click the icon next to the clock, then Administration (Ctrl+A); in the LOCAL WEB section click ROOT and then the name of the file saved in www. Or, simpler: http://127.0.0.1:8888/numeFisier.php
I save in www and it won't open either with http://localhost/ or with http://127.0.0.1:8888/. What should I do?
Hi, if you have Windows 7, try accessing http://127.0.0.1/ (without port 8888). The localhost name may not be configured on certain editions of Windows 7 (such as Home). Make sure the firewall isn't blocking EasyPHP / Apache.
I saved the document as test.php.txt in the www folder, but when I access the page http://127.0.0.1/ or http://127.0.0.1:8888/ or localhost it says it doesn't exist.
The file must be saved as test.php (without the .txt extension). This is very important! Then access it at one of the addresses below: http://127.0.0.1/test.php http://127.0.0.1:8888/test.php If it doesn't work, read through the comments on the site; there is more information from other users who ran into the same thing.
I read Sebastian's comment and found that in my case, instead of 127.0.0.1:8888 it was 127.0.0.1:8080; now everything works fine!!! Thanks for the help.
How do I make what I have in www visible on the internet? When I search My IP on Google I get something like 35.236.7.487. How do I make any site or document copy-pasted into www visible on the internet? What settings do I need to make in Apache's httpd.conf? Which port do I need to forward? Thanks!
In principle, you need to configure Apache to accept connections (to "listen") on the LAN network interface. Besides Apache, a few other conditions must be met in order to reach the local server from outside: - the local firewall must allow incoming connections to your PC on port 80 (or the port specified in httpd.conf) - the ISP (the ISP's global firewall) must allow incoming connections - the router (if there is one) must port-forward to your local server - if you have a dynamic IP, the DHCP server (or the router) must not change your IP. If all these conditions are met, all that's left is to configure Apache. By default, Apache listens only on the loopback interface and accepts connections only on the IP 127.0.0.1 (localhost). To accept connections from outside (from the LAN), the Listen directive in httpd.conf must be changed, either to listen on all network interfaces (all IPs) or explicitly on a specific external IP. The simplest is to configure it to listen on all IPs, in which case httpd.conf contains the Listen directive with just the port: Listen 80. Apache must be restarted after changing httpd.conf, and that's it.
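The change described above is a one-line edit to the Listen directive in httpd.conf; a sketch showing the options mentioned (the example IP is a placeholder):

```apache
# default: accept connections only on loopback
#Listen 127.0.0.1:80

# alternative: listen explicitly on one external IP
#Listen 82.xx.xx.xx:80

# simplest: listen on all interfaces, port 80
Listen 80
```

Restart Apache after saving the file.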
Why does Chrome tell me it can't connect when I click localhost, and show the message "Oops! Google Chrome could not connect to localhost"?
There can be three causes: 1. The web server is not started (is EasyPHP running?) or it is blocked by a firewall. 2. The web server is configured to run on a port other than the standard one. Try http://localhost/ but also http://localhost:8888/. 3. Your Windows (or whatever operating system you have) doesn't know what localhost means. In that case you must access the local server by IP, not by name. Try http://127.0.0.1/ or http://127.0.0.1:8888/.
It works, but it took me a while to get it set up, and even now I don't know what I did to make it work :))
When I try to open it I get: "Oops! This link appears to be broken." I tried all the links... nothing...
If you have tried all the solutions in the comments above and it still doesn't work, try choosing the "Local Web" option from the EasyPHP menu (right-click the icon in the system tray).
Good evening, I also installed the executable EasyPHP-DevServer-13.1VC9.exe. I click it and nothing happens. Are the two dots in the executable's name correct? easyphp.log shows the following: 02/02 21:37:01 EasyPHP EasyPHP Servers starting 02/02 21:37:07 EasyPHP Apache CreateProcess "C:\PROGRA~1\EASYPH~1.1VC\binaries\apache\bin\apache.exe" 02/02 21:37:07 EasyPHP CreateProcess "C:\PROGRA~1\EASYPH~1.1VC\binaries\mysql\bin\mysqld.exe --defaults-file="C:\PROGRA~1\EASYPH~1.1VC\binaries\mysql\my.ini" --language=english" 02/02 21:37:12 EasyPHP EasyPHP Check version Thanks, Stela Toma
Hello, I resumed my investigation, and on the version I installed the Document Root location is: C:\Program Files\EasyPHP-DevServer-13.1VC9\data\localweb That worked. Can you tell me anything about this version of EasyPHP, or should I continue following the course on this site? Thanks, Stela Toma
Hi, that version of EasyPHP is fine. You can use it, as long as everything works. Just make sure all your scripts are saved in the folder that serves as document root ("localweb" in your case). I'm here if you run into other problems! Good luck!
Thanks a lot
Stupid people annoy me most in this world, but especially stupid people who act smart. Seriously, if someone can't access localhost and you don't know the answer, just keep quiet and stop posting exactly the same thing as the comment above... nothing but noise. But please, tell me how to configure Apache if localhost doesn't work for me. Those of you who don't know, please don't reply with what has already been said a thousand times above.
Radu, your question is so general that it resembles half of the other comments posted on this page. Can you explain exactly how "localhost doesn't work"? What do you expect it to do that it doesn't? What errors do you get? And, more importantly, did you try following the advice in the other comments? As an aside, learning in this field (web programming, and programming in general) happens ONLY through repeated attempts until you get the expected result (search the internet for "Trial and error" for more information). Good luck, and I await the details of your problem.
Hi, I have a problem. I have EasyPHP open (Apache and MySQL); when I access "localhost" in Chrome I immediately get a message saying "It works!". I created a test.php file with the script you wrote there, but when I access "localhost/test.php", also in Chrome, it says it can't connect to the server. I tried with the IP too, and with IP plus port, and it still doesn't work. Please help me out, I want to start learning! Thanks in advance!
Well done, Alin! I'm glad you managed to solve the problem yourself. Indeed, in certain situations EasyPHP configures Apache on a port other than the default one (almost certainly because port 80 is already used by another application, or even by another instance of Apache). In your case it's 8887, but I've also seen cases where the web server was configured on port 8888 or 8880. Good luck!
I made that test.php file, and when I tried to open it via localhost I get a blank page and I don't know why.
Hi, I have an installation problem. I download EasyPHP, and when I try to install it it says this: http://prntscr.com/3ek7hn What could the problem be?
Hi. First of all, I want to thank you for this site. It contains very useful, simple and concise information, and above all free, which is appreciated. To get to the point: after I installed EasyPHP 14.1, I got a message like: "The Program can't start because MSVCR110.dll is missing from your computer. Try reinstalling the program to fix this problem." I did a bit of research and installed Microsoft Visual C++ Redistributable 2012 from the Microsoft site. But same thing. (It also shows "Unexpected end of Apache".) Any suggestions? Thanks
Hi Lavinia, all I can suggest is to uninstall EasyPHP (from Control Panel) and then reinstall it (after installing the Microsoft DLL). Pay close attention to which variant of the EasyPHP installer you choose (VC11 or VC9); it must be compatible with the installed Microsoft runtime and with your operating system. Good luck!
Alexandru, thanks for the advice. I did a clean boot... and then installed VC9... and it works. But the .php.txt files work, while when I change the extension and leave just .php it gives me an error.....
- I know XAMPP is used more; its portable version can also be used.
Everything went by the book. Thank you!
Hi Alexandru, neither http://localhost/ nor http://127.0.0.1/ works, and I've tried every variant: I turned off Windows Firewall and the antivirus, I looked for httpd.conf. I also had the DLL problem Lavinia had... I uninstalled and reinstalled. I tried both VC11 and VC9; from what I understood from their site, both are fine for Windows 7. No difference either way. Any ideas what else I could try? P.S. Thanks for this tutorial.
Same problem as Doru, with Win7. I can't get EasyPHP to run. Any ideas, theories? Thanks!
It was hard, but I managed; the comments are very useful. Thanks.
Hello. I installed PHP and did everything that was asked, but I don't know if this is how it's supposed to open. I use Notepad++ and have Windows 7 Ultimate. To open the blank page with the test, I have to open Notepad++ and choose Run, Launch in Chrome, and it shows everything I wrote. Am I doing this right? Because I tried everything: localhost, 127.0.0.1 and so on. Please tell me, I really want to learn!
## Why do I need a web server?
By installing the EasyPHP package, your personal computer becomes a web server (a local one, admittedly, accessible only to you). In practice, the PC behaves like a site with the address http://localhost/ (or http://127.0.0.1/). It is very important that PHP files are accessed through the web server, because the server recognizes PHP scripts and automatically invokes the PHP interpreter. Without this intermediation provided by the web server, PHP scripts would not be processed but sent to the browser as-is.
For example, say we have a file containing only the following code (note the ! sign outside the tags):
`<?php print "Salut"; ?>!` Suppose the file is named salut.php. - If we access it in the browser using the address "http://localhost/salut.php" (through the local web server), then *the PHP interpreter is invoked*, the script is executed, and the processing result displayed by the browser is "Salut!". - If we access the file directly from disk, typing its path in the browser, "file:///C:/Program Files/EasyPHP-12.0/www/salut.php", then (at best) only "!" is displayed. That's because the PHP interpreter *is not invoked*, and the file is transmitted as-is, with the PHP code sequence uninterpreted (use view-source in the browser to check).
Hi, when I access http://localhost/ it doesn't take me to the file I want. It gives an error. Why? * I have never studied computer science, but I have an iron will and I hope to learn. :) I absolutely must learn to work with this program. Thanks!
Hi, first of all you must make sure EasyPHP is running and the web server is started. The web server is called "Apache", and in the EasyPHP program there must be a green traffic light next to it (as in this image: http://php.punctsivirgula.ro/public/resurse/easyphp.jpg). If the light isn't green, something is preventing the web server from starting (in that case, make sure EasyPHP isn't blocked by Windows or by some firewall). You can try starting it by clicking the "Apache" button and choosing "Start". If the web server is running, then you probably didn't save your file correctly. Open your Document Root (normally Document Root is the folder "C:/Program Files/EasyPHP 3.0/www/", but it depends on where EasyPHP is installed). Make sure the file you saved (salut.php) exists in this folder. It's important that it has the .php extension, otherwise it won't be recognized by the web server.
Finally, access the file directly by going to http://localhost/salut.php If it still doesn't work, please post the error that is displayed; that way we can better figure out where the problem is. Hope this helps!
Object not found! The requested URL was not found on this server. If you entered the URL manually please check your spelling and try again. If you think this is a server error, please contact the webmaster. Error 404 localhost 03/07/09 23:55:41 Apache/2.2.11 (Win32) PHP/5.2.8 That's what it shows me when I go to http://localhost/salut.php
Hmm, then the file is almost certainly not saved in the right place. To be reachable by the web server, scripts must be placed in a specific location (they cannot be saved, for instance, in My Documents). To check, it's best to go straight to http://localhost/, where a list of the files in that location should be displayed (the list is generated automatically by EasyPHP). If the file name (salut.php) doesn't appear in that list, it wasn't saved in the right place. If it does appear, a click on it is enough. Let me know if you manage!
I followed all the steps and it doesn't display just "Salut!"; it shows everything I wrote. Why?
What you describe is odd, especially if you installed EasyPHP. Normally, the problem you describe appears when you have a web server but not the PHP module.
If that is the case (you installed the web server separately), check whether the PHP interpreter is installed and configured (details in English here, in the "Installing PHP" section: http://adrian.lastdot.org/tutorial-install-apache-php-mysql ) If you installed EasyPHP (which I recommend for beginners), try going straight to http://localhost/ If you see nothing on that page, there are problems with the web server. In that case I recommend uninstalling EasyPHP and starting over, or possibly trying a separate installation of the web server and PHP as explained in the tutorial mentioned above. Note: your scripts must have the .php extension. If it's something like "salut.txt" or "salut.php.txt", you will get exactly what you described (all the code appears without being interpreted).
Is installing just EasyPHP enough? I have nothing installed for PHP, like Apache or MySQL... do those two (Apache and MySQL) really have to be installed as well?
It depends on what you want to do. For example, PHP offers the possibility of running scripts from the command line (you open Command Prompt and type "php script.php", and the script's output is shown on screen). Although this feature is used less often, it exists, and to use it you don't need Apache, just PHP installed separately. If instead you want to access dynamic PHP pages in a browser, you must have a web server (Apache). As I said before, you can install Apache and PHP separately and then configure them, but it's simpler (and headache-free) to use EasyPHP. EasyPHP is a package containing Apache+PHP configured to work together, applications that are all you need to use and work with PHP. In addition, EasyPHP also includes MySQL, a system for working with databases (there are no lessons on the site yet about how to use MySQL, but there will be soon). So, EasyPHP is *sufficient*: once installed, you don't need anything else on your computer.
Thanks a lot for the explanation :) I owe you one... I wanted to make a game in PHP, that's why I asked :)
Hi. A good alternative is WAMP. I work with it frequently; it has an installer that does all the work and it's easy to use. Congratulations on the site. If you can keep developing it, that would be great. Good luck, and I'm waiting for new "lessons".
Thanks a lot for this PHP documentation, and continued success with the site. I hope it "grows" as fast and as well as possible!
If the browser shows you all the PHP code, go to Tools in a folder window, then Folder Options -> View, and untick the option Hide extensions for known file types. Good luck!
Thank you! Thank you! Thank you! For more than a week I've been looking for a simple, exact and complete explanation to guide me through running a simple PHP file, and it gave me headaches... I installed who knows how many versions of PHP, and each one gave me a different solution for configuration, deployment and so on. I didn't understand instructions like "copy files from here to there; if folder x doesn't exist, it must be created and x copied there"... "one program must be configured with another", and I don't know why the archives weren't built to contain the mandatory folders, or why the installer didn't also copy the files into the Windows folder and I had to do it myself, and it still didn't work... Now I can say I've taken the first step!
This message probably serves no purpose, but I want to thank you for the first step! It's the first tutorial I actually managed to get something working with...! Congratulations and good luck with the site!
I learned HTML at school and I want to teach myself PHP too. I have a problem... I don't understand the hidden extensions thing... OK, I unticked that box, but now everything gets saved with .txt (prima.php.txt)... and it doesn't work...???????????
Done, I managed; it was a mistake of mine :*
Nice, man... after a day and a half of searching, installing and uninstalling, here I finally found something that makes everything work. I've made a note of punctsivirgula so I can visit again, hoping to see further development. Good luck!!
To work with PHP you need a text editor. I think an editor that highlights the functions and important elements of a program would be useful for beginners. I saw such an editor in a video; does anyone know what it's called? http://video.hotpin.co.cc/watch.php?vid=a5efd640
Hi Adi, that video uses Adobe Dreamweaver. As a (free) alternative you can use Notepad++, PSPad, or Eclipse with the PHP plugin. A good editor, specialized in PHP but requiring a license, is Zend Editor. Personally, I use Notepad++ :)
Notepad++ rules, I managed.
If you type http://localhost/ and get an error, try this: http://127.0.0.1/ because the http://localhost/ variant does not work in Windows 7.
It tells me "internet explorer cannot display this page", and the diagnostic says "this remote device or resource won't accept the connection". What is the actual problem? What should I do? Thanks!
How do I find your personal email so I can ask you something about PHP? I have a problem and I can't solve it.
Hi! See the page http://php.punctsivirgula.ro/?despre At the bottom it says how you can contact me directly.
When I access http://localhost/salut.php I get: This site is running TeamViewer. ...why?
Make sure the TeamViewer program is not running on your computer, and restart the web server (in EasyPHP, choose restart for Apache). TeamViewer is a "remote assistance" application that uses port 80 to communicate over the internet. Keep in mind that only one application can listen on (use) a port at any given time, so TeamViewer must not run at the same time as the Apache web server.
When I type "http://localhost" I get: This webpage is not available The webpage at http://localhost/ might be temporarily down or it may have moved permanently to a new web address. Error 102 (net::ERR_CONNECTION_REFUSED): Unknown error. Why doesn't it work?
Make sure EasyPHP is running and the Apache server is started. Also check that there is no firewall/anti-virus blocking the connection. Connection Refused means that "something" is not letting the browser connect to the web server.
Hi! I installed EasyPHP, the traffic light shows green, and when I open http://localhost/ in Internet Explorer it says "It works!", but when I try to access test.php (which is saved in the right folder) I see the whole code, not just the message. How do I make it display properly?
Hi, the file may have been saved with a double extension, of the test.php.txt kind. Several comments above explain what to do. I've included two here: 1. =================================== From Control Panel open "Folder Options" (or directly from a My Computer window, you can press ALT+T). Choose: Folder Options -> View -> untick the "Hide extensions for known file types" option. Then, where you saved the file, check that it doesn't have the .php.txt extension. 2. =================================== Write the PHP code in a text editor (such as Notepad or Notepad++, or Dreamweaver, or any other). Then save it with the ".php" extension (choose "File" -> "Save As", type the file name, test.php, choose "All files" as the Save as type *important*). Tell me if it still doesn't work. Cheers, Alexandru
"Hide extensions for known file types" was unticked, and I checked the extension; it is definitely .php. I used both Notepad and Notepad++. Same story. I'm currently running a Windows 8 preview on the PC, and the only idea I have now is that this might be the cause: the program may not run properly on this type of Windows. One more thing: when I right-click --> Open with --> Internet Explorer, a View Downloads box appears with "Open" and "Save" and nothing opens. And Opera keeps showing the whole code.
I haven't tested on Windows 8; it's possible something doesn't work as it should. I tend to believe the EasyPHP installer didn't manage to make all the configurations (it looks like the Apache web server works, but the PHP module is not configured). You could try reinstalling, running the installer with Admin rights and without other restrictions (like UAC - User Account Control). Another solution would be to try configuring Apache manually to interpret PHP scripts rather than just display them as text. It's not difficult, but it's easy to get tangled up :) - Details in English on the php.net site: http://www.php.net/manual/ro/install.windows.apache2.php, section Installing as an Apache handler
I want to be seen by someone on the internet! To put a document in www and have that person type my IP with the port forwarded in the router, then the name of the document or site placed in www. How do I do that?
Hi, you need to configure Apache to listen on all network interfaces. By default it listens only on the loopback interface (that is, on localhost/127.0.0.1). To be able to access the server from somewhere other than localhost, Apache must also be configured for the LAN, and implicitly listen on your external IP. There are two possibilities: configure Apache to listen on any interface (in httpd.conf, in the Listen directive, put only port 80), or configure the external IP explicitly (specify the IP in the Listen directive, in the form Listen 82.xx.xx.xx:80). The simplest would be to let it listen on any IP, so in httpd.conf put just Listen 80. More details here: http://httpd.apache.org/docs/2.0/mod/mpm_common.html#listen Important! Once your server is available "out in the wild", take great care with security! Good luck!
Thanks for the site! You're an "apple tree by the roadside with no fence" ("I am an apple tree by the roadside with no fence; red apples glow in my branches; traveler, if you wish, take them without shame, for you'll answer to no one.")
Thank you for the kind words, Ioan. I'm glad the site is useful to you.
I can't get rid of the .txt extension. It doesn't show in the Document Root, but it shows in EasyPHP, and as a result it won't run. What can I do?
To remove the double .php.txt extension: from Control Panel open "Folder Options". Choose: Folder Options -> View -> untick the "Hide extensions for known file types" option. Then go to where you saved the scripts and rename the files so they don't have the .php.txt extension.
Following the instructions, I managed to install and test the little program, but the next day, when I restarted the computer, everything was gone (except the folder in Program Files and the program itself) and I had to install it again. And so it goes every time. Do I really have to reinstall it each time, or is there another way?
When I click the link http://localhost/ or http://127.0.0.1/ it says "Oops! This link appears to be broken." (in Chrome). But when I right-click the icon next to the clock and choose "local web" it works. Why is that?
In Notepad I created the file salut.php in which I wrote the code <?php print "Salut"; ?> !. I accessed the salut.php file using Google Chrome as the browser, typing in the address bar http://127.0.0.1/salut.php, then file:///C:/Program%20Files/EasyPHP-DevServer-13.1VC11/data/localweb/semn.php. I have Windows 7 installed on my personal computer. I succeeded, and thank you for the detailed explanation.
I installed EasyPHP 12, but I also have an account on a free host (www.000webhost.com) where I uploaded my work, and it has phpMyAdmin too. Is it OK if I keep working with that free host? That is, uploading my PHP files there and opening them there?
That works perfectly well too, except that after every save of the script you have to upload it to the hosting server. It's easier, though, if you have an editor that opens files directly over FTP and does the download/upload automatically. In any case, it's your choice.
I use Notepad++ and it uploads/downloads automatically to that hosting server. I installed EasyPHP in order to check that my scripts work directly on my computer. Otherwise, I could only check them after uploading them to the hosting server. Now I can upload them only after I'm sure they work properly. In any case, I'll keep using the hosting server too. Thanks for the answer!
Hi, I wrote the following code: <? print "Salut,ai reusit! Asta e primul tau script PHP" ?> and when I go to http://localhost/ or http://127.0.0.1/ the test.php file appears, but with a question mark on the left, and the page opens empty (blank). I unticked the option in Folder Options and removed the txt extension after php. Can you help me with an answer? Thanks in advance.
Vezi ca folosesti tagurile scurte de php: <? si ?>. Ar trebui sa folosesti marcatorii standard: <?php si ?>. Mai mult ca sigur, serverul tau local nu e configurat sa invoce interpretorul PHP pentru tagurile scurte. Citeste si prima sectiune de pe pagina: http://php.punctsivirgula.ro/basics/Spune-mi daca reusesti. Treaba este rezolvata , pentru cei care au problema mea ,mie mi se conecta la http://127.0.0.1:8887/!!!Cel mai usor este sa dati click dreapta pe iconita de la EasyPHP(12.1) si dati pe Local Web :)(f7)!!!
Salut, Alexandru ! Trimite-mi la adresa de email <EMAIL> un PM , vreau sa te intreb ceva ! Multa stima !
http://127.0.0.1/ si am reusit tot pana aici dar de aici e mai complicat daca ma poti indruma in continuare ar fii o idee iti las id meu de mess ignat.mirel86
''C:Program FilesEasyPHP-12.0www) + Document Root''Mie nu-mi apare asa,cand dau explore si nu stiu cum sa incep.Mie imi deschide aici : Local Web si imi da my partable files,projects si script.Unde salvez fisierul de test? By the way,va multumesc din suflet pentru tot ce este aici,sunteti de mare ajutor.
Cand dau click pe local web imi apare : Connection refused: 127.0.0.1:8888. Ma puteti ajuta?
salut poate ma ajuti si pe mine sa editez fisierul httpd.config din apache (easyphp)imi cam prind urechile in el
Am reusit sa fac aceasta prima lectie.Multumesc pntru tutorial! A fost un pic greu la inceput pt. ca a trbuit sa schimb ServerName si Listen la valoarea 81. Si sa nu uit: browserul trebuie sa fie deschis pentru ca semaforul Apache sa fie verde! :) :)
Merci si salutãri !
## Testing PHP source code online
In case you cannot (or do not want to) install PHP on your personal computer, you can test the examples and code snippets directly from the site.
Next to each piece of source code that can be tested there is a button labeled "Testeaza", in the top-right corner. Pressing that button opens a new window that shows the code snippet and the result of its execution. From that window the code can be modified and run in its new form. Everything happens online, through a tool provided by the site www.ideone.com, without needing PHP installed locally.
This facility is useful for immediate checks and for quickly testing the examples on the site. It is nevertheless recommended to install a web server and the PHP interpreter on your personal computer, since you will need them for more complex applications.
Firefox cannot establish a connection to the server 127.0.0.1:8887. I tried all the options above... On the first install it ran fine, after which the server would no longer start. I uninstalled and reinstalled several times. (The browser reports: the site may be temporarily unavailable or too busy; try again in a few moments; if no page loads, check the computer's network connection; if the computer is behind a firewall or proxy, make sure Firefox is allowed to access the internet.)
I installed the server, but when I open the PHP files the scripts are not processed; the sentences appear exactly as written in Notepad.
All the steps must be followed carefully. There is no way it can fail if you follow every step; don't rush.
Easy to understand up to this point.
After I install it, nothing opens. Only the program's icon appears next to the clock. When I right-click it and choose Explore, there is no Document root folder. Can anyone help me?
If I leave the extension .php.txt the files are displayed, but if I leave just .php, when I run it I get: "Parse error: syntax error, unexpected 'e' (T_STRING) in C:\Program Files (x86)\EasyPHP-DevServer-14.1VC9\data\localweb\projects\exercitii.php on line 3". How do I fix that?
I have Ubuntu. Do I still need to install some Apache to be able to run localhost? And if not, where do I put the files and how do I make localhost active?
Date: 2009-05-23
## PHP tags, statements, and the semicolon
## Displaying text in PHP
In many scripts written by beginners (and not only by them), the content produced by the execution is a text (usually in HTML format). To obtain this text, the code must contain explicit statements that "tell" the interpreter what should be displayed.
The display statements are print, echo and printf. The first two are equivalent and do the same thing (there are some differences of form, see the example below, but either of them can be used); printf is used more rarely because of its somewhat cumbersome syntax.
```
<?php
# an echo statement can take several parameters
echo "<br>", "Afisez", " un text din ", 4, " bucati";
# a single print statement can take only one
print "<br>";
print "Afisez";
print " un text din ";
print 4;
print " bucati";
# printf is used to format the content, just like in C/C++
printf( " Am %4.2f lei", 102.123456 ); // prints Am 102.12 lei
?>
```
<?php echo "Este 3 mai mic decat 2? <br />"; if( 3 < 2 ) { print "3 < 2"; print "3 este mai mic decat 2"; } ?> is an example that should show how, regardless of the statement, when the {} are missing, everything after the first statement is executed no matter what the condition is. I think the {} have to be deleted for it to display what you said it would display. P.S. The site is really useful; I really am a beginner and everything really is explained at a beginner's level. Congratulations.
I studied C++ but I don't know what the <br/> instruction is used for... can you help me? Is "/n" used to display on a different line, or what is it for? I don't remember.
For Cristina: <br/> does not exist in C++, and "\n" is used for a new line.
I know there is a big time gap between Cristina's question and my answer, but maybe it will help others: the <br> tag means a new line in HTML.
After making the necessary changes in Notepad and saving, reloading the page does not pick them up until I restart Apache. Do you have a solution?
Ionut, you probably have the Alternative PHP Cache (APC) enabled and configured in such a way that PHP scripts are not checked for modifications. As a result, the cache at the Apache level is only refreshed on restart. One solution would be to enable script checking (the apc.stat setting) or possibly to disable APC completely. Details here: http://www.php.net/manual/en/apc.configuration.php#ini.apc.stat
Hello and thank you for the lessons; they are very useful. One problem is testing on Ideone: the <br /> tags are treated as text, not as HTML markup. For example, for the code under "Displaying text in PHP" the result appears as TextText<br />Afisez un text din 4 bucati<br />Afisez un text din 4 bucati<br />Am 102.12 lei
Hi, I'm glad the site is useful to you. Regarding IdeOne, the way it displays seems correct to me. It shows you exactly what the PHP script returns (text with tags), which can be useful in certain situations, especially while getting used to programming in PHP. After all, PHP outputs text. The fact that you view it with a browser (and that it appears different from how you would see it in Notepad) has no strict connection to the server-side processing. Another advantage, I'd say, is that it "forces" you to try the examples on your own computer if you want to see the results differently. A good thing, because it makes you write code, not just read code :) Good luck!
In "printf" we have "%4.2f"; the ".2" says how many decimals are taken. What does the 4 refer to?
Indeed, in "%4.2f" the ".2" refers to the number of decimals in the displayed result, while "4" means the minimum length of the displayed number (the number of characters). In other words, "%4.2f" expresses a number of at least 4 characters, of which 2 are decimals.
A question: with printf's "%4.2f", I don't understand whether 4 is the total number of characters and 2 the number of decimals. If I use "%2.1f" it displays the same thing for me...?
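As the replies above explain, the number before the dot in the format is a minimum field width, not a truncation limit; a short sketch (the values are illustrative):

```php
<?php
printf("[%4.2f]\n", 3.5);     // minimum width 4, 2 decimals: [3.50]
printf("[%8.2f]\n", 3.5);     // minimum width 8: the result is left-padded with spaces
printf("[%2.1f]\n", 102.123); // a width smaller than the number has no effect: [102.1]
```

This is why "%2.1f" on a large number looks the same as a wider format: the width only ever adds padding, it never cuts digits off.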
## What does PHP display?
The result of a PHP script is most often a simple text in HTML format. In other words, in the majority of cases PHP returns an HTML page that will be displayed in the browser. At first, this can cause confusion, since the HTML source differs from what is actually displayed in the browser.
For example, consider the following code:
```
<?php
print "Salut";
print "Acesta este un script simplu";
?>
```
You would probably expect the result to be a text displayed on 2 lines. Saving this snippet in a PHP file and accessing it through a web server, you will notice that the result is the following:
```
SalutAcesta este un script simplu
```
Although confusing, the result is correct. The PHP code displays only the characters it was told to display. The end of line is a separate, non-printable character that controls how the text appears on screen. In our case this character (also called new-line) was not passed, so PHP did not display it.
Let's rewrite the example above so that the "end of line" character is displayed as well:
```
<?php
print "Salut\n";
print "Acesta este un script simplu";
?>
```
Checking again in the browser, you will find that nothing has changed. At first sight. In reality the result is displayed on 2 lines in the text PHP sends to the browser. Since the page is interpreted as HTML, the browser ignores the new-line characters. To verify this, view the page source (View menu in the browser -> view source).
To reach the desired result (displaying the text on 2 lines in the page), the following snippet must be used:
```
<?php
print "Salut";
print "<br>\n"; # prints the BR tag, which the browser will interpret as 'new line'
# the line break will appear in the final HTML page once it is interpreted by the
# browser, and it has nothing to do with \n
print "Acesta este un script simplu";
?>
```
The page source looks as follows:
```
Salut<br>
Acesta este un script simplu
```
In the browser, the BR tag is interpreted as an end of line (line break). The "new-line" character we printed (in print "\n") is ignored in HTML anyway. The result, as seen in the browser window:
```
Salut
Acesta este un script simplu
```
Note: it is very important to understand the difference between what is returned by the execution of a PHP script and what is actually displayed in the browser. In short, PHP prints HTML code, which browsers render differently.
It must also be understood that the print function does not display text on several lines unless this is requested explicitly. For example, consider the following code:
```
<?php
print 1;
print 2;
print 3;
?>
```
The result will be `123`. Only what was indicated was displayed: 3 characters, on a single line, without spaces.
Another important thing to remember is that once printed, a text can no longer be "deleted". There is no "undo" for a print. The entire content transmitted at the end of the execution can be captured, but the text displayed by one particular print statement cannot be altered.
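The "capture" mentioned above is done with PHP's output buffering functions (ob_start and related); a minimal sketch:

```php
<?php
ob_start();                  // start buffering: nothing is sent to the browser yet
print "Salut";
print " lume";
$continut = ob_get_clean();  // retrieve the buffered text and discard the buffer
// the whole captured output can now be inspected or transformed...
print strtoupper($continut); // ...but an individual print call still cannot be undone
```

Note that this transforms the output only as a whole, after the fact; it does not make any single print reversible.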
Understanding these aspects will make it easier for you to work with PHP and to check the results of your scripts.
I created the files and put them in the folder, but it doesn't work. When I click the link it says "Oops! This link appears to be broken." I right-click the icon next to the clock and choose "local web", but when it opens in the browser I see exactly what is written in the document, with no change :/
Make sure the file is saved with the .php extension and not .php.txt. It matters. Read the comments on the other sections and you will find situations similar to yours. Good luck!
Hi, when I write print "\n" the browser displays exactly "\n" and the text does not move to another line, but with print "<br>" it works...
You are probably writing the \n expression between single quotes (apostrophes), print '\n';, and not between double quotes. In that case the text is printed as it is, without the \n expression being turned into a new line. More details at http://php.punctsivirgula.ro/string/ On another note, the <br> element differs from \n in that it acts on how your browser displays the text (it "breaks" the line, from <br> = break). The new-line expression ("\n"), on the other hand, is what introduces a new line into the text produced by the execution. That text does not necessarily have to be displayed in a browser (it could, for example, be written to a file, or sent over a data connection). What matters is to tell <BR> and "\n" apart and to know when each should be used. Good luck!
When I write "\n" the text does not go down, that is, it does not start a new line, it just leaves a space between the paragraphs. For example, if I write print "ceva" print "\n" print "altceva" I get: ceva altceva, and not one under the other as it should be.
Hi, please tell me how to read a variable from the keyboard in PHP, because I looked but didn't really find anything.
## Strings
Most PHP scripts work with pieces of text called character strings, or simply strings. Strings are expressions (entities that have/return a value) and can be used in display statements, in assignments, in checks, etc.
In the PHP language, strings can be delimited by double quotes, by single quotes, or through a special notation using the <<< operator (called heredoc).
Note: in this example there is no difference between the ways the strings are defined; all 3 displayed strings are equivalent, whether they were delimited with double quotes, apostrophes or the heredoc operator. Even so, PHP treats these forms of delimitation differently. More explanations and examples are given on the Operatii cu siruri page.
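A minimal sketch of the three delimiting styles described above (the strings themselves are illustrative):

```php
<?php
print "Sir intre ghilimele duble";
print "\n";
print 'Sir intre ghilimele simple';
print "\n";
// heredoc: nothing may follow the opening delimiter, and the closing
// delimiter must start its line, followed by a semicolon
print <<<TXT
Sir definit cu operatorul heredoc
TXT;
```

All three print plain text here; the differences between the delimiters only show up once escapes and variables are involved.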
I get an error: Parse error: parse error in C:\Program Files\EasyPHP 3.0\www\ma_numesc.php on line 15
Yes, there were 2 problems with the code above: the semicolon (;) was missing from the third print, and there was a space after "print <<<TXT", which made the PHP interpreter raise an error. The problems have now been corrected. Note: to see the result of the scripts and notice the differences, it is a good idea to look at the HTML page source (View -> View Page Source).
That error appeared again: a space after "print <<<TXT"
For me the multi-line code does not work, and it does not move to a new line with the \n code either... only with <br /> or <br>
I think these delimiters help somewhat with organizing the programmer's work. At least that's how I see things for the moment.
I hadn't understood it at first either; in the end I used the variant below and it worked: <?php print <<< delimitator Exemplu de sir care foloseste delimitatorul heredoc. delimitator; ?>
I also get an error on line 15. I modified it in every possible way and it still fails. I even copy-pasted from the site and it's still the same :|
Nice site; good resources for learning PHP in Romanian are rare. Maybe the tutorials should go into more detail: say what heredoc is, what nowdoc is, which of them supports interpolation (and what interpolation means), etc. Anyway, good luck going forward.
The problem is that there is a space after the first TXT. Removing it makes the error disappear.
"# an echo statement can take several parameters: echo "Afisez", " un text din ", 4, " bucati"; # a single print statement can take only one: print "Afisez"; print " un text din "; print 4; print " bucati";" and yet "// concatenating (joining) strings: print 'Sir1' . " legat de " . 'Sir2'; // Sir1 legat de Sir2". What should I make of this?
Concatenating the strings produces, in the end, a single string, which is treated as such, standing on its own. That is the important thing to know, because print, as you said yourself, can take a single parameter, and by concatenating several pieces exactly one is obtained. When the print statement is executed, the interpreter will not "know" that the text to be displayed comes from 3 different pieces. At first sight it is a bit puzzling, but as you go through the remaining chapters (Variables, Functions) it will be easier to understand.
It is annoying that it writes everything joined together on one line... but I solved that problem: print <<<TXT Text pe mai multe linii. <br> The delimiters can have any name: TXT, START, etc., under the following conditions<br> - both delimiters must have the same name<br> - <<< is used before the first delimiter<br> - the opening delimiter must not be followed by a space or any other character<br> - the closing delimiter must be at the beginning of the line (no spaces before it)<br> - a semicolon ; goes after the closing delimiter<br> TXT;
Thank you. I was indenting with tabs in Notepad++ and it would not work at all. After Alexandru's information and yours, it worked. Thanks to you both. Good site, especially for beginners. Have a good day.
I tried everything you suggested here and I still get the error. I first used Notepad++, then Notepad, but in vain. I'm sure I'm doing something wrong, but I don't know what. Waiting for help!
Try something else: click the Testeaza button to the right of the code box (which will open the page http://ideone.com/HBiZn ) and copy the code from there. It is the same thing, but at least you can check whether it works for you.
I tried your variant too and it still doesn't work. Here is what it displays (the quotation marks are mine): "Parse error: syntax error, unexpected '<<' (T_SL) in D:\INSTALARI\PHP\EasyPHP-DevServer-14.1VC11\data\localweb\siruri_de_caractere.php on line 6"
Note that you have 2 "less than" characters (<<) instead of 3 (<<<). Also, there must be no space between them or after them.
I have everything needed, at least I think so, and to clear things up I would like to send you an archive with 3 files: one with the result displayed in the browser (Mozilla), another with the source code from the browser, and the last with what I wrote in Notepad++. The question is: to what email address should I send the archive?
## Comments in PHP
Comments are portions of code that are not executed. Programmers typically use them to give various explanations about the application's logic, its variables and other details. Comments do not affect the execution of a script and can be deleted from the code with no effect whatsoever.
Note: /* */ comments cannot be nested (inserted one inside another); such an example would not be valid code.
It is very useful to put comments in your code to explain how the problem was approached and solved, or to give details about the operations performed. They will help you later if you want to modify the code you wrote, and they offer other people (valuable) information about how you thought that code through.
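A minimal sketch of the comment styles described above (the printed text is illustrative):

```php
<?php
// single-line comment, C++ style
# single-line comment, shell style
/* multi-line comment:
   everything up to the closing marker is ignored */
print "Comentariile nu se executa";
// note: writing /* a /* b */ c */ would be invalid, because the first */ already
// ends the comment, leaving "c */" as stray code
```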
I like it.
Nice tutorial for someone who already knows HTML, CSS... otherwise it's a waste of time. I know some HTML myself, but so far it has helped me about -1. I suspect you don't know PHP either... a kind of copy/paste from somewhere else >>> A WASTE OF TIME (for a beginner). Thanks anyway.
Thank you for the comments, positive and negative alike. I should mention that this guide has a well-defined target audience, and it is possible that you are not part of it. Carefully reread the site's presentation page, available at http://php.punctsivirgula.ro/?despre, and check whether you recognize yourself in the "Cui i se adreseaza acest site?" section. If you don't fit, you can always look for other PHP resources (a few places are also listed on the page http://php.punctsivirgula.ro/start/). If you need more clarification, of any kind, I invite you to send me an email at the address given on the site. All the best! Alexandru, punctsivirgula.ro
I understood a lot from this tutorial!
Thank you from the bottom of my heart for everything here :D You are a great help!
Congratulations on the site, Alexandru. It is a great help! Keep up the good work!
So far I have understood everything; the tests are great. I never moved on to the next step until I understood the previous one. All you have to do when testing is to retype the code yourself, not copy/paste it.
The tags <% /* ASP-style tags; whether the web server accepts them depends on its configuration */ %> and <script language="php"> /* a form that used to be available at any time, rarely used */ </script> are no longer valid in newer versions of PHP.
## Variables in PHP
### What are variables?
Variables are elements that have a certain value at a given moment. The value of a variable can change throughout the execution of a script.
Working with variables in PHP is very easy. No type has to be specified for them (such as "text variable" or "numeric variable"); it is determined automatically.
Variables are distinguished from the other elements of the language by the $ character. A variable thus has the form $nume. The variable's name can be any valid identifier (a text containing only letters, digits and underscores, without spaces or other characters; an identifier cannot start with a digit).
### Declaring variables
In PHP, declaring variables happens at the same time as initializing them (assigning them an initial value). Assigning a value is done with the assignment statement, which has the form <variable name> = <value>;
```
// numeric variables
$variabila = 1;
$numar = 0;
// text variables (strings)
$text = "Salut";
$text2 = 'Ce faci?';
// boolean variables
$stiuPHP = true;
$uitRepede = false;
// arrays
$vectorGol = array();
$vectorS = array( 10,11,12,13 ); // simple array
// associative array
$vector = array(
'luni' => 'Monday',
'marti' => 'Tuesday',
'miercuri' => 'Wednesday'
);
// defining a new element of an associative array
$vector[ 'joi' ] = 'Thursday';
// simple array defined as associative, equivalent to $vectorS
$vectorS2 = array(
0 => 10,
1 => 11,
2 => 12,
3 => 13
);
```
Note: an associative array differs from a normal one in that it has alphanumeric keys (words).
Careful: PHP distinguishes between uppercase and lowercase letters. The 3 variables below are therefore TOTALLY different:
```
$variabila = "negru";
$vaRiabilA = "alb";
$Variabila = 1;
```
### Assigning values
It is done in the same way as declaration, through the assignment statement.
```
# assigning a specific value
$variabila = 2;
# copying the value from another variable
$text = $variabila;
# copying the value also works for arrays
# after the copy, the 2 arrays will be identical
$vectorS = $vectorS2; # all the elements of $vectorS2 are copied into $vectorS
# in the case of arrays, values can be assigned to each element separately
$vectorS[ 0 ] = 100;
$vectorS[ 3 ] = 10;
$vector[ 'luni' ] = 'Lundi';
$vector[ 'joi' ] = 'Jeudi';
```
### Deleting variables
Usually there is no need for variables to be deleted after they have been used. Still, this can be done in the following ways:
```
unset( $variabila );
$variabila = null;
```
### Displaying variables
Displaying is done using any of the output statements:
```
$variabila = "Text";
$randNou = "<br>";
print $variabila;
print $randNou;
// equivalent to the two print statements above
echo $variabila, $randNou;
```
In certain situations, especially when you test or debug PHP code, you can use two statements that display a variable's content in detail: print_r, useful for arrays because it displays the elements of the given variable, and var_dump, which can be used for any type of variable. The latter also indicates the type of the data it contains and its length.
```
$var1 = 123;
print "\n print_r: "; print_r($var1); // prints 123
print "\n var_dump: "; var_dump($var1); // prints int(123)
$var2 = "test";
print "\n print_r: "; print_r($var2); // prints test
print "\n var_dump: "; var_dump($var2); // prints string(4) "test"
$var3 = array('aaa', 'bbb', 'ccc');
print "\n print_r: "; print_r($var3); // prints Array ( ... )
print "\n var_dump: "; var_dump($var3); // prints array(3) { ... }
```
### Serializing and unserializing variables
Serializing a variable means transposing its content into a form that can easily be stored as plain text. Unserializing is the inverse operation, through which a variable is recreated from text produced by a serialization. Thus, a variable defined in PHP can be serialized and then written to a text file or saved into a database, with the possibility of restoring it at any time.
Serialization is done using the serialize function, which takes a variable as a parameter and returns a text. The inverse operation is performed with the unserialize function.
```
$variabila = array("componenta1", 2, 3);
$ser = serialize($variabila);
echo $ser, "\n"; // prints a:3:{i:0;s:11:"componenta1";i:1;i:2;i:2;i:3;}
$des = unserialize($ser);
print_r($des); // prints the original array
```
Serialization is useful when a variable has to be stored, or transmitted as text to another process. It is necessary, however, that the unserialization also be done from PHP, to ensure the initial content is recreated correctly.
An alternative to serializing and unserializing is transforming the variable's content into a standard text format, such as JSON. The advantage is that the content can be reconstructed by any process, not just by a PHP script.
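A minimal sketch of the JSON alternative, using PHP's json_encode and json_decode on the same array as the serialize example above:

```php
<?php
$variabila = array("componenta1", 2, 3);
$json = json_encode($variabila);
echo $json, "\n";                   // prints ["componenta1",2,3]
$inapoi = json_decode($json, true); // true => decode as array, not as object
print_r($inapoi);                   // prints the original array
```

Unlike the serialize format, this text can be read back by virtually any language or process.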
What is the difference between print and print_r?
The difference between print and print_r: http://codingforums.com/showthread.php?t=174016
I would like to know whether a variable can be assigned a file as a value, for example a picture; if so, how?
A variable can refer to a file. The statement "to have a file as a value" can have 2 meanings: the variable can be a reference to the file, as I said above, or the variable can have the file's content as its value (for example if it is a text file), which is also possible. Quite soon a lesson about files will be published on the site, describing both of the aspects listed.
You forgot floating point.
When I click Testeaza on the examples with arrays, I understand NOTHING of what you did... and the script doesn't work for me either. It gives me some errors...
The examples in this lesson do not do anything concrete, immediately visible; they only define some variables. For this reason it looks as if nothing happens. In the following lessons the examples will be more and more complete, with small actual processing steps and visible results. Normally, you can add echo or print statements to the code on this page to validate that the variables were populated with those values.
How can the deletion of an element be done by value (not by key), and the other way around? Have a good day! Thanks!
You could search for the element and, once it is found, delete it by its key. That is just one option. Look over PHP's list of array functions and see what you can use from there: http://php.net/manual/ro/ref.array.php
In your var_dump example, var_dump($var1); prints int(123) and var_dump($var2); prints string(4) "test". Why? I tried to make a few examples myself... and it never prints int. I even assigned the variable a string of digits (I figured int is for digits and string is for letters) but it still shows string. And what do int and string(x) mean? I assume x is the number of characters in the variable, right?
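The search-then-delete approach suggested in the reply can be sketched like this (the array contents are illustrative):

```php
<?php
$zile = array('luni', 'marti', 'miercuri');
$cheie = array_search('marti', $zile); // returns the key of the value, or false
if ($cheie !== false) {
    unset($zile[$cheie]);              // delete by the key that was found
}
print_r($zile); // remaining keys are kept: [0 => 'luni', 2 => 'miercuri']
```

The strict comparison with false matters, because a matching key of 0 would otherwise be mistaken for "not found".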
I did exactly as above, but it still doesn't lay them out as in the example; it puts them on a single line. How do I separate them?
I don't understand what the serialization printed. Can someone explain it to me? That is, where did the a, i, s come from?
How do I display a variable containing a superscript (TM)? (instead of TM I get a ? inside a little diamond)
## PHP tips
### The difference between single and double quotes when declaring variables
Double quotes allow the interpreter to "read" the variables that appear inside the text.
```
$today = date( 'd-m-Y' );
$text1 = "Azi e $today";
print $text1; // Azi e 17-07-2008
```
Thus, the text enclosed in double quotes is processed before being displayed: the variables are looked up and their value is displayed in place of their name. In the case of the apostrophe, the text is displayed unchanged, and the variables are not interpreted.
```
$today = date( 'd-m-Y' );
$text2 = 'Azi e $today';
print $text2; // Azi e $today
```
### Checking whether a variable is defined
2 functions can be used: isset and empty.
```
// we define one variable; the other definition is not executed
$var = 0;
// $var2 = 1;
var_dump( isset($var) ); // bool(true)
var_dump( isset($var2) ); // bool(false)
var_dump( empty($var) ); // bool(true), because 0 is considered empty
var_dump( empty($var2) ); // bool(true)
```
Careful: isset checks whether the variable has been defined, while empty checks whether it has been defined and holds a value considered empty. A variable is empty if it has one of the following values:
* "" (text of length 0)
* 0 (the number 0)
* "0" (the text "0")
* null
* false
* array() (empty array)
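Each value in the list above can be checked directly; empty returns true for all of them:

```php
<?php
$vide = array("", 0, "0", null, false, array());
foreach ($vide as $valoare) {
    var_dump(empty($valoare)); // bool(true) for every entry
}
var_dump(empty("text"));       // bool(false): a non-empty value
```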
### The {} operator
There can be situations when certain variables must be processed earlier than others, or before the whole statement is processed. For example:
```
$salut = array(
'dimineata' => 'Buna dimineata',
'pranz' => 'Buna ziua',
'seara' => 'Noapte buna'
);
print "$salut[ 'pranz' ], vizitatorule"; // error
```
The way we wrote the variable ($salut[ 'pranz' ]) is correct. It is just that PHP does not "know" to see our variable as an element of the array, take its value, and then display the whole text. To solve this problem we use the curly braces:
```
print "{$salut[ 'pranz' ]}, vizitatorule"; // "Buna ziua, vizitatorule"
```
In this case we tell the interpreter to first evaluate what is between the braces (the $salut[ 'pranz' ] element of the array) and then display the whole text. Whenever a variable needs to be interpreted with priority, it must be placed between curly braces.
### Double indirection ($$nume)
The PHP language allows referring to the name of a variable through another variable.
```
$masina = "Chevrolet";
$avion = "Boeing";
$tren = "TGV";
$obiect = 'masina';
print $$obiect; // Chevrolet
$obiect = 'tren';
print $$obiect; // TGV
```
To understand what happens, it should be mentioned that the last line can also be written like this: `print ${$obiect};` By virtue of the explanation of the {} operator given above, we can easily see that PHP first extracts the value of the $obiect variable (which is: "masina") and then displays the $masina variable (which has the value "Chevrolet").
You have a contradiction in this text :-? "Note: for the values above, empty returns false." I think the value of "empty" in these cases is TRUE.
You are absolutely right. I have corrected it.
This is what it displays for me after I copy-pasted. Shouldn't it display the date? $today = date( 'd-m-Y' ); $text1 = "Azi e $today"; print $text1;
I don't know why it doesn't work for you, but for me it works fine!
The error appears because in the given example the correct form is "Azi e $today", that is, using double quotes and not single ones, because otherwise the $today variable cannot be read.
It may display that because the code did not go through the PHP interpreter.
Why is it necessary to check whether a variable is defined? If we have $var=0, is there any possibility that PHP does not execute that code?
PHP will always execute $var=0; it's just that in more complex scripts there can be several ways in which variables may get defined. In that case a check may be necessary before the variable is actually used, in order to avoid errors. Moreover, the isset function is commonly used for arrays as well (to check whether a key exists).
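The array usage of isset mentioned in this reply can be sketched as follows (the keys are illustrative):

```php
<?php
$traduceri = array('luni' => 'Monday');
if (isset($traduceri['luni'])) {
    print $traduceri['luni'];       // the key exists, so it is safe to use
}
if (!isset($traduceri['joi'])) {
    $traduceri['joi'] = 'Thursday'; // define the key only if it is missing
}
```

Reading `$traduceri['joi']` directly without the check would raise a notice about an undefined key.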
"isset checks whether the variable has been defined" - that is not quite true: a variable can be defined without having any value assigned to it. In fact, isset checks whether it contains some value, whatever that value may be.
## Constants
Constants are entities that do not change their value during execution. They are useful for easily recognizing the values used in the code. For example, the following code is easier to understand when a constant is used than it would be if the number 5 were used:
```
define( 'ZILE_LUCRATOARE', 5);
$zile = array( 'luni', 'marti', 'miercuri', 'joi', 'vineri', 'sambata', 'duminica' );
print 'Zilele lucratoare sunt: ';
for( $i = 0; $i < ZILE_LUCRATOARE; $i++ ) {
print $zile[ $i ] . ' ';
}
```
So constants are defined using define:
```
define( 'NUME', "Flo" );
define( 'VARSTA', 10 );
define( "ADEVARAT", true );
```
It is recommended (though not mandatory) that constant names be written in uppercase, so they are easier to spot. When they are used, only the name is specified, without quotes or $:
```
print "Ma numesc " . NUME;
print " si am " . VARSTA . " ani";
```
To check whether a constant is defined, the function defined is used:
```
if( defined( 'NUME' ) ) {
print "Ma numesc " . NUME;
}
if ( !defined( 'VARSTA2' ) ) {
define( 'VARSTA2' , 20 );
}
print " Am " . VARSTA . " ani";
```
I noticed that in the second "if" of the "defined" example you used "!" before "defined". I understood that it has a role, because when run it returns the message "Notice: Constant VARSTA already defined in..." if that exclamation mark is removed, but what exactly is its role?
If(!defined...) means "If it is not defined...". The exclamation mark is the negation.
you are right :)
. NUME, . VARSTA . - the dot character appears. What for? Do constants need to be delimited?
The dot is a concatenation operator that joins two strings, as is the case here.
What is the difference between isset and defined? When is each one used?
isset is used with variables, while defined is for constants. For example, you would use: if( isset( $variabila ) ) { ... } if( defined( 'CONSTANTA' ) ) { ... }
Somewhat earlier it was written that the "echo" statement can take SEVERAL parameters, while "print" can take only ONE. In your example above, how was it possible to use several parameters, namely: " Am " . VARSTA . " ani"? Is it because of the "." dot, which performs a concatenation so the whole thing is seen as a single parameter... or?
Yes, that statement is correct. When the expression " Am " . VARSTA . " ani" is used with print, thanks to the dot (which here is the concatenation operator) the whole construction is considered a single parameter. In practice, the following happens: the 3 elements are concatenated (joined), producing a single value (Am 2 ani), and this single value is passed as the parameter to the "print" statement. If the comma had been used - which is not an operator but merely a separator character - the interpreter would have considered it 3 parameters ("Am ", VARSTA, " ani") and would have raised an error.

<?php $alex=5 if( isset($buda) ) { print "Alex are 5 ani"; } ?> Why does this give an error? I only want to display the message "Alex are 5 ani" if the variable is defined!
I solved the problem above - I had forgotten the ; after the 5. Now I have another one. I noticed that it displays the message only if I put (!) in front of isset. Why?
The exclamation mark is used to negate the condition. For example: if( isset($var) ) translates to "if the variable $var IS defined", whereas if( !isset($var) ) translates to "if the variable $var is NOT defined". In your example you only defined the variable $alex. The variable $buda is not defined, so your condition - isset($buda) - is false and nothing is displayed. If after $alex = 5; you also add $buda = 1; then you will have defined that variable and the message will be shown.
why does i++ appear?
$i++ inside the "for" means increasing $i by one unit at every step. More details about $i++ can be found at http://php.punctsivirgula.ro/operatori/ More details about for can be found at http://php.punctsivirgula.ro/repetitive/
Are variables and constants the same thing, so can they be used for the same purpose? I ask because I understood constants better than variables.
Up to a point, variables and constants do the same thing: they hold a piece of information. The difference appears when that information changes. A variable can be updated (it can be made to refer to a new value), whereas a constant will stay the same. For example, suppose we have a script that processes some files, one by one. If we want to count how many files have been processed at a given moment, we use a variable that grows by one unit for each file. <?php $procesate = 0; /* repeat while files exist { */ proceseaza( $fileName ); $procesate = $procesate + 1; /* } end */ echo "Fisiere procesate: $procesate"; ?> In this case we could not have used a constant, since a value set with 'define' cannot be modified afterwards (it is in fact a PHP error to try to change a constant). In the context of the example above, we could use a constant to hold the total number of existing files (the ones to be processed), since that number stays constant throughout the processing. Example: <?php define( 'FISIERE_DE_PROCESAT', 100 ); ?>

and after you learn all these tricks, how do you tie them together to create a website? out of imagination, or is there an order? honestly, I don't know anymore - I've been paralyzed next to tutorials for a week and nothing has stuck... help :)
What is wrong here? <?php define('LOC',"la piata"); define('CE',"pere"); define('CE2',"prune") print"El este ".LOC." si cumpara ".CE //print"n" if(!defined('CE2')){define('CE2',"la prune"); } print"El este ".LOC."si cumpara ".CE2 print"n" if(!defined('LOC2')){define('LOC2',"la mercerie"); } if(!defined('CUMP')){define('CUMP',"ata"); } print"n" print"El este ".LOC2."si cumpara ".CUMP ?>

You did not put a semicolon at the end of the print statements.
Date: 2013-10-24
## Expressions in PHP 0
Expressions are PHP language constructs that have a value. An expression can be a string, a number, a variable, or even a more complex construction (such as a function call). The basic idea is that anything that can be evaluated to a value is considered an expression.
The PHP language is built around expressions, and this concept, although ignored by many programmers, is very important for a correct understanding of how the PHP interpreter evaluates and executes code.
The simplest expressions in PHP are, as mentioned, numbers (also called number literals) and strings (string literals). They have a value that can be determined at any moment, and that value can be used in operations (most often, in assignments). After an assignment, a variable takes on the value of the assigned expression, becoming an expression itself. So, given an assignment statement of the form
`$a = 1;` 1 is an expression (a number literal) and $a is in turn an expression (a variable).
Moreover, in PHP the assignment statement is itself an expression, in the sense that it returns a value; thus $a = 1 will return the value of the assigned expression (1, in this example). In this context, we can write
```
// assigns 1 to $a, then prints the result of the expression $a = 1, which is 1
print ($a = 1);
```
Or even
```
// assigns 1 to $a, then assigns the result of the expression, which is 1, to $b
$b = ($a = 1);
```
Note that expressions are evaluated from right to left. In the example above, the expression $a = 1 is executed and evaluated first, and then the assignment $b = ... is performed, with the right-hand operand replaced by the value actually obtained.
Assignments are not the only expressions. Function calls (predefined or user-defined) are considered expressions too, since they return a value. Likewise, combining several expressions using operators produces a new expression.
Another important fact about expressions is that their value can change depending on context. More precisely, what changes is the data type of the expression (from string to numeric, from numeric to boolean, etc). Thus, if an operation requires a certain data type and the expression used has a different one, the value is converted automatically. This can be a good thing, or it can introduce bugs into the code; that is why it is important to pay close attention to the values of expressions, especially when combining different data types.
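A minimal sketch of this automatic conversion (the values are picked arbitrarily):

```php
<?php
// a string used in a numeric context is converted to a number
var_dump( "5" + 1 );        // int(6)

// a number used in a string context is converted to a string
var_dump( "value: " . 5 );  // string(8) "value: 5"

// any expression used as a condition is converted to a boolean
if ( "non-empty" ) {
    print "a non-empty string converts to true";
}
```

Note that the conversion rules depend on the operator: + forces a numeric context while . forces a string context, which is why the same mixed operands give different results above.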
Expressions are the foundation of the PHP language, and almost everything is an expression. They appear everywhere in source code and can be used in assignments, as function parameters, or to specify the conditions in control structures. The following sections present the PHP operators that can be applied to expressions and that generate (by composition) new expressions.
## Operators in PHP 10
Operators are language elements that are applied to expressions and give rise to new expressions. There are several types of operators, with different syntax and roles. Keep in mind that all operators will trigger the conversion of their operand expressions when these have different data types. The conversion rule differs from one data type to another and from one operator to another.
The operators available in PHP are detailed below.
### Comparison operators
Comparison operators are most often used inside an if statement to express the condition that must be checked.
```
$a == $b // equal
$a === $b // identical (same value and same type)
$a != $b // not equal
$a <> $b // not equal
$a !== $b // not identical
$a < $b // strictly less than
$a <= $b // less than or equal
$a > $b // strictly greater than
$a >= $b // greater than or equal
```
Examples:
```
<?php
$a = 5; // example value
if( $a > 4 ) print "\$a is greater than 4";
?>
```
Attention! If a string (text) is compared with a number, the string is converted to a number. Example:
```
"text" == 0 // echivalent 0 == 0, evalueaza la true
```
If 2 strings containing numbers are compared, they are first converted to the numbers they represent and only then is the comparison performed. Thus, 2 texts that are different could be considered equal. Example:
```
"1" == "01" // echivalent 1 == 1, evalueaza la true
"1" == "1e0" // echivalent 1 == 1, evalueaza la true
```
Nota: "01" si "1e0" sunt notatii matematice diferite pentru numarul 1. Diferenta intre operatorii == si =Operatorul == este diferit de operatorul de atribuire = si nu trebuie confundati. Dublu-egal (==) este folosit pentru comparare iar egal simplu (=) - pentru atribuirea de valori unei variabile.
```
# $a = 5 is an assignment.
var_dump( $a = 5 ); // 5
# the statement displays the result of the assignment (the result of an
# assignment is always equal to the assigned value)
# $a == 1 is a check. The statement displays the result of the check.
var_dump( $a == 1 ); // false
```
While assignments can be used both on their own ($a = 1;) and as expressions inside statements (print $a = 1; return $a = 1;), the == and === operators are only used inside statements.
```
# correct
$a = 10; # standalone assignment
var_dump( $a = 10 ); # assignment + display of the assignment's result
# incorrect - does not generate errors, but this construction is useless.
$a == 1; # nothing is done with the result of the comparison
# correct
var_dump( $a == 1 ); # displays the result of the comparison
var_dump( $a === '1' ); # displays the result of the exact comparison
if( $a == 1 ) echo 'egal'; # a check is performed
```
### The ternary operator
PHP offers an operator that takes 3 terms and whose evaluation returns a value. Its syntax is the following:
> ( condition ? true_value : false_value )
Note that true_value, false_value and condition are not statements but expressions (variables, constants, strings, etc), and the operator returns a value, not a variable.
```
print ( 1 == 2 ? 'egal' : 'ne-egal' ); // displays ne-egal
$a = ( 1 == 2 ? 'egal' : 'ne-egal' ); // $a will have the value ne-egal
```
### Increment/decrement operators
Incrementing means increasing a value, usually by one unit, and decrementing is the opposite operation. Like C/C++, PHP makes it possible to increment/decrement through an operator, without needing a separate statement. For example:
```
$a = 1; // initialization
// to increase $a by one unit, we would normally write:
$a = $a + 1;
// using the increment operator we write:
$a++;
// or
++$a;
// to display the new value, we can apply the operator directly inside print:
print ++$a;
```
As can be seen, we can write both $a++ and ++$a. The difference is that when ++ appears before the variable, PHP first performs the increment and then returns the new value. When ++ appears after, the current (non-incremented) value is returned and only then is the variable increased by one unit. Example:
```
$a = 1;
print $a++; // displays 1 - the current value is displayed first, then $a becomes 2;
print $a; // displays 2 - $a has the value 2 after the increment;
$a = 1;
print ++$a; // displays 2 - $a is first increased by one unit, then the new value is displayed
print $a; // displays 2 - $a has the value 2;
```
Note: the same observations (regarding position) also apply to the decrement operator.
```
$a = 2;
print $a--; // displays 2 - the current value is displayed first, then $a becomes 1;
print $a; // displays 1 - $a has the value 1 after the decrement;
$a = 2;
print --$a; // displays 1 - $a is first decreased by one unit, then the new value is displayed
print $a; // displays 1 - $a has the value 1;
```
### Assignment operators
```
$a = 1; // simple assignment
$a += 4; // equivalent to $a = $a + 4; $a now has the value 5
$a -= 1; // equivalent to $a = $a - 1;
$a *= 2; // equivalent to $a = $a * 2;
$a /= 3; // equivalent to $a = $a / 3;
$a %= 2; // equivalent to $a = $a % 2; the remainder of dividing $a by 2
$a = &$b; /* $a is a reference to $b, i.e. both variables refer
to the same entity; if $a changes, $b will change too.
In other words, $a is an alias for $b */
$s = "Salut"; // simple assignment
$s .= " straine!"; // equivalent to $s = $s . " straine!";
```
### String operators
This category includes 2 operators: ".=" (the concatenating assignment operator - see above) and ".". The dot (.) is the string concatenation (joining) operator.
```
print "Text1" . " legat de " . "Text2"; // afiseaza Text1 legat de Text2
$a = "Eu am";
print $a . " mere"; // afiseaza Eu am mere;
```
### The error control operator @
The @ operator is used to suppress the errors or warnings produced by PHP.
```
// $nedefinit = 1; - not executed, so the variable is not defined
print $nedefinit; // Notice: Undefined variable: nedefinit in file.php on line 120
@print $nedefinit; // will not generate any warning / notice
include( "inexistent.php" ); // Warning: include(nedefinit) failed to open...
@include( "inexistent.php" ); // does not display any warning
```
### The execution operator ` `
The execution operator allows applications or operating-system commands to be run directly from PHP. The result of the execution is captured by the script and can be processed or displayed. The ` ` operator is equivalent to the shell_exec function.
```
# in both cases below, the contents of the current directory are displayed
# (the <pre> tags only format the output nicely in an HTML page)
$output = `ls -al`;
echo "<pre>$output</pre>";

$output = shell_exec('ls -al');
echo "<pre>$output</pre>";
```
### Other operators
PHP also provides the following types of operators:
* Arithmetic operators: +, -, *, etc
* Bitwise operators: &, |, ^, ~, <<, >>
* Logical operators: and, or, xor, &&, ||
* Type operators: instanceof
* Array operators: similar to the comparison ones, except that they apply to arrays; the union operator (+) joins two or more arrays
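As a quick illustration of the array operators mentioned in the last bullet (the data is made up):

```php
<?php
$a = array( 'x' => 1, 'y' => 2 );
$b = array( 'y' => 20, 'z' => 30 );

// union: keys already present in $a win; only the missing keys come from $b
$c = $a + $b;
print_r( $c );  // Array ( [x] => 1 [y] => 2 [z] => 30 )

// == is true for the same key/value pairs, regardless of order
var_dump( $a == array( 'y' => 2, 'x' => 1 ) );   // bool(true)
// === additionally requires the same order and the same types
var_dump( $a === array( 'y' => 2, 'x' => 1 ) );  // bool(false)
```

Note that array union is not a merge: values from the right-hand operand never overwrite existing keys of the left-hand one.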
I did not understand the execution operator at all. Could you give another, perhaps simpler and more common example? What does 'ls -al' mean?
The operator tries to run the specified command (as if you chose Start->Run in Windows). In my example, "ls -al" is a command that lists the contents of the current folder (on UNIX/Linux systems), but the operator can run any command/application valid on the operating system you are using. Examples: $output = `dir`; $output = `date.exe /T`; $output = `c:\scripts\ceva.bat`; $output = `php.exe -c some-script.php`; $output = `c:\Windows\notepad.exe`; As a side note, situations where you need to execute something on the server from a PHP script will be fairly rare, but it is good to know the possibility exists.
Thank you, now I understand what it does
I would ask you to help me with two answers: 1: What exactly do I need to learn in order to build a social network 2: What exactly do I need to learn in order to build an accounting program == Thank you, I await an answer

Short answer, for both questions: everything on this site (Forms, Headers, Cookies, Sessions, Files) + other things not yet on the site (Databases, Classes, Templates). Long answer, for both questions: any problem you want to solve (e.g. building a social network) must be "broken down" into concrete activities you can evaluate easily. For example, for a social network you need: a login page, a news-feed page, a posting page, etc. The login page can be "broken down" into: login, account creation, password recovery, etc. If a feature of the site does more than one thing, it must be split into smaller, more specific activities. In the end you will get a (looong) list of very specific activities that you will be able to match on this site, either within the lessons or among the examples (e.g. a login form). That way it will be easier to actually start implementing something and you will know on your own what you need to learn. So, the answer to your questions is one you will be able to give yourself (and only you, nobody else). You could also build your site out of scripts taken off the net, or out of ready-made applications which, if you are a beginner, you will not be able to understand, and the result will be a big bowl of spaghetti. Some people manage with that, others don't. Still, I recommend writing your own scripts, especially if you truly want to learn web programming. Good luck!
after a year I ended up back here =)))) wow, what an answer, thank you very much for your kindness, you're a sweetheart, I was just dreaming back then, ahaaaaaa.. this PHP is really hard, piles of information...
$gasit = false; while(!$gasit) - is that the same as the condition $gasit == false? The second one I understand easily: if gasit is false, let the code run; but !$gasit is killing me )))) I tried to explain it to myself with: $a = 5; $b = 5; if($a != $b){ echo 'true'; }else{ echo 'false'; } but it was no use, because it works differently there... helppppp
an hour later it finally clicked; I don't know if it clicked the right way, but here is how I took it: boolean = true/false, so the ! sign turned false into true, the code started running, and when false became true the ! sign gave true the value false, and as a result the script stopped... look what I'm doing at Christmas ))))) your page is great, if there were video too I'd fall asleep on the monitor, thanks for the answer boss ;p you really do bring happiness to desperate people
I have never seen incrementing, and the difference between pre- and post-increment, explained better than this site does it. It is something simple that I had not understood before.
Thank you for the reliable, correct, well-explained documentation and for the work put into it.
## Strings 3
Strings are well-delimited pieces of text, used in source code for various purposes. PHP has some particularities in the way strings are handled, as can be seen in the examples below.
Important! Strings are expressions (entities that have and return a value). A string can therefore be used not only for display but also in assignments, in checks, etc. In the examples that follow, displaying the strings is the option chosen.
### Strings delimited by double quotes
Strings delimited by double quotes have the particularity that the variables and special characters inside them can be interpreted. Thus, when the text is evaluated, the variables found inside it are replaced with their values and the final result is returned.
Another particularity of these strings is the use of the backslash (the \ character). It has the special role of marking certain characters that cannot normally be included in a text (because they are themselves special characters). For this reason the backslash is called the escape character.
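A short sketch of both behaviors described above (the variable and the texts are chosen for illustration):

```php
<?php
$name = "Ana";

// the variable is interpolated, and \n, \t, \" are interpreted
print "Hello, $name!\n";     // Hello, Ana! (followed by a real newline)
print "She said: \"hi\"\n";  // She said: "hi"

// curly braces can mark the variable name explicitly inside the text
print "{$name}'s book\n";    // Ana's book
```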
### Strings delimited by single quotes
Strings delimited by single quotes do not allow the interpretation of the variables they contain, nor of special characters, with the exception of \' and \\
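By contrast, the same texts with single quotes (again, an illustrative sketch):

```php
<?php
$name = "Ana";

// no interpolation and no \n interpretation - printed literally
print 'Hello, $name!\n';  // Hello, $name!\n

// the only escapes recognized are \' and \\
print 'It\'s here';       // It's here
print 'C:\\path';         // C:\path
```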
### Strings delimited with the special <<< notation
These strings have the advantage that the delimiters do not need to be marked (escaped) via \' or \". Otherwise, these strings are treated in the same way as those delimited by double quotes, in the sense that they allow the interpretation of variables and other special characters.
```
<?php
$a = 'some value'; // example value
print <<<TXT
Delimiters can have any name: TXT, START, etc, with the following conditions:
- both delimiters (opening and closing) must have the same name
- <<< is placed before the first delimiter
- the opening delimiter must not be followed by a space or any other character (important!)
- the closing delimiter must be at the beginning of the line (no spaces before it)
- a semicolon ; is placed after the closing delimiter
- they may contain quotes " or apostrophes ' without needing to escape them
- they allow the interpretation of variables and the display of $a
TXT;
?>
```
In newer versions of PHP (since 5.3.0) the possibility was introduced of defining strings through the special <<< notation without interpreting the contained variables. The example below uses such a string (note the difference at the opening delimiter).
```
<?php
print <<<'TXT'
So $a and \n remain exactly as they are.
TXT;
?>
```
for versions after 5.3.0, when writing with "<<<", how could I get my $a expression executed without breaking the whole text... I mean in that one particular spot, rather than the general mode that breaks the entire text...
I use version 5.5.9, and when writing with "<<<" I have to escape the quotes and the apostrophe; it does not read the <br /> tag, and I get an error from the variable $a and an error from the beginning of the text!
on the line: print <<<'TXT' you need to delete the two apostrophes and you will see the conversion.
## Operations on strings 4
Below are the usual operations on strings and the functions PHP provides for carrying them out.
### String length
```
$s = "acesta este un text";
$sir = "stiu PHP stiu HTML stiu CSS";
# what is the length of the string? (the number of characters)
print strlen( $s ); // 19
```
### Searching for a sequence
```
# check whether a word or text (in this case the word 'PHP') appears in
# the string represented by the variable $sir
if( strstr( $sir, 'PHP' ) !== false ) print 'gasit';
else print "nu am gasit";
# to ignore upper/lower case letters, stristr is used
if( stristr( $sir, 'phP' ) !== false ) print 'gasit';
```
### Displaying a substring
```
# display a section of the string
print substr( $sir, 0, 4); // stiu
print substr( $sir, 5 ); // PHP stiu HTML stiu CSS
print substr( $sir, 5, -3 ); // PHP stiu HTML stiu
print substr( $sir, -3 ); // CSS
# return a single character from the string
print $sir{5}; // P
print $sir{ strlen($sir)-1 }; // S
```
### Transforming the string
```
# replacing sequences
print str_replace( "stiu", "invat", $sir); // invat PHP invat HTML invat CSS
# change the letter case (upper, lower)
print strtoupper( $s ); // ACESTA ESTE UN TEXT
print strtolower( $sir ); // stiu php stiu html stiu css
print ucfirst( $s ); // Acesta este un text
print ucwords( $s ); // Acesta Este Un Text
# remove the spaces at the beginning and the end: trim, ltrim, rtrim
print trim(' ok '); // ok
# "enter" (newline) characters are converted to <br />
print nl2br( "acesta e afisat pe \n 2 linii" ); // acesta e afisat pe <br /> 2 linii
```
### Splitting the string
```
# split the string by a character, a word or another string
$output1 = explode( "stiu ", $sir ); // split by "stiu "
/*
Array (
    [0] => PHP
    [1] => HTML
    [2] => CSS
)
*/
# split the string by a regular expression (regex)
$output2 = preg_split( '/ /', $s ); // split by space
/*
Array (
    [0] => acesta
    [1] => este
    [2] => un
    [3] => text
)
*/
# the inverse operation of splitting a string:
$a = implode( 'invat ', $output1 ); // invat PHP invat HTML invat CSS
$b = join( '-', $output2 ); // acesta-este-un-text
```
Note: implode and join are equivalent (there is no difference between them), whereas explode and preg_split are different.
### Concatenating (joining) strings
```
print 'Text 1' . " legat de " . 'text 2' . "\n"; // Text 1 legat de text 2
// strings resulting from other functions or from variables can also be concatenated
print ucfirst($sir) . '!!! ' . $s;
```
Note: by concatenating strings, a single string is obtained in the end, and it is treated as such, standing on its own. In other words, joining several strings produces a single entity (a single expression). This can be passed as a parameter to functions or statements such as print, which accept a single argument.
### Parsing the string
```
# parse a Query String
$str = "first=value&arr[]=foo+bar&arr[]=baz";
parse_str($str);
print $first; // value
print $arr[0]; // foo bar
print $arr[1]; // baz
parse_str($str, $output);
print $output['first']; // value
print $output['arr'][0]; // foo bar
print $output['arr'][1]; // baz
```
### Safety measures
If the text comes from untrusted sources (such as a comments form), it is advisable to "sanitize" it by removing the elements that can be harmful (HTML tags, special characters, etc).
```
print addslashes( "Baiatu' ia vino-ncoa'!" ); # Baiatu\' ia vino-ncoa\'!
# the inverse function is stripslashes();
print htmlspecialchars( "<a href='test'>Test</a>", ENT_QUOTES );
# displays the escaped form of: <a href='test'>Test</a>
# the inverse function is htmlspecialchars_decode()
print strip_tags( "<b>E bold</b>" ); // E bold
print strip_tags( "<b>E bold</b>", '<b>' ); // <b>E bold</b>
```
What does "first=value&arr[]=foo+bar&arr[]=baz" represent? I can mechanically figure out what gets displayed, but I do not understand the principle; I think I am missing something from the very start. Could you be a bit more explicit about the string-parsing part?
That representation is frequently encountered in web pages, being the format in which data is transmitted through forms. More generally speaking, it is a way of representing data: key=value pairs, separated by the & character. Other common representation formats are XML, CSV, JSON. It is just that PHP has native support for this format and offers the parse_str function for decoding data from it. As an idea, you could use this representation to save some data to a file, read it back later and parse it using parse_str. It is much more convenient and faster than writing it to XML or another file format.
the information here is a bit vague in my opinion .. you could explain the operations on strings a bit better..
it seems quite well structured, but for people (like me) who do not know one iota of programming, it is very, very difficult to learn. I keep running into commands/functions that were never explained - what they are and where they are used. if you know of a tutorial that starts from absolute zero, please recommend it to me.
Date: 2012-08-10
## Control structures in PHP 0
In the PHP language, just as in any other programming language, the statements in a sequence of source code are executed successively, one after another. There are, however, certain statements that change the order in which the lines of code run. For this reason they are called control structures, since they control the flow of execution.
The control structures in PHP are: the conditional structure (the if statement with its variants), the multiple-selection structure (switch), repetitive structures (for, while, do... while, foreach), flow-interruption structures (break, continue, return), the unconditional jump (goto), the inclusion directives (include, require) and the declare directive.
Some of these are rarely used in practice (such as goto or declare), which is why we will not cover them in this guide. The rest will be presented briefly below or in the next lesson.
## The conditional structure if 4
Perhaps the most frequently encountered control structure is the if statement. It is used to execute a sequence of code depending on the truth value of a condition. The syntax is shown below:
This form allows a statement to be executed only if a condition is met. The condition can be any expression along the lines of "2 is less than 3", "the variable $a is defined", etc, translated into PHP.
The statement to be executed can be simple (a single statement) or a block (several statements delimited by curly braces). The rule is that whenever more than one statement needs to be executed, a block must be created (the braces must be used).
Example:
```
<?php
print "Este 3 mai mare ca 1? <br />";
if( 3 > 1 ) {
    print "3 e mai mare ca 1 \n";
    print "<br />";
}
?>
```
Attention! If a block is not used within the if statement, then only the first of the statements present is (or is not) executed based on the evaluation of the condition, while the others will always execute, regardless of the result of the check. For example:
```
<?php
print "Este 3 mai mic decat 2? <br />\n";
if( 3 < 2 )
    print "Da, 3 < 2 - prima linie <br />";
    print "Da, 3 < 2 - a doua linie <br />";
?>
```
The code above will display:
```
Este 3 mai mic decat 2?
Da, 3 < 2 - a doua linie
```
Because the 2 print statements were not placed inside a block, the second statement executed regardless of the truth value of the condition. In practice, only the first print statement was considered part of the if structure. The (logically) correct code is the following, which contains a block of statements:
```
<?php
if( 3 < 2 ) {
    print "Da, 3 < 2 - prima linie <br />";
    print "Da, 3 < 2 - a doua linie <br />";
}
?>
```
### The if - else statement
Quite often it is necessary to also specify an operation that must be performed when a condition is not met. In that case, if - else is used.
This form allows one statement to be executed when the condition is met, or a different one otherwise. The same observations as above apply.
";
print "Da, 3 < 2 - a doua linie ";
} else {
print "Nu, 3 > 2 - prima linia ";
print "Nu, 3 > 2 - a doua linie ";
}
?Salut! ce nu inteleg eu in exemplele de mai sus si nu numai: care e rolul "< br />" in limbaj si de ce nu dispare in output, aparand tot ca "< br />"? arata uratdupa mine si nu-i vad rostul...
The role of <br /> is explained on the page http://php.punctsivirgula.ro/basics/ in the "Ce afiseaza PHP?" section. I recommend re-reading it, because it is important to understand the difference between the text produced by a PHP script and the text displayed in the browser. As an idea, the result will look different when it is viewed in a browser (as an HTML page).
Hi, I have understood everything so far
## The multiple-selection statement 1
A statement resembling if is switch, a structure that allows one or more lines of code to be executed depending on the value of an expression. The switch statement is useful in cases where several possible values of an expression must be checked.
It should be mentioned that, if the break control statement is not used, then once a branch is selected, both the statements of that branch and the statements of the following branches will be executed. Since this may not always be what is wanted, it is important to use break whenever the execution flow of the switch block needs to be interrupted.
The example below will execute the statement print 2; and all the other print statements that follow it, since the flow is not interrupted by break.
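A sketch consistent with that description (the values are chosen for illustration):

```php
<?php
$n = 2;
switch ( $n ) {
    case 1:
        print 1;
    case 2:
        print 2;  // the matching branch...
    case 3:
        print 3;  // ...and every following branch runs (fall-through)
    default:
        print 4;
}
// output: 234

// with break, only the matching branch runs
switch ( $n ) {
    case 1:
        print 1;
        break;
    case 2:
        print 2;
        break;    // stops the switch here
    default:
        print 4;
}
// output: 2
```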
how do I break after a "case" has executed?
## Includerea altor scripturi 0
PHP allows referencing external PHP files and including them in the current execution flow using the include or require functions. It is important to note that the moment a script is included with one of these two functions, it is also immediately executed (interpreted). In practice, the execution flow of the initial script (the one containing the include or require statement) is temporarily interrupted in order to execute the included script; once that execution finishes, the initial flow is resumed.
For example, if we have two files as below, accessing the script script.php will display both messages - this is because, following the include statement, the script config.php is immediately interpreted.
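The two files themselves are not preserved in this copy; a minimal self-contained sketch (the message texts are assumptions) that builds config.php on the fly and then includes it:

```php
<?php
// stand-in for config.php (written here so the sketch is self-contained)
file_put_contents('config.php', "<?php echo 'Message from config.php', PHP_EOL;\n");

// body of script.php: the include interrupts the current flow,
// runs config.php immediately, then resumes below
include 'config.php';
echo 'Message from script.php', PHP_EOL;
// accessing this script displays both messages
```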
Through this behaviour, the include and require statements become useful for organising PHP code and for keeping a simple execution structure by using several files for a single task. We can thus split the code into self-contained PHP scripts that are "glued together" through include or require calls.
For example, we can keep some data declarations in a script called config.php and the actual code in another one, pagina.php. We can then include the configuration file (config.php) directly from the main script (pagina.php), managing this way to keep the code tidy.
### Include or require?
The require function does the same thing as include, but there is one difference between the two: if the file requested for inclusion does not exist, include issues a warning and execution continues, whereas require raises an error and the execution of the code is stopped.
Each of the two functions has a variant: include_once and require_once respectively. As their names suggest, these forms include the specified file only once, so that, if the requested file has already been included, a new call to include_once or require_once will not include it a second time. These forms of the functions are useful when the included files contain declarations that must only be made once.
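As a sketch of the *_once variants (the file and function names here are assumptions, not from the original):

```php
<?php
// helper.php declares a function; declaring it twice would be a fatal error,
// so the file must be included exactly once
file_put_contents('helper.php', "<?php function salut() { return 'salut'; }\n");

include_once 'helper.php';
include_once 'helper.php'; // no effect: the file was already included once
echo salut();
```

With plain include, the second inclusion would attempt to redeclare salut() and PHP would stop with a fatal error.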
Date: 2010-07-26
### Operations with variables. Loop structures
A loop structure is a code sequence that allows performing the same operation repeatedly, a certain number of times. A loop is defined by two elements: the operation being executed and the condition that stops the execution. In some cases the number of executions (or iterations) is also known.
Here are a few applications that implement certain types of loop structures in PHP:
## The for loop
It is used when the number of repetitions (the number of steps to be executed) is known in advance. It has the following syntax:
> for( [statement1] ; [condition] ; [statement2] ) { [statement3] }
where:
* statement1 is a statement executed at the beginning
* condition is an expression which, if evaluated as true, causes the cycle to repeat - it is generically called the repeat condition
* statement2 executes at every step of the repetition
* statement3 is the actual operation repeated inside the for loop
In general, [statement1] is an initialisation expression of the form $i = 1, the repeat condition has the form $i <= the number of steps, and [statement2] is an increment expression, $i++.
```
for( $i = 1; $i <= 10; $i++ ) {
echo $i; // statement3
}
```
In natural language, the statement reads: "starting from $i = 1, execute the operation and increase $i by one unit; then repeat everything as long as $i <= 10".
Note: the statement inside the for loop executes only if the expression (the condition) is true, so there can be situations where the expression is false and [statement3] never executes. Example:
```
for( $i = 0; $i > 10; $i++ ) {
echo 'This statement never runs, because the initial ' .
     'value of $i is not greater than 10';
}
```
for( $i = 0; $i > 10; $i++ ) { echo 'This statement never runs'; } - at first $i is 0, then it checks whether it is greater than 10... if it is greater, it displays "This statement etc."; if not... it increases $i... $i becomes 1... and so on until $i goes past 10, and then either it keeps displaying the message forever or it displays nothing at all because the execution is infinite... I am not sure this is exactly right, but I will check... apologies if it is not.

In fact it is perfectly correct. When the condition is evaluated and it is not met, the "for" stops. If the condition is not met from the start, nothing will be displayed. The "for" runs for as long as the condition is met.

Piny, you explained it correctly ))) but the example above is easier to clarify through its condition )) $i > 10. No, $i is not greater than 10, and for that reason echo is never applied. If $i had the value 11, it would be an infinite cycle and echo would be applied after every $i++ in the endless code.

A reader's experiment, lightly cleaned up:

    <?php
    function suma($a, $b) {
        $total = $a + $b;
        return $total;
    }
    for( $a = 4, $b = 5; $a <= 10, $b <= 10; $a++, $b++ ) {
        echo suma($a, $b), '<br>';
    }
    ?>

for( [statement] , [condition], [statement2] ) { [statement3] } - this should be corrected to "statement1" (or to "statement" below) :D

The condition is not true because the variable was initially declared 0; we cannot have $i = 0 and at the same time $i > 10.

for( [statement1] , [condition], [statement2] ) { [statement3] } - the commas must be replaced with semicolons.
## The while loop
The while statement is used when the number of executions is not known in advance. It has a more intuitive form than for, and many people consider it easier to use. The difference between while and for is that the former is more general and more flexible. One can even say that for is a particular case of a while structure. The syntax is the following:
> while( [condition] ) { [statement] }
It is probably easy to understand that [statement] executes for as long as [condition] is true. Just like with for, it is possible for the statement never to execute.
Below is an example of a while loop that produces the same result as the for code sequence above. Notice that with the while structure the condition is often based on a variable initialised outside the loop and modified inside it. In the case below, $i is modified at every execution until, at some point, it becomes greater than 10, which causes the exit from the loop.
```
$i = 0;
while( $i <= 10 ) {
echo $i;
$i++; // at every step $i increases by one unit
}
```
Note: it is the programmer's responsibility to include, inside the loop body, the code needed to exit the loop as well. If this is not done, the code executes forever. Example:

    $continua = true;
    while( $continua == true ) {
        echo 'Forever', '<br>';
    }
    echo 'This line is never reached';

The correct version of the example above is the following:

    $continua = true;
    while( $continua == true ) {
        echo 'Forever', '<br>';
        $continua = false; # modify the variable tested in the condition
    }
    echo 'Now this line is reached';

A question based on the code given as an example of infinite execution: if I "let loose" such a script, how can I stop or debug it?

    $continua = true;
    while( $continua == true ) {
        echo 'Forever', '<br>';
    }
    echo 'This line is never reached';

Assuming the script is placed on a web server and accessed in a browser: for as long as the page keeps loading, the script runs forever. You can stop it by force only by stopping the page load, or the PHP interpreter will stop it automatically after a period of time (usually 30 seconds). In any case, having an infinite script is a mistake. Loop structures must always have a way to stop.
## The do... while loop
Another loop structure is do... while. The difference from while is that the check is performed at the end, after the code sequence has been executed at least once. A translation into our own words would be: "execute the sequence and, for as long as the condition is true, repeat it".
The do... while structure has the following form:
One can notice a very strong resemblance to the while structure, the only difference being the position of the condition relative to the executed code, which guarantees at least one execution of the code, even if evaluating the condition causes the exit from the loop. Thus, while with for and while it is possible for the code sequence inside the loop never to execute, with do... while the loop will run at least once.
In the example below the code executes once, even though $i does not satisfy the condition. Its check is performed anyway, after the block inside the structure has executed. Evaluating the condition then causes the exit from the loop, without another repetition.
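The syntax block and the one-run example referred to above did not survive in this copy; a minimal sketch (the value 100 is an assumption) would be:

```php
<?php
// general form: do { [statement] } while( [condition] );
$i = 100;
do {
    // this block runs once even though $i does not satisfy the condition
    echo $i;
    $i++;
} while( $i <= 10 );
// the condition is checked only after the block has already executed once
```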
The do... while structure is quite rarely used in practice, but it is useful in situations where at least one execution of the code is needed.
A practical application of the do... while loop is presented below. The code sequence below displays the first occurrence of the value 0 in an array of numbers with a length of at least 1.
```
/* we have a vector with an unknown number of values; we want to search for the
 * value 0 using a loop structure. since the vector has at least 1 element, we
 * can use the do... while structure, which performs at least one repetition */
# the operation: check whether the current element of the vector is 0
# the repeat condition: the current element is not 0 and the end of the vector
#   has not been reached
# note: in this case we can find out the vector's number of elements, which gives
#   the maximum number of repetitions, so the for structure could be used as well
# alternatively, the while structure can also be used

$vector = array( 3, 4, 0, 5 ); # sample data (the original values were not preserved)
$pozitie = 0;                  # start from the first element

// walk the vector until we reach the end or find the value 0
do {
    // first assume that the element at the current repetition is 0
    $gasit = true;
    // check whether the current element (initially the first one) differs from 0
    // in other words, we have not yet found what we are looking for, so move on
    if( $vector[ $pozitie ] != 0 ) {
        $gasit = false;
    }
    // move to the next position to be checked
    $pozitie++;
} while ( !$gasit && $pozitie < count( $vector ) );

if( $gasit == true ) print "Found 0 at position $pozitie";
else print "Did not find 0 in this vector";
```
I have one thing I do not understand here: while ( !$gasit && $pozitie < count( $vector ) ) - how would that translate? "as long as $gasit is false and $pozitie is smaller than the length of the array"? And here, does if( $gasit ) mean the same thing as if( $gasit == true )? Thank you.

In other words: could while ( !$gasit && $pozitie < count( $vector ) ) also be written as while ( $gasit = false && $pozitie < count( $vector ) )?

I used another array, $sir = array(1,5,7,3,0,8,5,6,9,0,2,4), and applying the sequence it only printed that there is a 0 at position 4, but I have another one at position 9. Why was that one not found as well?

Hi Daniel, your statements about $gasit are correct. if($gasit) is equivalent to if($gasit == true), and if(!$gasit) is equivalent to if($gasit == false). [Note: be careful with == and =, they are different!] This is possible because the variable $gasit can itself be evaluated to true or false, just as the comparison $gasit == true can be evaluated to a boolean value. You can use whichever variant you find easier to follow and understand. As for !$gasit, that expression contains the negation operator !, which flips the value of the variable (from true to false and back). It can be read as "$gasit is not true" or "$gasit is false", exactly as you said. Regarding the do...while example, the code searches for the first element with the value 0 and then stops. As the comment at the beginning of the code says, the operation runs for as long as the value 0 has not been found and the end of the array has not been reached: while ( !$gasit && $pozitie < count( $vector ) ); If you want it to display all the 0 values, you must change the continuation condition and keep only the position check: while ( $pozitie < count( $vector ) ); That way all the elements of the array are visited and the loop no longer stops when $gasit is true.
You also have to move the last if inside the loop (so that it displays all the positions): if( $gasit ) print "Found 0 at position $pozitie"; I will adjust the comments in the code a little to make it clearer.

Thank you for the answer, now it is clear to me. Regarding my example with that $sir: I tried to follow your advice and moved the last if into the loop, but I had trouble exiting the loop, because it kept printing "Did not find 0 in this vector, Did not find 0 in this vector, ... Found 0 at position 4, Found 0 at position 4, ..." endlessly; it never reached the 0 at position 9. I did, however, manage with for:

    <?php
    $sir = array(1, 5, 7, 3, 0, 8, 5, 6, 9, 0, 2, 4);
    for( $pozitie = 0; $pozitie <= 11; $pozitie++ ) {
        if( $sir[$pozitie] == 0 ) print "Found 0 at position $pozitie";
    }
    ?>

Where exactly should that if have gone inside the loop for the do...while version to work? I put it right after closing the block of the first if - that is, just before closing the do block and before the while condition.

It was well placed right before the end of the do { ... } block. The line with else should also have been removed, so that the "Did not find..." message is not displayed for the elements different from 0 (or the message changed into a more meaningful one :) As for the infinite repetition: yes, the code did not behave properly once the expression !$gasit was removed from the final condition. That was because the position increment ($pozitie++) only ran at certain steps (inside the if), which was incorrect. I have modified the example so that the statement $pozitie++ executes independently of the other processing in the block. If you now redo your changes on the new example, everything should be OK. Congratulations also for adapting the example to the for variant.
It is correct and works (although I would replace 11 with the function count($sir) for flexibility). Thank you for the answer and for the congratulations :). I have now tried the following version:

    <?php
    $sir = array(1, 5, 7, 3, 0, 8, 5, 6, 9, 0, 2, 4);
    $pozitie = 0;
    do {
        $gasit = true;
        if( $sir[$pozitie] != 0 ) {
            $gasit = false;
        }
        $pozitie++;
        if( $gasit == true ) echo "Found 0 at position $pozitie", "<br/>";
    } while( $pozitie < count($sir) );
    ?>

It displays: Found 0 at position 5, Found 0 at position 10. Now, in a normal situation the two zeros are indeed the 5th and 10th values in the array. But given that in an array the first position is position 0, should it not have displayed the zeros at positions 4 and 9, just as the for example did?

Well, what is displayed is correct, because the $pozitie variable is increased (through $pozitie++) just before the message is printed. That is why the increased values appear in the message. I would say it is actually more correct this way, although it is open to interpretation (I would understand what position 0 means, but a non-technical person would think it is wrong). To display the strict position (starting from 0), the if statement must be moved before $pozitie++.
## Iterating with foreach
PHP offers a very powerful and frequently used loop structure: foreach. It allows iterating over all the elements of a vector. Both simple and associative vectors can be used.
Unlike the other statements, foreach requires no explicitly specified stop condition; it is the PHP interpreter's job to stop the iteration once the end of the vector has been reached.
```
// iterating over a vector with foreach
$vector = array( 3, 4, 5, 1, 2, 9 );
// outputs: 3 4 5 1 2 9
foreach( $vector as $element) {
print "$element ";
}
```
```
// iterating over a vector with foreach, using the keys and the values
$vector = array( 'a', 'b', 'c', 'd', 'e', 'f' );
// outputs: a b c d e f
foreach( $vector as $cheie => $element) {
print "$element ";
}
// outputs: 0 1 2 3 4 5
foreach( $vector as $cheie => $element) {
print "$cheie ";
}
// Note: even though $vector is not explicitly associative,
// its elements still have the implicit keys 0, 1, 2, 3 ...
```
```
// associative vector (explicitly defined with keys)
$zile = array(
    'luni' => 'Mo',
    'marti' => 'Tu',
    'miercuri' => 'We',
    'joi' => 'Th',
    'vineri' => 'Fr',
    'sambata' => 'Sa',
    'duminica' => 'Su'
);
// outputs Mo Tu We Th Fr Sa Su
foreach( $zile as $eng) {
    print "$eng ";
}
// output both the key and the value inside a sentence
foreach( $zile as $rom => $eng) {
    print "$eng means $rom\n";
}
/* outputs
Mo means luni
Tu means marti
We means miercuri
Th means joi
Fr means vineri
Sa means sambata
Su means duminica
*/
```

So how do you display all the zeros in a vector with the foreach() statement???? I did it like this, but with foreach() I could not get it to work )):

    $gasit = false;
    $pozitie = 0; // start from the first element
    while( !$gasit || !( $pozitie == count( $vector ))) {
        if( $vector[ $pozitie ] == 0 ) {
            $gasit = true;
            print "Found 0 at position $pozitie <br>";
        }
        $pozitie++;
    }

With foreach it is even simpler:

    $gasit = false;
    foreach( $vector as $pozitie => $element) {
        if( $element == 0 ) {
            $gasit = true;
            print "Found $element at position $pozitie <br>";
            // break; can be called here if the search should
            // stop after the first 0
        }
    }
    if(!$gasit) {
        echo "Did not find any 0";
    }

Indeed, it is simpler.
Hello. To start with, I have a question about the variables used here: $element, $cheie, $rom, $eng. Are they standard variables of some kind? Don't they have to be declared or initialised somehow? They just appear there and I do not know where they come from...

Those variables are defined by PHP and take, in turn, the values of the elements in the vector. You can use any variable name; the PHP interpreter makes sure that, inside the foreach block, that variable is defined and initialised with the value of the current element at that point of the iteration. For example, in foreach( $vector as $element) { print "$element "; } the variable $element is initialised with the first element of the array $vector (at the first iteration), then with the second element (at the second iteration), and so on, until $element has taken, in turn, every value in $vector. Thus, the loop runs as many times as the vector has elements (and $element takes just as many values). The foreach( $vector as $cheie => $element) variant behaves similarly, with the mention that $element receives the value of each element (as explained above), while $cheie receives the element's position (key) in the vector.
## Interrupting the execution flow
In all the loop structures presented above, the execution can be interrupted, partially or totally, using the continue or break statements.
### continue
The continue statement stops the current iteration of a loop and moves immediately to the next one. With continue, all the statements in the loop body are skipped for the current iteration, and execution resumes with the following iteration. An example:
```
for($i = 0; $i < 10; $i++) {
    /* at every execution where $i <= 5, the print statements are
     * "skipped"; in effect, after the continue call, execution
     * "moves" to the next step, incrementing $i */
    if( $i <= 5 ) {
        continue;
    }
    print $i;
    print "<br>\n";
}
```
In the example above we have a for loop with 10 executions (from 0 to 9). For the iterations where $i is below 6, the continue control statement executes, which skips everything that follows (two print statements) and forces the jump to the next iteration. So, at the end of the executions, only the numbers from 6 onwards are displayed.
Note: continue affects only the current iteration; the other iterations can execute completely, for as long as continue is not called again. There is no difference from one loop structure to another (for, while, do while) - continue behaves the same.
### break
The break statement interrupts the execution of the current structure entirely and moves to the next line of code. With break, all the statements in the loop body are skipped for all the iterations (the current one and the following ones). In effect, the loop body is "exited" by force, and the execution flow continues with the statement after the loop.
```
for($i = 0; $i < 10; $i++) {
    // when $i == 2, we "exit" the structure
    if( $i == 2 ) {
        break;
    }
    print $i;
    print "<br>\n";
}
print "Done!"; // when $i == 2, execution "jumps" straight here, ignoring the remaining iterations
```
Unlike continue, the break statement affects all the iterations of a structure; no further iteration of the loop body takes place after break.
A break call has the same effect in every loop structure. Moreover, the statement can also be used in other control structures, such as switch, where its effect is to "exit" the selection block.
### Nested structures
When there are multiple loop structures, nested one inside another, the break and continue statements can receive as a parameter the level of the structure they affect. For example, if we have 2 for loops as in the code sequence below, the call break 1; refers to the inner for, while break 2; refers to the first for (the one with $i).
```
$n = 7;
for($i = 0; $i < $n; $i++) {
    // at every iteration of $i, another for runs
    echo 'Iteration #', $i, ': ';
    for($j = 0; $j < $n; $j++) {
        if( $i == 4 && $j == 4 ) {
            break 2; // applies to the outer for
        }
        if( $j >= $n - $i ) {
            continue 1; // applies to the inner for
        }
        print "$j ";
    }
    print "\n";
}
```
As a rule, it is recommended to avoid such constructions, since the code becomes hard to follow and problems can easily appear. Most of the time you will not even need to call continue or break with a parameter but, in any case, try to avoid as much as possible interrupting iterations across several levels.
### The difference between break and exit
Since we have talked about interrupting the execution, confusion may arise between break and exit or die. It is true that break, as well as the others, interrupts the execution of the code sequence, only this interruption happens at different levels. Thus, break acts only at the level of the structure from which it is called, while exit and die act at the level of the whole PHP script.
The break statement interrupts the execution of the structure that contains it, but allows execution to continue with the other statements that follow. Exit or die allow no further execution at all.
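A minimal sketch of the difference (the values used are assumptions):

```php
<?php
for( $i = 0; $i < 5; $i++ ) {
    if( $i == 2 ) {
        break;              // leaves only the for loop
    }
    echo $i;                // outputs: 01
}
echo 'after the loop';      // still executed after break

exit;                       // stops the entire script
echo 'never displayed';     // exit (or die) prevents any further execution
```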
Date: 2009-12-04
## Applications - Loop structures
Below are a few applications involving loop structures. They are mostly code fragments, whose only purpose is to get you used to the particularities of the PHP language. Comments are welcome.
Why does nobody answer the comments? Is this an abandoned site? Anyway, if anyone is there: I display a select box with the countries and someone picks one... how do I read their chosen country - that is, they click a send button and the information reaches me or an operator, who can read the country... and then, for example, the city?

Hi Mihai, what you described can be done with forms. There are a few examples on the site that could help you. If you need more information, leave a comment. PS: the comments are read, all of them (I received your e-mail too). The replies, like the lessons and the examples, happen as free time allows :)

The examples are very welcome - the more, the better. I am a beginner trying to find the logic of the scripts, especially since I sometimes see combinations of PHP with CSS and JS, which confuses me a bit. I would like an answer by e-mail to this question: are PHP scripts exclusively PHP, or are there cases where PHP code is mixed with CSS and JS? If they are exclusively PHP, should I understand that the PHP code is written first, then in another document (same editor) the CSS and JS code is written and saved in the Apache or IIS server folder? This way they combine on the web page, either locally or directly on the site, rather than in the same document created with one editor - is that how it works? Thank you.

Notice: Undefined index: email in C:\Program Files\EasyPHP5.3.0\www\email.php on line 4 // here: $emails = $_POST[ 'email' ]; Found a valid email: <EMAIL>

Undefined index: email means that $_POST does not contain an element with the key 'email'. In other words, your form has no <input name="email"> element. Check whether the element's name is written correctly. More details here: http://php.punctsivirgula.ro/form/

Now it is clear :)

Hi... I do not understand why I get an error here: instead of the countries I get 3 errors of the type "Undefined variable: name in D:/php/... on line 10". This is the code I wrote:

    <?php
    $tari = array( 'Ro'=>'Romania', 'En'=>'Anglia', 'Br'=>'Brazilia', );
    echo '<select name="tari">', "\n";
    foreach ($tari as $code => $nume) {
        echo '<option value="', $code, '">', $name, '</option>', "\n";
    }
    echo "</select>";
    ?>

The errors are caused by the fact that you use the variable $name, while inside the foreach the variable $nume is defined (name/nume are different). And there are 3 errors because the echo statement inside the foreach structure executes 3 times (once for each element of the $tari vector), and each execution generates one error. Correct the variable name and it will work.

What condition should I put in the while so that it does not stop after finding the first 0? The same thing as in a previous example: the array has several 0s and I want to find them all. I did the example with the for cycle and it came out much easier for me. Honestly, that $gasit business feels clunky to me... it ties my brain in knots :). Whenever I think I have understood it, I realise I have sunk even deeper into its mystery. This is my example with the for cycle:

    <?php
    $sir = array(0, 3, 4, 5, 1, 2, 9, 76, 42, 2, 9, 6, 0, 4, 1, 10, 0);
    $n = count($sir);
    for( $i = 0; $i < $n; $i++ ) {
        if( $sir[$i] == 0 ) {
            echo "Found 0 at position $i", "\n", "<br/>";
        }
    }
    ?>

But I would like to ask something about this example too. How do I display, in case there happens to be no 0 in the array, a message like "There is no 0 in this array"? If I put it inside the for, it displays every time. If I put it outside the structure, then in my example it displays, but only after all the "Found 0 at position..." messages. I would like it displayed only when there is no 0 in the array, and at the same time, when there is a 0 in the array, not displayed at all.
I hope I have not put it too convolutedly. Thank you :)

First question: to display all the 0 values, change the loop's exit condition and keep only the position check: while ( $pozitie < count( $vector ) ); That way all the elements of the array are visited and the loop no longer stops when $gasit is true. Second question: to display a message when the element is not found, you must somehow mark the fact that it does not appear in the vector. You can only do that with a variable, one that takes a certain value only when the element is encountered (you still cannot escape $gasit :). So, inside the if, next to the echo, you will have to put $gasit = true, or $marcaj = 1;, or anything else (it does not always have to be $gasit). Initialise this variable at the very beginning with a different value, say $marcaj = 0;, and then, after the for loop, make one more check: if the variable still has the same value as at the beginning, if ($marcaj == 0), it means the element does not exist (the if inside the for was never entered) and in that case you can display a specific message. I hope what I am saying makes sense to you. Good luck!

To find both of them, the code is the following, in case anyone is interested ;)

    <?php
    @$emails = $_POST[ 'email' ];
    $emails = array( '<EMAIL>', '<EMAIL>', 'zzzz', 'aaa' );
    $gasitInvalid = false;
    $n = count( $emails );
    $i = 0;
    while ($gasitInvalid == false && $i < $n) {
        if (strpos ($emails[$i], '@') === false || strpos ($emails[$i], '.') === false) {
            $gasitInvalid = true;
            echo 'Found an invalid email: ', $emails[$i], "<br>";
        }
        $i++;
        $gasitInvalid = false;
    }
    $gasitInvalid = true;
    if (!$gasitInvalid) echo 'All the emails are valid!<br>';
    ?>
## How do I define a vector whose elements have incremental values?
```
/* we want to build a vector in which every element is initialised
 * with a dynamic, incremental value; we therefore have one and the same
 * operation repeated several times, which is why we will use a for */
$vector = array();
$n = 10;
# the repeated operation: defining and initialising one element of the vector
# the continuation condition: $n repetitions have not yet been performed
# note: in this case we know the number of repetitions, given by the variable $n
for( $i = 1; $i <= $n; $i++) {
# between the parentheses we specify the initialisation expression $i = 1,
# the continuation condition $i <= $n and an iteration expression $i++
$vector[ $i ] = "php$i"; # this is the operation
}
print_r( $vector );
/* outputs:
Array
(
[1] => php1
[2] => php2
[3] => php3
[4] => php4
[5] => php5
[6] => php6
[7] => php7
[8] => php8
[9] => php9
[10] => php10
)
*/
```
## How do I display a list of links?
```
# the repeated operation: displaying a list item with dynamic values
# the continuation condition: $n repetitions have not yet been performed
# we need the list of links defined in a vector.
$links = array(
    'www.punctsivirgula.ro',
    'php.punctsivirgula.ro',
    'php.punctsivirgula.ro?despre',
    'php.punctsivirgula.ro?legal'
);
# determine the length of the list
$n = count( $links );
# knowing the length, use a for structure to display the list
# (the original echo statements were lost; the <ul>/<li> markup below is an assumption)
echo '<ul>', "\n";
for( $i = 0; $i < $n; $i++ ) {
    echo '<li><a href="http://', $links[ $i ], '">', $links[ $i ], '</a></li>', "\n";
}
echo '</ul>', "\n";
```
Result:
> – www.punctsivirgula.ro
> – php.punctsivirgula.ro
> – php.punctsivirgula.ro?despre
> – php.punctsivirgula.ro?legal
## How do I display a select element with values from 1 to 100?
```
/* similar to the previous example */
$end = 100;
# the repeated operation: displaying one option of a select element
# (the rest of the original code was lost; the markup below is an assumption)
echo '<select name="numar">', "\n";
for( $i = 1; $i <= $end; $i++ ) {
    echo '<option value="', $i, '">', $i, '</option>', "\n";
}
echo '</select>', "\n";
```
The result of the code above is the following:
## How do I display a select element with all the countries?
```
# (the beginning of the original code was lost; the array opening and the 'AF'
#  key below are assumptions, reconstructed from the remaining entries)
$tari = array(
'AF'=>'Afghanistan',
'AL'=>'Albania',
'DZ'=>'Algeria',
'AS'=>'American Samoa',
'AD'=>'Andorra',
'AO'=>'Angola',
'AI'=>'Anguilla',
'AQ'=>'Antarctica',
'AG'=>'Antigua And Barbuda',
'AR'=>'Argentina',
'AM'=>'Armenia',
/* ... */
'WS'=>'Western Samoa',
'YE'=>'Yemen',
'YU'=>'Yugoslavia',
'ZM'=>'Zambia',
'ZW'=>'Zimbabwe'
);
# with an associative array it is harder to access the elements by index, so we
# will no longer use for. we will use an iterator instead
# (the echo statements below are a reconstruction; the markup is an assumption)
echo '<select name="tara">', "\n";
foreach( $tari as $cod => $nume ) {
    echo '<option value="', $cod, '">', $nume, '</option>', "\n";
}
echo '</select>', "\n";
```
Result:
## How do I search for a value in a vector?
```
/* we have a vector with an unknown number of values; we want to search for the
 * value 0 using a loop structure */
# the repeated operation: check whether the current element of the vector is 0
# the stop condition: the current element is 0 or the end of the vector was reached
# note: although we can find out the number of elements of the vector, from which
#   the maximum number of repetitions can be determined, the while structure is used

$vector = array( 3, 4, 0, 5 ); # sample data (the original values were not preserved)
$gasit = false;                # nothing found yet
$pozitie = 0;                  # start from the first element

// walk the vector until we reach the end or find the value 0
while( !$gasit ) { // equivalent to while( $gasit == false )
    // check whether the current element (initially the first one) is 0
    if( $vector[ $pozitie ] == 0 ) {
        $gasit = true;
        print "Found 0 at position $pozitie";
    }
    // move to the next position to be checked
    $pozitie++;
    // unlike "for", the increment must be performed explicitly
    // check whether we have reached the end of the vector
    if( $pozitie == count( $vector ) ) {
        $gasit = true; // so it does not repeat any more
        print "Did not find 0 in this vector";
    }
}
# in natural language, the while statement can be read as "for as long as
# the condition is met, execute the operation"
```
The example above is given in order to understand when and why 'while' is used. The whole check can be done much faster using a function offered by the language: array_search.
```
$pozitie = array_search( 0, $vector );
if( $pozitie === false ) print "Did not find 0 in this vector";
else print "Found 0 at position $pozitie";
```
## How do I validate several emails entered by a user?
```
/* the beginning of this example was lost in extraction; the e-mail list and
 * the validation test below are reconstructions */
$emails = array( 'ana@example.com', 'zzz', 'ion@example.com' );
$gasitInvalid = false; // no invalid e-mail found yet
$i = 0;
while( !$gasitInvalid && $i < count( $emails ) ) {
    if( !filter_var( $emails[ $i ], FILTER_VALIDATE_EMAIL ) ) {
        $gasitInvalid = true;
        echo "Am gasit un email invalid: {$emails[ $i ]}";
        //break;
        /* break could be used here, but it is pointless because the while
           loop will be exited anyway thanks to $gasitInvalid being true */
    }
    $i++; # advance $i
}
# if the end of the e-mail list was reached and $gasitInvalid is still false,
# it means there is no invalid element
if( !$gasitInvalid ) echo 'Toate email-urile sunt valide!';
```
Result:
> Am gasit un email invalid: zzz

Note: the for loop could also have been used here, since the number of steps to perform was known in advance. However, with this implementation, execution stops as soon as an invalid e-mail is found (the while loop is exited), so the code can be faster than a for-based version.

For example, if we have an array of 15 elements and an invalid e-mail sits on the second position, execution ends after 2 steps: in the first step the first element is checked and found valid, and in the second step an invalid e-mail is found and the while loop is exited (because $gasitInvalid is no longer false).
Date: 2010-07-10
## Array operations

The classic operations (searching, sorting, inserting, etc.) can be performed very easily with the help of specialized functions, without manually traversing the arrays. Here are a few examples.
```
$vector = array( 1, 2, 3, 4, 5, 6 );
$vectAs = array(
    'unu'  => 'one',
    'doi'  => 'two',
    'trei' => 3
);
// the length of an array (the number of elements)
print count( $vector ); // 6
// is a value found in the array?
print in_array( 3, $vector ); // true
// does a certain key exist in the array?
print array_key_exists( 'trei', $vectAs ); // true
// returns all the keys of the array:
print_r( array_keys( $vectAs ) ); // Array ( [0] => unu [1] => doi [2] => trei )
// returns all the values of the array:
print_r( array_values( $vectAs ) ); // Array ( [0] => one [1] => two [2] => 3 )
// returns a part of the array
print_r( array_slice( $vector, 3 ) ); // from position 3 to the end: 4, 5, 6
print_r( array_slice( $vector, 3, 2 ) ); // from position 3, two elements: 4, 5
print_r( array_slice( $vector, -5, 3 ) ); // 5 positions back from the end, three elements: 2, 3, 4
// uses the keys of an array as variable names:
extract( $vectAs );
print $unu;  // one
print $doi;  // two
print $trei; // 3
// sorts an array
sort( $vector );  // 1, 2, 3, 4, 5, 6
rsort( $vector ); // 6, 5, 4, 3, 2, 1
asort( $vectAs ); // sorts the values and keeps the key association
// extracting the unique elements of an array
print_r( array_unique( $vector ) );
// appends a value at the end of an array
$vector[] = 7;
array_push( $vector, 8 );
# note: the 2 instructions above are equivalent
// modifies a certain element of the array
$vectAs[ 'trei' ] = 'three';
```
I tried to "build" an array from the elements submitted through a form:

```
<!-- The form -->
<html>
<head><title>Fill array</title></head>
<body>
<form action="" method="post">
<input type="text" name="element" />
<input type="submit" value="Update array" />
</form>
</body>
</html>
<hr />
<?php
$element = $_POST["element"];
// if $vector exists, append $element to its end
// if not, create the array and print "Vector gol" (empty array)
if (isset($vector)) {
    array_push($vector, $element);
} else {
    $vector = array();
    echo "Vector gol";
}
?>
```

The problem is that it always prints "Vector gol"; something seems wrong with that isset. Any idea what it is? Thanks!

Hi. The problem comes from the fact that all variables defined in PHP are non-persistent (stateless). That means they exist and hold a value only while that script is being processed. When the execution of the PHP code ends, the values are lost. So, on a new run of the same script (that is, a page refresh, when the PHP code executes again) all variables are re-initialized and do not "remember" their previous values. Navigating through a site with several PHP pages is therefore a series of independent executions of PHP scripts and, normally, one page "does not know" what variables were defined on other pages or during a previous execution. In your example, submitting the form initializes the $vector variable, but its value is lost when the execution ends. When you submit again, the same story repeats, with PHP making no connection between the two submits. This is the default behavior. There is, however, a mechanism for persisting data while navigating from one page to another. This mechanism is called a session.

See http://php.punctsivirgula.ro/sesiuni/ for details. Below is the modified code, using sessions:

```
<?php
session_start(); // open a browsing session
$vector = $_SESSION['vect']; // retrieve the old value, if it exists
$element = $_POST["element"];
// if $vector exists, append $element to its end
// if not, create the array and print "Vector gol"
if (isset($vector)) {
    array_push($vector, $element);
} else {
    //$vector=array();
    $vector = array($element); // add the element right away
    echo "Vector gol";
}
// finally, persist the value of $vector
$_SESSION['vect'] = $vector;
?>
```

When I use print with in_array to see whether a certain element is in an array, I get the value 1 when it belongs to the array, and nothing at all when it does not. I never see true or false. Am I doing something wrong, or is it fine for it to appear this way? Thank you.

It is fine for it to appear this way, because the result is evaluated as a string when it is printed. When converted to a string, the boolean value "true" becomes "1" and the value "false" becomes the empty string "". To display a more accurate representation of boolean values, use var_dump instead of print.

I tried to see how sort works:

```
<?php
$sir = array (2,4,3,7,1,6,8,9,3);
sort($sir);
?>
```

but when I run it nothing is printed. Should it sort the array in ascending order, or at least display it as it is? I mention that after sort($sir) I also tried print($sir) and I get an error, as if something is not recognized...

Sorry for the trouble, but I figured it out in the meantime. I used print_r($sir) for arrays and it did indeed print it sorted in ascending order. :)
## Application: display the type of browser used by the visitor

We will use the predefined $_SERVER variable, which contains the HTTP_USER_AGENT element. It holds the identification text sent by the browser to the server. We will search this text for the names of known browsers and print an appropriate message.

To start, we define a list of known browsers.
```
<?php
// the list of known browsers; the original values were lost in extraction,
// so the names below are assumptions
$browser = array( 'Firefox', 'Chrome', 'Safari', 'Opera', 'MSIE' );
$gasit = false; // becomes true once the browser has been identified
```

Next, we take each browser in turn and check whether it appears in the browser's identification text.
```
foreach( $browser as $i => $b ) {
    // $b takes, in turn, each value from the $browser array
    // $i represents the position of $b in the array
    if( stristr( $_SERVER[ 'HTTP_USER_AGENT' ], $b ) ) {
        $gasit = true;
        print "Folositi {$browser[ $i ]}!";
    }
    if( $gasit ) break; // stop searching once the browser has been identified
}
?>
```

The result is shown below (note: try accessing this page with different browsers to see the message change): `Folositi Chrome!`
What does the stristr() function do?

When you do not know what a function does, just click it in the examples given and you will find there all the documentation you need about that function.

I get the error "undefined index: HTTP_USER_AGENT". What is going on???

if( stristr( $_SERVER[ 'HTTP_USER_AGENT' ], $b ) ): if the browser $b is found inside $_SERVER[ 'HTTP_USER_AGENT' ] then... did I understand that if correctly? And, whether I did or not, what does $_SERVER[ 'HTTP_USER_AGENT' ] represent? Is it a string?

For Bogdan: your browser is probably configured not to transmit its identifier, and for this reason HTTP_USER_AGENT is not defined. This is perfectly valid, and a PHP script cannot control what it receives from a user.

For Daniel: HTTP_USER_AGENT is a text (a string) transmitted by the browser when it makes a request to the web server. That string differs from application to application, but almost all browsers transmit their name, version and operating system. An example of a user-agent text is: "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:26.0) Gecko/20100101 Firefox/26.0", which identifies the Firefox browser, version 26.0, on Windows. The script checks, in turn, whether the elements of the array appear in the user-agent text and, if so, a message is printed. See in the PHP manual what the stristr function returns (http://ro1.php.net/stristr). Note: to check what user-agent your browser transmits, visit http://whatsmyuseragent.com/
## Debugging
```
// printing a 'dump' of the array: a list of all its elements.
// It is usually used for debugging:
print_r( $vectAs );
/* prints
Array
(
    [unu] => one
    [doi] => two
    [trei] => 3
)
*/
// prints a list of the array's elements just like above, except that
// the type of each element is also specified.
var_dump( $vectAs );
/* prints
array(3) {
  ["unu"]=>
  string(3) "one"
  ["doi"]=>
  string(3) "two"
  ["trei"]=>
  int(3)
}
*/
```
## Common operations with time indicators

The PHP language offers simple (yet powerful) solutions for working with dates and time. Newer versions (after 5.1) introduced advanced facilities such as DST (Daylight Saving Time), timezones, date intervals, etc. Before these, however, working with time was done using the Unix timestamp.

The concept of Unix timestamp designates the number of seconds elapsed since 1 January 1970 (considered the start of the Unix era).

> The current Unix timestamp: 1699745159

Since this is a positive number that always has the same reference point, it is easy to perform operations such as subtracting, adding or comparing two dates represented as Unix timestamps.

Below are a few applications meant to show how easily dates and times can be manipulated in PHP.
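Because a timestamp is just a count of seconds, date arithmetic reduces to plain integer arithmetic. A minimal sketch (the variable names and values are illustrative):

```php
<?php
$acum  = time();        // the current moment, as seconds since 1 January 1970
$oZi   = 24 * 3600;     // one day expressed in seconds
$maine = $acum + $oZi;  // "tomorrow" as a timestamp
// the difference between two moments is again a number of seconds
echo ( $maine - $acum ) / 3600; // prints 24 (hours between the two moments)
// comparing two dates is a plain numeric comparison
var_dump( $maine > $acum );     // bool(true)
```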
What date does 1253137399 correspond to?

```
@setlocale(LC_ALL, 'ro_RO.ISO8859-2', 'ro_RO.ISO-8859-2', 'ro', 'ro_RO', 'ro_RO', 'rom', 'romanian');
```

@setlocale does not really work for me (or maybe I do not know how to use it). Can anyone give more details?

```
$t = setlocale(LC_ALL, 'ro_RO.ISO8859-2', 'ro_RO.ISO-8859-2', 'ro', 'ro_RO', 'ro_RO', 'rom', 'romanian');
//echo "Pe acest sistem locala a ramas '$t'" . '<br>';
//echo strftime("%A %e %B %Y", mktime(0, 0, 0, 12, 22, 1978)) . "<br>"; // vineri decembrie 1978
echo strftime("%A, %d %B %Y") . "\n"; // sambata, 05 mai 2012
```

How could I add and subtract two moments in time?

See the example below entitled "How do I determine what day and time it will be in 480 hours?"
## How do I find a date in the past or the future?

There are several possibilities. You can start from a known date (for which you have the Unix timestamp) and, by adding/subtracting seconds, reach the desired date (see the application below). Or you can parse a date using strtotime. These solutions are not always applicable, and in those cases the mktime function is used.

The mktime function receives as parameters the hour, minute, second, month, day and year for which the details are wanted. The result is a timestamp that can be formatted as desired.

Important: before PHP 5.1.0, negative timestamps were not supported on some systems (including Windows). For this reason, the valid range of years was limited to between 1970 and 2038.
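A minimal mktime() sketch along the lines described above (the chosen date is illustrative):

```php
<?php
// parameter order: hour, minute, second, month, day, year
$timestamp = mktime( 0, 0, 0, 7, 4, 2000 ); // midnight, 4 July 2000
echo date( 'd.m.Y', $timestamp ); // prints 04.07.2000
```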
Cool stuff.

echo strftime("Ultima zi din Februarie 2000 este: %d", $ultima ); What is the role of that %d? What does it mean?

The "%d" construct in that string is an indicator specific to the strftime function, expressing how the $ultima variable should be displayed. %d means "day of the month", so the day will be extracted from the $ultima timestamp and printed in place of "%d". For details about these date-formatting indicators, see the PHP manual: http://ro1.php.net/strftime
## How do I format a date or a time?

Formatting a date or a time can be done easily using the date function. With it, only the needed parts of the date can be "extracted", and the desired format can be specified. For example, using the date function we can display the date in a long format (such as "marti, 30 septembrie 2001") or a short one (30.11.2001), or with the same function we can display the time, or any combination of the two.

The format is specified through a text made of characters with a particular meaning, passed as a parameter to the date function. For complete details about each character, visit http://www.php.net/manual/ro/function.date.php.
```
<?php
// the first lines of this example were lost in extraction and are reconstructed
$acum = time();
echo date("H:i:s") . "\n";   // prints 01:25:59
echo date("Y-m-d") . "\n";   // prints 2023-11-12
echo date("F j, Y, g:i a") . "\n"; // prints November 12, 2023, 1:25 am
echo date("l") . "\n"; // prints Sunday
# a second parameter can be passed, representing the date/time to be formatted
echo date("d-m-Y H:i:s", $acum) . "\n"; // prints 12-11-2023 01:25:59
echo date("D, M d, Y", 1072915200) . "\n"; // prints Thu, Jan 01, 2004
# combined with mktime, information about past or future days can be obtained
# for example, what day of the week was 4 July 2000
$iulie4 = mktime( 0, 0, 0, 7, 4, 2000 ); # get the timestamp
echo date( 'l', $iulie4 ); # format that timestamp: extract only the day name
// prints Tuesday
?>
```

What you are doing here is very useful. Thank you!
## How do I determine what day and time it will be in 480 hours?

To perform operations with days and hours easily, the Unix date format (UNIX timestamp) returned by the time() function is used. In essence, the date is expressed as the number of seconds elapsed since 1 January 1970, 00:00. With this new perspective (viewing dates as a number of seconds), date operations (of the kind 'what time will it be in 2 days', 'what time was it xxx minutes ago', etc.) become extremely simple, taking the form of plain additions and subtractions of seconds.
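The computation described above can be sketched as follows (the output format string is an assumption, modeled on the other examples on this page):

```php
<?php
$acum   = time();     // the current moment, in seconds
$ore480 = 480 * 3600; // 480 hours expressed in seconds
echo 'Peste 480 de ore va fi: ' . date( 'l, d.m.Y, H:i:s', $acum + $ore480 );
```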
```
<?php
$acum = time();
$d = 100*365*24*3600;
echo "Peste 100 de ani va fi data de: " . date( "l, d.m.Y, H:i:s", $acum+$d);
?>
```

This prints "Peste 100 de ani va fi data de: Tuesday, 06.12.1977, 14:01:42", a date in 1977 instead of one 100 years ahead. Where am I going wrong?

Due to a limitation in PHP, the date function supports moments up to 19 January 2038. After that date, time is measured (again) from the beginning of the valid interval, which can result in seemingly nonsensical dates. I explained this in more detail here: http://www.punctsivirgula.ro/289/data-in-viitor-cu-strtotime/ In short, you must take care that your dates do not go past 2038 and everything will be OK.

```
$acum = time();
$maitarziu = 2*365*24*3600;
for($i=(date('y', $acum)), $i<=(date('y', $acum+$maitarziu)), $i++) {
    if($i%4==0) { $a += 24*3600; }
}
echo "Peste 2 de ani va fi: ", date('l, d-m-y h:i:s', $acum+$maitarziu)."<br>";
```

I am trying to handle leap years with the code above, but I cannot get it to work. Can anyone help me?

```
$acum = time();
$maitarziu = 2*365*24*3600;
for($i=(date('y', $acum)); $i<=(date('y', $acum+$maitarziu)); $i++) {
    if(($i%4==0) && ($i%100!=100)) { $maitarziu += 24*3600; }
}
echo "Peste 2 ani va fi data de: ", date('l, d-m-y h:i:s', $acum+$maitarziu)."<br>";
```

Done, I fixed it. In essence, the error was that I had used "," instead of ";" inside the for. Thanks anyway!

Yes, indeed, the syntax used was not correct. I am glad it got resolved.
## How do I validate a date or a time?

There are several ways to validate the dates and times entered by a user, depending on their format. Most commonly, though, the 2 timestamp functions are used: mktime and strtotime. These return particular values when the checked date is not valid.
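The two approaches can be sketched as follows. checkdate is used here for the absolute day/month/year check (an assumption, since the original listing was lost in extraction), while strtotime returns false for expressions it cannot parse:

```php
<?php
// absolute validation: does 29 February exist in a given year?
var_dump( checkdate( 2, 29, 2010 ) ); // bool(false): 2010 is not a leap year
var_dump( checkdate( 2, 29, 2008 ) ); // bool(true)

// validation of an expression via strtotime
$rezultat = strtotime( 'abcdef' ); // not a parsable date
if( $rezultat === false ) echo 'Data nu exista';
else echo 'Data este valida: ' . date( 'd.m.Y', $rezultat );
```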
The result of the strtotime() example is wrong. 29 February 2010 does not exist.

If I enter the date 35 February 2010 it still tells me the result is valid? I do not understand why, and it is the same with the strtotime function...

These two examples are rubbish... you can enter 80 February 2004 as well and it still prints "Data este valida"... go fix them...

The strtotime function is "additive" with respect to days, that is, it will "roll over" into the next month. For example, "31 february 2010" will be considered "3 march 2010". In essence, the difference of 3 days beyond the 28 days February had (in 2010) was added to the following month. Indeed, this is not a good way of validating dates; I will update the example on the site. Sample code with strtotime:

```
$rezultat = strtotime( "31 february 2010" );
if( $rezultat === FALSE || $rezultat === -1 ) {
    echo "Data nu exista";
} else {
    echo "Data este valida: " . date( 'd.m.Y', $rezultat); // Data este valida: 03.03.2010
}
```

The strtotime function is used mainly to validate dates expressed in natural language, such as "2 hours ago", "in 3 weeks", "first Friday in December", etc. Dates expressed absolutely (day, month, year) can be validated with the other method.

```
<?php
$m = strtotime ("87 years ago");
date_default_timezone_set("Europe/Bucharest");
if ($m===false||$m===-1) echo "Nu exista aceasta data";
else echo "Acum 87 de ani a fost data de: " . date ("l, d.m.Y, H:i:s", $m);
?>
```

This prints "Acum 87 de ani a fost data de: Saturday, 05.02.1927, 20:40:59". Using this method, how do I check what date it will be in 87 years?

Use the "+x years" expression when calling strtotime. For example: $m = strtotime("+10 years"). You must still keep in mind, though, a limitation of PHP in representing time. Details at http://www.punctsivirgula.ro/289/data-in-viitor-cu-strtotime/
## How do I display a greeting depending on the current (server) time?
```
<?php
$h = date( 'G' ); // the current hour, 0-23 (reconstructed; this line was lost in extraction)
if( $h >= 7 && $h <= 11 ) print "Buna dimineata!";
elseif( $h > 11 && $h < 18 ) print "Buna ziua!";
elseif( $h >= 18 && $h < 22 ) print "Buna seara!";
elseif( $h >= 22 ) print "Noapte buna!";
else print "Ce? Esti treaz(a) la ora asta?";
// Ce? Esti treaz(a) la ora asta?
?>
```
## How do I display the current month and all 12 months in Romanian?

Note: this code snippet depends on the configuration of the server it runs on. The Romanian locale must be installed on the server to obtain the expected results.
The code below is a reconstruction (the original listing was lost in extraction); it assumes setlocale and strftime, which the surrounding discussion refers to:

```
<?php
setlocale( LC_TIME, 'ro_RO' ); // the Romanian locale must be installed
for( $luna = 1; $luna <= 12; $luna++ ) {
    echo strftime( '%B', mktime( 0, 0, 0, $luna, 1 ) ) . ' ';
}
echo 'Luna curenta este: ' . strftime( '%B' );
?>
```

The result (it may not be in Romanian!): > January February March April May June July August September October November December Luna curenta este: November
A safe alternative would be the following:

```
$timp_setat = time(); // the moment for which we want the day and the month in Romanian
$ziua = array('Duminica','Luni','Marti','Miercuri','Joi','Vineri','Sambata')[date('w',$timp_setat)]; // the name of the day
$luna = array('','Ianuarie','Februarie','Martie','Aprilie','Mai','Iunie','Iulie','August','Septembrie','Octombrie','Noiembrie','Decembrie')[date('n',$timp_setat)]; // the name of the month
echo 'Ziua setata este '.$ziua.', iar luna setata este '.$luna;
```
Date: 2011-08-10
## Common instructions used in PHP

Besides the common instructions mentioned in the previous lessons (the output instructions, the control structures, the instructions for arrays, strings or time) there is a series of other PHP functions used heavily by any programmer.

We detail a few of them below.
### The mail() function

PHP can send e-mails if an e-mail application (a mail server) is also installed on the current server. No additional configuration is needed as long as the mail server has no restrictions and can be accessed by local applications. On a personal-computer installation, such as the one described in the first pages of this tutorial, sending e-mail messages is not possible, and the mail function will return an error. On a completely and correctly configured web server, such as the servers offering site hosting, the mail function works correctly.
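A minimal sketch of a mail() call (the addresses below are placeholders; on a local installation without a mail server the function returns false):

```php
<?php
$destinatar = 'destinatar@example.com'; // placeholder recipient
$subiect    = 'Test';
$mesaj      = 'Acesta este un mesaj de test trimis din PHP.';
$antete     = 'From: expeditor@example.com'; // placeholder sender

if( mail( $destinatar, $subiect, $mesaj, $antete ) ) {
    echo 'Mesajul a fost trimis';
} else {
    echo 'Mesajul nu a putut fi trimis';
}
```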
### Math functions

The PHP language provides most of the usual mathematical functions. A few of them are presented below for illustration. The complete list can be found on the PHP documentation site.
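A few of the usual math functions, for illustration:

```php
<?php
echo abs( -5 );      // 5    : absolute value
echo pow( 2, 10 );   // 1024 : raising to a power
echo sqrt( 16 );     // 4    : square root
echo round( 3.6 );   // 4    : rounding to the nearest integer
echo ceil( 3.1 );    // 4    : rounding up
echo floor( 3.9 );   // 3    : rounding down
echo max( 1, 7, 3 ); // 7    : the largest of the arguments
echo rand( 1, 10 );  // a random number between 1 and 10
```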
### The set_time_limit() function

The set_time_limit function is used to configure the maximum time the current script is allowed to run. It is useful when the PHP code must execute a large volume of operations that could take tens of seconds (for example a file upload). If the script is still running after the time expires, the server interrupts the execution and returns an error.
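A minimal sketch (the 120-second limit is an arbitrary example):

```php
<?php
set_time_limit( 120 ); // allow the current script to run for at most 120 seconds
// ... long-running processing, e.g. handling a large file upload ...
```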
### The flush() function

The flush function sends to the browser everything the script has printed so far. Normally, the output of a PHP script is sent to the browser for display only at the end of the execution of the whole script. This function offers the possibility of sending the output to the browser as it is printed in PHP. This means the page can be displayed partially while it is still loading.
```
<?php
echo 'Text 1<br/>';
flush();    // send "Text 1" to the browser right away
sleep( 5 ); // a long-running operation is assumed here; the original delay was lost in extraction
echo 'Text 2<br/>';
flush();
?>
```

### Ending execution

There are situations in which stopping the execution of a PHP script is desired. This is possible using one of the two functions below.

```
<?php
echo 'Before';
exit;         // execution stops here (a sketch; the original example was lost in extraction)
echo 'After'; // this line is never executed
?>
```

Note: the die and exit instructions are equivalent.
### base64 transformation functions

The base64 encoding functions are used for encoding/decoding a text to and from the Base64 format.
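A short illustration:

```php
<?php
$text      = 'Salut PHP';
$codificat = base64_encode( $text );
echo $codificat;                  // prints U2FsdXQgUEhQ
echo base64_decode( $codificat ); // prints Salut PHP
```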
### The phpinfo() function

The phpinfo function provides information about the current PHP installation and about the server it runs on. The role of this function is purely informative; it cannot be used in a script that has another well-defined purpose. It is therefore recommended to call this function in a separate PHP script, since it creates a complete HTML page.

Another function that may be useful in your scripts is phpversion, which returns only the current version of the PHP interpreter. An example of the use of this function is shown above, at the mail function.
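For illustration (phpinfo is commented out because it would print a whole HTML page):

```php
<?php
echo phpversion(); // the version of the PHP interpreter, e.g. 8.2.12
// phpinfo();      // would print a complete HTML page with configuration details
```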
The set_time_limit function does not work for me, it gives an error.
## Extensions of the PHP language

Besides the base functions, the PHP language offers the possibility of extending its capabilities through extensions. Extensions are libraries usually developed by the creators of the PHP language, offering additional functionality such as the possibility of manipulating certain file types (PDF, Excel, Word), of creating images, of communicating with other applications, etc.

The most common extensions are enabled by default when the PHP interpreter is installed, but most of them must be enabled manually by changing the PHP configuration. Since this tutorial will not use any functionality requiring a PHP extension, we will not insist on them. It is good to know, though, that for specialized and/or advanced operations there is a good chance an extension exists that makes the work easier.

Congratulations for the site.
Date: 2009-04-09
## Functions in PHP

Functions are well-delimited blocks of PHP code (code sequences) identified by a name, which execute a set of operations. Functions can be executed several times within a script simply by calling their name.

There are predefined functions, specific to the PHP language (such as print, empty, etc.), that can be used at any moment without any special action being needed; and there are user-defined functions, written in practice by programmers. For the latter to be usable, they must be declared (and implemented).

Example of a user-defined function:
```
<?php
function afisLuna() {
    # prints a drop-down with the months of the year; the HTML tags and the
    # month list below are reconstructions (they were lost in extraction)
    $luni = array( 'Ianuarie', 'Februarie', 'Martie', 'Aprilie', 'Mai', 'Iunie',
                   'Iulie', 'August', 'Septembrie', 'Octombrie', 'Noiembrie', 'Decembrie' );
    echo '<select>';
    for( $i = 0; $i < 12; $i++ ) {
        # since the number of steps is known, the for loop is used
        echo "<option>{$luni[ $i ]}</option>\n";
    }
    echo '</select>';
}
# below we use the function to display a drop-down with the months of the year:
echo 'Luna inceperii activitatii: ';
afisLuna();
echo '<br/>Luna terminarii activitatii: ';
afisLuna();
?>
```

The result of the PHP code is shown below:
> Luna inceperii activitatii:
Luna terminarii activitatii:

We have thus written the code that displays the months of the year only once, and called it as many times as needed. A function is called by specifying its name followed by parentheses. Parameters can be specified between the parentheses, as we will see below. Alternatively, a user-defined function can be called using the call_user_func instruction.
```
# we will rewrite the last part of the code above
echo 'Luna inceperii activitatii: ';
call_user_func( 'afisLuna' );
echo '<br/>Luna terminarii activitatii: ';
call_user_func( 'afisLuna' );
```

The call_user_func instruction is useful when the name of the function is supplied by a variable, which makes it possible to call a function dynamically. Example:

```
<?php
# the definition of unu() was lost in extraction; its body below is an assumption
function unu() {
    print 'Azi e prima zi din luna';
}
function alta() {
    print 'Azi e o zi obisnuita';
}
# declare a variable whose value is the name of a function
$functie = 'unu';
# the $functie variable can change depending on various conditions;
# in our case, if the current day is the first day of the month, the value is 'unu'
if( date( 'd' ) == 1 ) $functie = 'unu';
else $functie = 'alta';
# finally, dynamically call the function named by the variable
# the interpreter does not know in advance which function it will be; it simply executes what the variable tells it to
# optionally, some validations can be performed:
# - function_exists checks that the given function has been defined
# - is_callable checks that the given variable can be called as a function
if( function_exists( $functie ) && is_callable( $functie ) ) {
    call_user_func( $functie );
} else {
    echo "Nu pot apela functia $functie";
}
// Result (live): Azi e o zi obisnuita
?>
```

It is very important to know that variables defined outside functions are not available inside them. Thus, the code below will not work as we would expect:
The function will not print the message, as you might have thought at first sight. That is because what is defined outside the function is not recognized inside it. Similarly, variables defined inside a function are lost and are NOT available outside it.

There is, however, a way for variables defined outside a function to be 'brought' inside it: using the global instruction.

If several global variables need to be used within a function, they can all be specified in a single global instruction:
```
global $a, $b, $Vector;
```
Very useful. Thank you very much.

Why the heck do you all keep commenting off-topic: very useful, bravo, congratulations, don't forget to buy dill at the market... commenting on an article means addressing the subject of that article.

```
<?php
$sir = array(9,5,7,6,8,3,4);
function mic() {
    $sir = array(9,5,7,6,8,3,4);
    sort ($sir);
    print_r ($sir[0]);
}
print "cel mai mic numar din sir este: ";
mic();
?>
```

Prints: cel mai mic numar din sir este: 3

"example of a user-defined function"
## Why are functions used?

Among the advantages of using functions are:

* code reuse

For example, if the same code sequence must be executed in several parts of a program or script, instead of rewriting the code every time, a function is defined and called several times, as we did in the first example above.

* modularization

Together with functions (subprograms in general) came the concept of modularization, which means splitting (breaking) the problem to be solved into smaller problems. Each module (smaller problem) is a subprogram, implemented as a function, that contributes to the final result. For example, take a (relatively) complex operation: displaying a user's inbox. This problem can be split into smaller/simpler parts. A function would be defined for each part, instead of writing one single very large script, and in the end the code would look something like this:
```
preluareDateAutentificare(); verificareDate(); preluareMesajeInbox(); afisareInbox();
```
* easier maintenance of the code and easier understanding of the logic of the application or script

These are immediate consequences of the first 2 points. If the script is structured, split into smaller pieces in which the same code sequences are not repeated, then it will be easier both to understand and to modify or maintain.

Hi. I came across this superb tutorial and have not stopped reading it. So far I have reached this chapter. Lately I have been interested in optimizing PHP code, if you know what I mean. If possible, I would like you to start a chapter with more hints for making code run faster, more optimally, and consume fewer server resources. Congratulations for the work :)

I am currently taking a PHP course from which I manage to understand nothing; your tutorials are a great help to me. It would be even better if, besides examples, there were also exercises. I mean exercises that have a meaning in the real world: in our course we do exercises ONLY with a and b, 1 and 2, or generic names, and I cannot picture the place where I could use the notions learned, for example: forms, menus, etc.
## Return values. Parameters

Functions often need to return a value. Most predefined functions do this; for example, empty returns TRUE or FALSE depending on the state and contents of a variable passed as a parameter.

User-defined functions can also return a value, with the help of the return instruction. Example:

Functions can also receive data that can be used inside them for various processing. This input data is passed in the form of parameters.

For a function to be able to receive parameters, it must declare them between the round parentheses, as in the example below.
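The minim() function discussed in this section could look like this (a sketch; the original listing was lost in extraction):

```php
<?php
# returns the smaller of the two parameters received
function minim( $a, $b ) {
    if( $a < $b ) return $a;
    return $b;
}
# since minim() returns a value, the call can be used like an ordinary number
echo minim( 3, 7 ) + 10; // prints 13
```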
Note: since the minim() function returns a value, it can be used as if it were an ordinary number. That is why the call above is valid. Other valid examples are below:
```
// the call below is a reconstruction; the original arguments were lost in extraction
if( minim( 4, 10 ) > 5 ) { echo "If-ul este True"; }
else { echo "If-ul este False"; }
```

An advanced facility offered by the PHP language is the use of predefined (default) values for function parameters. It allows a function to be called without all of its parameters, the predefined values being used for the ones that are missing. The minim() function defined above can be rewritten as follows:

The declaration of the function above translates as follows: if the minim() function is not called with all its parameters, then use the value 1 for $a and the value 2 for $b in the computations inside the function. In this case, the function can be called as follows:
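The rewritten minim() with default parameter values might look like this (a sketch, following the values 1 and 2 given in the text):

```php
<?php
# $a defaults to 1 and $b defaults to 2 when the caller omits them
function minim( $a = 1, $b = 2 ) {
    if( $a < $b ) return $a;
    return $b;
}
echo minim();       // no arguments: the defaults are compared, prints 1
echo minim( 0 );    // only $a is given; $b stays 2, prints 0
echo minim( 8, 5 ); // both are given, prints 5
```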
Congratulations for the examples about functions!

Very intelligible...

Where do I find the functions in Aardvark Topsites?

Super.

Hi! For displaying the months of the year there is a much simpler construct than for(). An example would go something like this:

```
$LuniAn = array ('Ian', 'Feb', 'Mar');
foreach ($LuniAn as $lunileAnului) {
    echo $lunileAnului, "<br />";
}
```

Good luck going forward, many have a lot to learn from you (maybe even me at some point) :)

A note on Romanian spelling: with the definite article it is "parametrii". Example: "Parametrii au fost introdusi." Without the article it is "parametri". Example: "Lipsesc niste parametri."

*Iaur* "Neatriculat" :) Since you are in the mood for corrections.

*<NAME>* The example with the months of the year was given so that people like you can understand more easily. The body of a function can be much more complex (and usually is). And last but not least, congratulations for the tutorial; rest assured that it is very useful to those who want (and are able) to learn PHP.
## Advanced function features
Besides the standard functionality explained above, PHP offers a number of more advanced features. For instance, you can define functions that accept as many parameters as needed, register functions to run every time a PHP script finishes executing, and so on. These are not everyday tools, however, so covering them in depth is beyond the scope of this site. You do not need to learn them now; it is enough to know they exist.
A few of the advanced functions that can be used are listed below:
* call_user_func - an alternative to calling a user-defined function directly; the function name can be stored in a variable, so the programmer can call different functions in different situations
* call_user_func_array - same as above, but the parameters are passed as an array, which is useful when the function to be called takes more than one parameter
* function_exists - used to check whether a function is defined
* create_function - used to define a function 'on the fly', while the PHP code is executing
* register_shutdown_function - used to specify a function to run at the end of the PHP script's execution
* func_num_args, func_get_args, func_get_arg - helper functions used inside functions called with a variable number of parameters
These advanced facilities will probably not be of any use to you until you start working on more complex applications (and often not even then). It is enough to know that they exist.
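As a quick illustration of two of the facilities listed above (the function name `suma` is made up for this sketch):

```php
<?php
// a variadic function: sums however many arguments it receives
function suma() {
    $total = 0;
    foreach (func_get_args() as $arg) {
        $total += $arg;
    }
    return $total;
}

echo suma(1, 2, 3), PHP_EOL;                     // 6
// the same function called indirectly, by name, with its
// parameters packed into an array
echo call_user_func_array('suma', array(4, 5));  // 9
```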
How do I make a function be called at a set time interval? I saw you mentioned it in the examples. It would be useful to me.
Honestly, I have to retract that information. A function call can be delayed, or more precisely the execution of a script can be postponed by a number of seconds, but a function cannot be called at regular intervals, continuously. The architecture and the execution environment do not allow it (it would mean a request for a web page could last forever). Sorry for the confusion :)
Date: 2009-03-19
## Superglobals, special variables
PHP provides several built-in variables. They are pre-populated by PHP at execution time, so they do not need to be defined or initialized. They are available in every part of the code and in every PHP script, and can be used without any further preparation. Some of them (such as $_POST) only hold values in certain situations (for example, only when forms are used).
These variables are briefly described below; the upcoming lessons will use them in examples that will (easily) make their purpose clear.
* $_GET
An associative array containing the parameters passed through the URL or through a form. For example, if our PHP page is called test.php, a URL such as > http://localhost/test.php?nume=Alex&varsta=12&ocupatie=elev
would populate the $_GET variable with the following values:
```
print_r( $_GET ); /* displays: Array ( [nume] => Alex [varsta] => 12 [ocupatie] => elev ) */
```
Note: the parameter names act as keys in the $_GET array. The $_GET array is also populated when GET forms are submitted.
* $_POST
Similar to $_GET, except the parameters are sent through forms - see the Forms lesson for details.
* $_REQUEST
Combines both $_GET and $_POST.
* $_SESSION
Used to hold data that remains available for as long as the user keeps browsing the site, regardless of which pages are viewed. Normally a variable exists only while a page request is being served; when the script finishes, all variable values are lost (including $_GET, $_POST, etc.). Values stored in the $_SESSION array can persist after the scripts finish executing. See the Sessions lesson for details.
* $_SERVER
Provides information about the server, the requested page and the user accessing it. `print_r( $_SERVER );` > /* displays something similar to the output below: Array ( [REDIRECT_STATUS] => 200 [HTTP_VIA] => [HTTP_COOKIE] => [HTTP_REFERER] => http://php.punctsivirgula.ro/ [HTTP_USER_AGENT] => Mozilla/5.0 ... [HTTP_HOST] => php.punctsivirgula.ro [HTTP_ACCEPT] => text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 [HTTP_ACCEPT_LANGUAGE] => ro-ro,ro;q=0.8,en-us;q=0.6,en-gb;q=0.4,en;q=0.2 [HTTP_ACCEPT_CHARSET] => UTF-8,* [HTTP_KEEP_ALIVE] => 300 [HTTP_CACHE_CONTROL] => max-age=0, max-age=0 [HTTP_CONNECTION] => Keep-Alive [PATH] => /bin:...:/usr/bin:/bin [SERVER_SIGNATURE] => Apache/2.2.9 (Unix) ... at php.punctsivirgula.ro Port 80 [SERVER_SOFTWARE] => Apache/2.2.9 (Unix) ... [SERVER_NAME] => php.punctsivirgula.ro [SERVER_ADDR] => 89.xx.xx.xx [SERVER_PORT] => 80 [REMOTE_ADDR] => 69.xx.xx.xx [DOCUMENT_ROOT] => /.../ [SERVER_ADMIN] => webmaster@... [SCRIPT_FILENAME] => /.../index.php [REMOTE_PORT] => 35423 [REDIRECT_QUERY_STRING] => [REDIRECT_URL] => /arrays/ [GATEWAY_INTERFACE] => CGI/1.1 [SERVER_PROTOCOL] => HTTP/1.1 [REQUEST_METHOD] => GET [QUERY_STRING] => [REQUEST_URI] => /arrays/ [SCRIPT_NAME] => /index.php [PHP_SELF] => /index.php [REQUEST_TIME] => 1250589158 [argv] => Array() [argc] => 0 ) */
* $_ENV
Exposes information about the environment in which the PHP interpreter is installed and running, about the current script, etc.
* $_COOKIE
Used to read the cookies of the current site/page that are stored on the users' computers - see the Cookies lesson for details.
* $_FILES
Used when uploading files to the server (file uploads) - see the File upload lesson for more details.
Although all of these variables also have alternative names, it is recommended to use them in the form presented above.
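Because a superglobal entry such as $_GET['nume'] only exists when that parameter was actually sent, a common defensive pattern (sketched here with a hypothetical parameter name) is:

```php
<?php
// read an expected URL parameter only if it was provided
if (isset($_GET['nume'])) {
    echo "Salut, ", htmlspecialchars($_GET['nume']);
} else {
    echo "No 'nume' parameter in the URL";
}
```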
I only got as far as the test.php page from the basics lesson; the rest became incomprehensible along the way. I think this course is not really for beginners, because not all of the information is aimed at people starting from scratch. In short, this course explains too little and too briefly!
Like most beginner programmers who know HTML and want to move to the next level, all I want to know is this: if I have, say, a news site in HTML, and I don't want to keep adding a news item (a picture and some text) to the HTML every day, I want to build an admin panel and add the new items from there. Short and to the point. So far I have not managed to take much away from this course. Hi. Let's take your points in order: 1. Too complicated. We try to offer explanations as detailed as we can, while avoiding the opposite extreme, where too much detail distracts the reader from the main idea. When something is explained briefly, it is for one of 2 reasons: - that aspect is covered in detail later (through an example or further explanations) - we encourage readers to look up more information on their own, developing the habit of finding information rather than asking for it (as you may have noticed, every PHP statement links to the official manual, where you can find detailed explanations and many more examples). Also, the about page (http://php.punctsivirgula.ro/?despre) answers some common questions and lists the few prerequisites for using this site successfully. Read them carefully and see whether you meet them. If you still think a certain part of the site should be explained in more depth (and that others would benefit too), send us an email (or another comment) naming the page/section in question and we will see what can be done. 2. Admin. Moving from HTML to PHP is not as easy as it first seems. That is because PHP is a *programming* language, whereas HTML is only a *markup* language. In other words, HTML is used to display information, while PHP "processes" it.
The lessons on the site are grouped into categories and grow gradually in difficulty. Nobody will manage, on day one, to build an admin interface like the one you describe (they might manage it by reusing existing code, but in that case they won't understand much of what is going on). The goal of this site is to provide the fundamentals that are vital in PHP programming - knowledge readers can then put into practice on their own. Moreover, as new lessons are added to the site, you will find new examples that give an idea of how certain things should be done (such as an admin interface). I hope these additional explanations help. As a general note, the site is not complete yet, which means more examples will be added in the future (and a sample CMS is on the list). All the best! I am learning from this site. I am very glad a site like this exists in Romanian. It is well structured and the explanations are very useful, correct and easy to follow. The grammar, the attention to detail, and the way attention is drawn to things, such as the bolded text, make it quick to understand. Practice certainly matters a lot, and skipping steps in this learning process is largely why the material ends up not being understood. Thank you very much that, so many years after it was built, this site is still available and still useful today. Congratulations to those who dedicated their time to publishing such content, free of charge, for anyone who wants to learn.
## Predefined constants
PHP defines a series of constants that can be used at any time. As with the superglobal arrays, they are populated by the PHP interpreter and are available in all source files.
* PHP_EOL - holds the new-line character, which can be printed to format a script's output across multiple lines.
```
echo "<NAME>", PHP_EOL, "<NAME>";
// alternatively \n can be used, although the two are not always equivalent
echo "<NAME>", "\n", "<NAME>";
```
* TRUE, FALSE - the boolean constants, usable in tests inside the if statement
* NULL - the null reference, useful for "deleting" a variable
* PHP_OS - holds the identifier of the operating system the PHP interpreter runs on
* PHP_VERSION - holds the full version of the interpreter
* DEFAULT_INCLUDE_PATH - holds the locations searched for scripts included with include or require; it usually contains the current directory and a few other locations
* E_ERROR, E_WARNING and others - hold the codes of the various error types that can occur
There are other constants in the base PHP distribution, plus many more defined by each loaded extension, but they are not as commonly used as these.
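A minimal sketch that prints a few of these constants (the exact output depends on the machine PHP runs on):

```php
<?php
echo "Operating system: ", PHP_OS, PHP_EOL;
echo "PHP version: ", PHP_VERSION, PHP_EOL;
echo "Include path: ", DEFAULT_INCLUDE_PATH, PHP_EOL;
// TRUE / FALSE in a condition
echo (TRUE && !FALSE) ? "booleans work" : "unreachable", PHP_EOL;
```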
Hello, I recently started learning PHP online (and you are doing a good job), but PHP_EOL does not work for me. Neither does "\n". Only <br> works...
## Magic constants
PHP provides a set of expressions called "magic constants", whose values differ depending on certain parameters (such as when, and in which source file, they are used). The name 'constant' is not quite accurate because, as you already know, a constant cannot change its value. They cannot be called 'variables' either, since they require no declaration or initialization, so a compromise was reached: the name 'magic constants'.
The most common are __LINE__, __FILE__, __FUNCTION__ and __CLASS__. The newer __DIR__, __METHOD__, __NAMESPACE__ and __TRAIT__ are used less often because they require a newer version of the PHP interpreter (at least version 5.0, or even 5.4).
These constants are explained below.
* __LINE__
Returns the current line of the PHP script.
* __FILE__
Holds the full path of the PHP script being executed. If used inside an included file, the name of that file is returned.
* __FUNCTION__
Returns the name of the function inside which the constant is used.
* __CLASS__
Returns the name of the current class.
* __DIR__
Holds the name of the directory in which the current script is saved. This constant is equivalent to dirname(__FILE__).
* __METHOD__
Holds the name of the class method inside which the constant is used. It can only be used with classes.
* __NAMESPACE__
Holds the name of the current namespace.
* __TRAIT__
Holds the name of the current trait.
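A small sketch of the most common magic constants (the function name `unde_sunt` is invented for the example):

```php
<?php
function unde_sunt() {
    // each magic constant resolves relative to where it is written
    echo __FUNCTION__, " is at line ", __LINE__,
         " of file ", __FILE__, PHP_EOL;
}
unde_sunt();

// __DIR__ is equivalent to dirname(__FILE__)
echo (dirname(__FILE__) === __DIR__) ? "equivalent" : "different";
```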
Date: 2009-12-15
## Forms
Forms are the elements through which users send data to the server. Different kinds of information can be entered on a web page (passwords, comments, messages, etc.). All of this data is transmitted to servers through forms.
Once submitted, the data can be processed with PHP. It can be saved in a database or in files, sent by email, or simply displayed back on a page. The following sections present the basics of using forms in PHP. As a note, forms appear in every web application (the most common are perhaps login, contact and registration pages). They are essential to using the internet, and it is very important for a web programmer to know how to handle forms in PHP.
In short, for data to be entered on a web page, there must be a form. It is defined in HTML with the form tag. A form must specify 2 mandatory attributes: "action" and "method".
The action attribute holds the location the data will be sent to - essentially the name of the server-side file that will process the request (usually a PHP file). This attribute may be empty, meaning the data will be sent to the same script that displays the form (the current file). The method attribute specifies the access method, as described below.
Forms are closely tied to the concept of "request method". In fact, depending on how the request is made to the server, there are two kinds of forms: GET and POST forms.
peace be with you!
Mistake in the sentence "Acesta se defineste in HTML prin tag-ul from." ("It is defined in HTML with the from tag.") - it should be the FORM tag. It seems trivial, but a first-time reader won't know why their code isn't working.
Thank you Deea, I have corrected it.
"Different typs (types) of information can be entered on a web page!" good luck!
## Request methods: GET, POST and others
Access methods (request methods) are the way requests for a web page, along with other related information, are transmitted from the browser to the web server. There are 8 defined methods, but only 2 of them are commonly used in web development. More details can be found on the Wikipedia page.
* GET
This is the most common method, used by browsers by default to send requests to servers. Most of the pages we view on the internet are obtained through a GET request. For example, typing a URL into the browser's address bar, clicking a link, or requesting an image - all of these are GET requests. This method is therefore used to "ask" the server for a page ("get data"). Still, small pieces of information can be sent to the server along with the request. This information, entered by users (in a form), is appended to the end of the URL in the form "pagina.php?parametru=valoare".
* POST
Unlike GET, POST is used to send information to the server ("post data"). While GET allows only a limited amount of data to be sent from the client (browser) to the web server, POST has far more generous limits and is the standard way of transmitting data. Uploading a file to the server, saving a blog post, and so on - all of these are POST requests.
The 2 types of forms, and how their data can be accessed in PHP, are described below. Then, on the next page, GET vs. POST, you can learn the concrete differences between them and how to decide which form type to use on your pages.
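On the server side, the method used for the current request can be read from the $_SERVER superglobal presented earlier; a minimal sketch:

```php
<?php
// branch on the HTTP method of the current request
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // data was submitted: process it ($_POST is populated)
    echo "processing submitted data";
} else {
    // a plain GET request: just display the form
    echo "displaying the form";
}
```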
## GET forms
This type of form lets users send additional information when they request a web page.
Example of a GET form with 3 text fields.
The data sent by users in a GET request is available for processing in PHP through the global variable $_GET. Each form field becomes an entry of the $_GET array.
```
/*
$_GET will have the following structure:
$_GET = array(
    "camp1" => "",
    "camp2" => "",
    "camp3" => ""
)
*/
```
## Procesarea formularelor GET 0
To illustrate how GET forms are processed in PHP, we will build a small application that reads the user's name (through a form) and displays it on the page.
### The HTML form
We create a new file which, at first, contains only the HTML form.
This lets users type their name in the displayed box and press the "Trimite" (Submit) button.
### The PHP code that processes the form
The next step is writing the code that reads the name and displays it. For simplicity, the PHP code is placed in the same file as the HTML code above, at the end. If it were placed in a separate file, the form's action attribute would have to name that file.
### The processing steps
In this file, where HTML and PHP code are mixed, the PHP interpreter first returns the HTML code as-is (it being static) and then evaluates the PHP code. If that code prints anything, its output is returned together with the static HTML form. A more detailed explanation of the steps involved in processing a form can be found in the Form examples lesson.
### Trying the form online
The form, as it appears in the browser, is available below:
### An alternative to GET forms: URL parameters
Notice that when the GET method is used, all of the form's elements (that were given a name!) appear in the URL of the next page (after submit) in the form:
> pagina.php ? element1=valoare1 & element2=valoare2
So, in the case of our application, the same effect as submitting the form would be obtained by accessing the page directly with the parameter numele=ceva (click to test). Note: the parameter is called numele because that is what we named it in the form: <input type="text" name="numele" />. This confirms that all the requests we usually make to a web server are GET requests (for example when we open www.google.ro directly in the browser, or when we click a link - they are all GET requests to the server) - and that they are equivalent to using a GET form.
So whether we use a form with the GET method or type directly into the browser's address bar, the result is the same: a GET request whose parameters are accessible in PHP through the associative array $_GET.
The drawbacks of the GET method include, among others, the small amount of data that can be sent (parameter values are usually short, a few characters long) and the restrictions placed on parameter values (together with the server name and the requested path, they must form a valid URL). These drawbacks can be avoided by using POST forms.
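The form markup itself did not survive on this page, so here is a minimal self-contained sketch of the kind of file described above (the field name `numele` matches the one used in the text; the rest is an assumption):

```php
<form action="" method="get">
    Nume: <input type="text" name="numele" />
    <input type="submit" value="Trimite" />
</form>
<?php
// after submit, the same file receives the value via $_GET
if (isset($_GET['numele']) && !empty($_GET['numele'])) {
    print "Salut, {$_GET['numele']}!";
}
?>
```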
## POST forms
Another request type used in practice is POST. It allows a much larger amount of data to be transferred (on the order of gigabytes), whereas the GET method is limited to a few thousand bytes. POST can therefore be used to upload files to web servers and to send very long texts, without worrying about the format of the transmitted data (which may contain special characters, or be binary).
Example of a POST form with one text field and one file-upload field.
The data sent by the user is available in PHP through the global variable $_POST. So, if the form contains 2 fields named "camp1" and "fisier1" (as in the example above), the $_POST variable will have two entries, accessible as $_POST['camp1'] and $_POST['fisier1'] (note that the contents of uploaded files actually arrive in the $_FILES array, described in the superglobals lesson).
```
/*
$_POST will have the following structure:
$_POST = array(
    "camp1" => "",
    "fisier1" => array(...)
)
*/
```
Please add some examples
## Processing POST forms
To illustrate how POST forms are processed in PHP, we will build another application that reads the user's name (through a form) and a very long text, then displays the name and the length of the text on the page.
The code structure and the processing logic are similar to the GET form example. The difference is that the $_POST variable is used to read the submitted data.
The contents of the PHP file are shown below.
```
<?php
# first check whether the form was submitted
if( isset( $_POST ) && !empty( $_POST ) ) {
    # it was; check each field
    if( isset( $_POST[ 'numele' ] ) && !empty( $_POST[ 'numele' ] ) ) {
        print "Salut, {$_POST[ 'numele' ]}!<br />";
    }
    if( isset( $_POST[ 'textfl' ] ) && !empty( $_POST[ 'textfl' ] ) ) {
        print "Ai trimis " . strlen( $_POST[ 'textfl' ] ) . " caractere";
    }
}
?>
```
The form, as it appears in the browser, is available below:
Notice that the form's elements (numele and textfl) are no longer passed in the URL but through a different mechanism. If the page is re-displayed (with Refresh/F5), the browser shows a notification asking you to confirm that the data should be resent. This also confirms that the page was displayed as the result of a POST request. I find it very interesting that a control's value can be extracted by name without using the $_POST['variable_name'] entries. But only after the first submit. I have long been looking for serious documentation on forms in Romanian... not just generalities. 2) How did you build these auto-expanding sections? With CSS and JavaScript? Could you share them with me? Merci beaucoup
The thing is, if the data is resent, via refresh or back-forward, and I have written something to a database, the whole process runs again needlessly. How do I avoid that? Thanks.
Sorry if this is a somewhat silly question... Does the POST method only work for extracting the data in the same page from which it was submitted? I did the following and it does not work. In index.html: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <body> <form id="afiseaza2" action="test.php" method="post"> Nume: <input type="text" name="numele" /> <br /> Text foarte lung: <textarea name="textfl" rows="3" cols="20"></textarea> <br /> <input type="submit" value="Trimite" /> </form> </body> </html> and in test.php I put: <?php # first check whether the form was submitted if( isset( $_POST ) && !empty( $_POST )) { # it was; check each field if( isset( $_POST[ 'numele' ] ) && !empty( $_POST[ 'numele' ] ) ) { print "Salut, {$_POST[ 'numele' ]}!<br />"; } if( isset( $_POST[ 'textfl' ] ) && !empty( $_POST[ 'textfl' ] ) ) { print "Ai trimis " . strlen( $_POST[ 'textfl' ] ) . " caractere"; } } ?> Where did I go wrong? (nothing is displayed when the test.php page appears) Hi Mihai. Yes, the pages can be different. What you wrote looks correct and I don't know why it isn't working. Still, check that you don't have a syntax error in test.php (maybe you missed a comma somewhere) and make sure test.php is in the same folder as index.html. I changed the example on the site to use a separate file, so pressing "Trimite" now goes to form-test.php, which works correctly. You can download form-test.php from this location: http://php.punctsivirgula.ro/public/resurse/download.php?file=exemple/form-test.php Hope it works out!
Strange... the code I posted earlier works now :-??
It would have been nice to end with code that also runs when none of the conditions above are met (i.e. nothing is filled in and the button is pressed). Here is my suggestion: <?php # first check whether the form was submitted if( isset( $_POST ) && !empty( $_POST )) { # it was; check each field if( isset( $_POST[ 'numele' ] ) && !empty( $_POST[ 'numele' ] ) ) { print "Salut, {$_POST[ 'numele' ]}!<br />"; } if( isset( $_POST[ 'textfl' ] ) && !empty( $_POST[ 'textfl' ] ) ) { print "Ai trimis " . strlen( $_POST[ 'textfl' ] ) . " caractere"; } } if (empty($_POST['numele']) && empty($_POST['textfl'])) { print "<font color='red'>Input name</font>"; print "</br>"; print "<font color='red'>Input some text!</font>"; } ?>
Date: 2011-09-05
### POST and GET forms
As we saw in the previous lesson, there are 2 types of forms. From the point of view of the end user (who uses the browser and fills in the form to send data to the server), the method used (POST vs. GET) does not matter much, in the sense that the way the data is transmitted is transparent. From the point of view of whoever builds the form and processes the data received on the server (using a PHP script), there are notable differences that must be taken into account.
## How do I decide which form type to use (GET or POST)?
Each of the 2 form types has advantages and disadvantages. Some of the characteristics of each type are listed below. In general, the advantage of POST forms is that they can handle large amounts of data, while GET forms are easy to use and resend.
### GET
* Data sent to the server via GET appears at the end of the URL, exactly as entered; this can be a disadvantage for sensitive data, but it makes it easy to tweak the data and resend it
* The amount of GET data depends on the maximum allowed URL length, which in Internet Explorer is 2,083 characters. So not much data can be sent via GET
* The result of a GET request may be served from the browser's or web proxy's cache. The URL must be different on each new request to be sure that fresh data is retrieved
* Special characters can be encoded using the form's enctype attribute; the disadvantage is that this encoding reduces the amount of data that can be sent (e.g. 3 Japanese characters are represented using 42 plain characters: %26%2312454%3B%26%2312455%3B%26%2312502%3B)
* In PHP, the form's elements are available through the global variable $_GET (or through $_REQUEST)
### POST
* The amount of data that can be sent via POST can be restricted only by the web server; there is no inherent limit
* Data sent via POST does not appear in the URL and cannot be altered easily, which offers a certain degree of security
* By default, data retrieved via POST is not stored in the browser or proxy-server cache, so this method always displays fresh data
* Special characters can be encoded just as with GET, using the form's enctype attribute (application/x-www-form-urlencoded); the advantage is that there is no limit on the amount of data
* In PHP, the form's elements are available through the global variable $_POST (or through $_REQUEST)
Beyond the considerations above, a general recommendation should be kept in mind: GET should be used for operations that do not modify anything on the server, while POST should be used for create/update/delete operations.
For example, when searching a database and (only) displaying some results, a GET form should be used. Likewise, GET can be used for checks that do not modify the database or any file on the server. When changes must be made, however, the POST method is recommended.
Moreover, to reinforce the convention of using POST for data changes, browsers display a warning when you try to refresh a page accessed via POST, asking the user to confirm that the submitted data should be sent to the server again. This confirmation is needed to avoid carrying out transactions (such as online payments) two or more times.
Portions of this text were used, with the author's permission, from http://weblogs.asp.net/mschwarz/archive/2006/12/04/post-vs-get.aspx.
A question! How can I build a login page so that each user has a personal page after logging in? That is, they log in and only they can see their page. Thank you.
## Why does the form type matter?
It is important to choose the right method (POST or GET) when:
* inserting into the database or making changes on the server
* large amounts of data will be sent
* users should be able to easily modify the data they send (such as a search term)
* users must be informed/warned when they resend the same data to the server (NB: with POST, the browser shows a notification when a Refresh is attempted)
* it is important that the displayed data not come from the cache (POST)
Every situation that calls for a form should be analyzed to determine which method to use. In practice, POST is used for registration/login forms, contact or online-payment forms, upload forms and so on, while GET is used for simple confirmation, search, or newsletter subscribe/unsubscribe forms, among others.
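To make the recommendation concrete, here is a sketch of the two typical cases (the file names `search.php` and `login.php` are invented for the example):

```php
<!-- a search only reads data, so GET is appropriate:
     the query appears in the URL and can be bookmarked or resent -->
<form action="search.php" method="get">
    <input type="text" name="q" />
    <input type="submit" value="Cauta" />
</form>

<!-- a login changes server-side state and carries sensitive data,
     so POST is appropriate: nothing appears in the URL -->
<form action="login.php" method="post">
    <input type="text" name="user" />
    <input type="password" name="parola" />
    <input type="submit" value="Login" />
</form>
```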
## Headers
Headers are the elements through which the browser and the web server communicate behind the scenes so that a web page is displayed properly.
There are 2 kinds of headers: those sent by the browser (request headers) and those sent by the server (response headers).
### Request headers
Every time a user accesses a web page, the browser sends small amounts of data to the server, in the form of request headers. These headers include details about the page that was requested and how it should be transferred, as well as information about the browser's capabilities.
For example, when you accessed this page, your browser sent the following request headers to the server:
> GET /http/ HTTP/1.1 Host: php.punctsivirgula.ro Connection: close User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; ro; rv:1.9.1) Gecko/20090624 Firefox/3.5 Accept-Charset: ISO-8859-1,UTF-8;q=0.7,*;q=0.7 Cache-Control: no Accept-Language: de,en;q=0.7,en-us;q=0.3
Headers are sent by the browser automatically; the user does not have to do anything in particular for that to happen. Also, headers cannot (easily) be modified before being sent.
### Response headers
In reply to the request headers received from browsers, web servers send back 2 kinds of information:
* response headers
* the actual content of the requested page (content that can be built/modified with PHP)
Response headers carry information about the requested page (such as: whether it exists or not, its size, its content type, etc.).
For example, when this page was requested in the browser, our server sent the following response:
> HTTP/1.1 200 OK Date: Fri, 17 Jul 2009 14:54:18 GMT Server: Apache/2.2.9 (Unix) mod_ssl/2.2.9 OpenSSL/0.9.8b PHP/5.2.6 mod_scgi/1.12 X-Powered-By: PHP/5.2.6 Set-Cookie: PHPSESSID=h8144gj13180725c2aufkrd397; path=/ Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Connection: close Transfer-Encoding: chunked Content-Type: text/html <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>Ghid PHP pentru incepatori - headere. cookies. modificarea requestului HTTP</title> .... </html>
The headers are the first part, separated from the content by a blank line (the character sequence \r\n\r\n). From this response, the browser "knows" to interpret the headers and display only the content in the page, so the headers are never visible to the end user.
As with request headers, response headers are sent automatically by the server, before the content, and the programmer normally does not need to intervene. There are, however, situations when certain headers must be changed. This is possible with PHP.
PHP allows response headers to be modified through the header function.
```
# sending a status signal (Not-Found, Status OK, etc.)
header( 'HTTP/1.0 404 Not Found' ); // tells the browser to display an error message
header( 'HTTP/1.0 200 OK' ); // announces that the page exists and will be sent to the browser
# redirect - "tells" the browser to "go" to another address
header( 'Location: http://www.punctsivirgula.ro/' );
# specifying the type of the page that will be displayed - after these calls, the
# browser will expect content other than HTML pages
header( 'Content-type: application/pdf' ); // the browser will expect a PDF to be sent
header( 'Content-type: text/css' ); // a css file
```
## Application: Redirecting a user to a given page
The example page displays three links - home, search and other - and, when one of them is clicked, redirects the visitor to the corresponding address. (Only the link labels survive from the original listing.)
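Since the original listing is not fully preserved here, the following is a minimal sketch of such a page. The file name `page.php`, the `go` parameter and the target URLs are illustrative assumptions, not the original code:

```php
<?php
// page.php - hypothetical reconstruction of the redirect example.
// The 'go' parameter and the target addresses are assumptions.
$targets = array(
    'home'   => 'http://www.punctsivirgula.ro/',
    'search' => 'http://www.google.ro/',
    'other'  => 'http://www.php.net/',
);

if( isset( $_GET['go'] ) && isset( $targets[ $_GET['go'] ] ) ) {
    // nothing may be printed before this call!
    header( 'Location: ' . $targets[ $_GET['go'] ] );
    exit;
}

// otherwise display the three links
echo '<a href="page.php?go=home">home</a> ';
echo '<a href="page.php?go=search">search</a> ';
echo '<a href="page.php?go=other">other</a>';
```

Visiting `page.php?go=search` would then send the `Location` header and redirect the browser.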
Important! Before using the header() instruction, nothing must be output (with print, echo, or by writing HTML code). It often happens that even a whitespace character left at the end of a file included with include or require stops the function from working correctly. For example, the following code is not valid:
```
<?php
print "Nu mai pot trimite headere";
header( "Location: http://www.google.ro" ); // invalid: output has already been sent
exit;
?>
```
## Application: Creating a page that offers a PDF document for download instead of opening it in the browser
Many of you have probably noticed that browsers display PDF documents or images directly inside the window, and that to download them you must explicitly choose the Save option from the browser menu. This behavior can be changed through headers. If the web server "told" the browser that what is about to be transmitted must be downloaded and not displayed, the problem would be solved. Let us therefore create the file download.php:
```
<?php
if( isset( $_GET[ 'file' ] ) ) {
    # check that the requested file exists in the 'resurse' folder
    if( !file_exists( '.'.DIRECTORY_SEPARATOR.'resurse'.DIRECTORY_SEPARATOR.$_GET[ 'file' ] ) ) {
        print "<html><head><title>Document inexistent</title></head>
        <body><h1>Documentul nu a fost gasit.</h1>Parametrul
        <tt>file</tt> trebuie sa specifice un fisier existent in
        folderul curent. Incercati din nou.</body></html>";
        exit; # stop execution here
    }
    # if we got here, the file exists
    # take the extension in order to determine its type
    $type = strrchr( $_GET[ 'file' ], '.' ); # take the text from the last dot to the end
    # $type will now be something like ".pdf"
    if( strlen( $type ) ) $type = substr( $type, 1 ); # remove the dot
    # depending on the type, send the browser a certain Content-Type
    # this is needed so the browser knows what kind of file will be downloaded
    switch( $type ) {
        case 'pdf':
            header( 'Content-type: application/pdf' ); // the file will be a PDF
            break; # break is required after each 'branch' of switch
        case 'gif':
            header( 'Content-type: image/gif' );
            break;
        case 'jpg':
            # if no break is specified, the instruction of the next
            # 'case' branch is executed.
            # In other words, 'jpg' and 'jpeg' share the same instruction
        case 'jpeg':
            header( 'Content-type: image/jpeg' );
            break;
        case 'html': # the HTML file will be offered for download instead of being displayed
            header( 'Content-type: text/html' );
            break;
        case 'php':
            header( 'Content-type: application/x-httpd-php' );
            break;
        /*
            any number of conditions can be defined here
        */
        default: # if none of the conditions above was met
            # display as plain text
            header( 'Content-type: text/plain' );
    }
    # send the header that "forces" the download. The "filename" part in the text
    # below specifies the name under which the file will be offered for download
    header("Content-Disposition: attachment; filename=\"download.$type\"");
    # send the content of the file to be downloaded
    readfile( '.'.DIRECTORY_SEPARATOR.'resurse'.DIRECTORY_SEPARATOR.$_GET[ 'file' ] );
    # alternatively one can use
    // echo file_get_contents( './resurse/' . $_GET[ 'file' ] );
    # note: only files from the 'resurse' folder can be specified, any attempt
    # to access other files is blocked
} else {
    # the file parameter was not transmitted, so redirect to the main page
    header( "Location: /" );
}
exit; /* stop execution, although it is not necessary in this situation since we
         are already at the end of the script */
?>
```
You can test this code "in action" by following these links:

* [JPEG image - direct link](/public/resurse/install_cl-sr.jpg)
* [JPEG image - download link](/public/resurse/download.php?file=install_cl-sr.jpg)
* [PDF document - direct link](/public/resurse/document.pdf)
* [PDF document - download link](/public/resurse/download.php?file=document.pdf)

[Download the PHP file and try it on your server.](/public/resurse/download.php?file=download.php)
Copyright © 2008-2023 punctsivirgula.ro. Last page modification: 9 August 2011.

Date: 2010-09-14
Categories:
Tags:
## What are Cookies?
## How do I set a cookie? How do I view existing cookies and how do I delete them?
Adding a cookie on the visitor's computer is done using the setcookie function. It sends a header that the client browser interprets in order to create the cookie. Since headers are involved, the rule of not printing anything before calling setcookie applies. If an output instruction has been executed beforehand, the cookie can no longer be created.
To see the cookies already created in the visitor's browser, the predefined variable $_COOKIE is used. It is an associative array containing one element for each existing cookie. It is populated automatically based on the information transmitted by the browser.
Note: cookies can also be created at browser level (using JavaScript or another client-side scripting language). These will be available in the $_COOKIE variable on the next request (performed, of course, after the cookies have been created).
```
<?php
# create a cookie named Test2, valid for one hour
setcookie( "Test2", 'o ora', time() + 3600 );

if( isset( $_COOKIE[ 'Test2' ] ) ) {
    echo 'Test2: ', $_COOKIE[ 'Test2' ], '<br />';
}
// Result (a refresh is needed the first time):
// Test2: o ora

# deleting a cookie is done through a trick: choosing an expiration date in the past
setcookie( "Test2", 'o ora', time() - 3600 ); // the expiration time has passed, so the cookie
                                              // is no longer valid and will be deleted by the browser
?>
```
## How do I check whether the users' browser accepts cookies?
The basic idea is the following: we try to create a cookie, then redirect to the same page, and after the redirect (practically on the second access of the page) we check whether that cookie was created successfully.
The redirect is done using headers, so it is transparent to the user, who will probably not even notice being redirected. To know when the cookie check should be performed, a GET parameter is used, which will have the value 1 after the redirect.
You can try this example by going to the cookietest.php page. As a testing idea, disable cookies in your browser and access the page. Then enable them and click the link again (do not refresh the already-open page, since it already carries the "creat=1" parameter, which prevents the script from creating the cookie again).
Download the cookietest.php file and try it on your server.
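The original cookietest.php listing is not reproduced in this page; the following is a minimal sketch of the mechanism just described. The `creat` parameter follows the text above; the cookie name `TestAcceptare` is an illustrative assumption:

```php
<?php
// cookietest.php - hypothetical reconstruction of the cookie-support test
if( isset( $_GET['creat'] ) ) {
    # second access (after the redirect): check whether the cookie came back
    if( isset( $_COOKIE['TestAcceptare'] ) ) {
        echo 'Browserul accepta cookie-uri.';
    } else {
        echo 'Browserul NU accepta cookie-uri.';
    }
} else {
    # first access: try to create a test cookie, then redirect to the same page
    setcookie( 'TestAcceptare', '1' );            // cookie name is illustrative
    header( 'Location: cookietest.php?creat=1' ); // transparent redirect
}
```

On the second request the browser either sends the cookie back (cookies enabled) or does not (cookies disabled), and the script reports accordingly.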
## How do I create a tracking cookie (a cookie that remembers the number of visits to the page)?
The mechanism is the following: first the number of previous visits is determined (which can be 0, or another value read from the cookie itself); then that number is incremented ($vizite++), after which, through setcookie, the updated number is "saved" back into a browser cookie. Note that setcookie is called only after the current value has been read: the function sends the browser an instruction (a header) that stores the value of $vizite, so it must be called once the correct value of the variable has been determined.
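The original listing for this section is not preserved; the sketch below follows the $vizite logic described above. The cookie name `vizite` and the one-year lifetime are illustrative assumptions:

```php
<?php
// Hypothetical reconstruction of the visit-counter example.
# determine the number of previous visits (0 if the cookie does not exist yet)
$vizite = isset( $_COOKIE['vizite'] ) ? (int)$_COOKIE['vizite'] : 0;
$vizite++; // count the current visit

# save the updated number back into the browser cookie
# (must happen before any output, since a header is sent)
setcookie( 'vizite', $vizite, time() + 365*24*3600 ); // keep the cookie for a year

echo 'Ati vizitat aceasta pagina de ', $vizite, ' ori.';
```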
Date: 2012-07-09
Categories:
Tags:
## Sessions in PHP
Sessions are a feature through which certain information is maintained from one page to another. A session lasts as long as the user is accessing a site and ends when the browser is closed.
### Stateless requests and the need for sessions
Accessing a web page is a self-contained (stateless) operation. This means that any subsequent access of the same page (for example a refresh) happens without the server "knowing" about the previous accesses. The same happens when several different pages are accessed one after another.
For example, if a user accesses page a.php and then b.php, the PHP code in the second page (b.php) cannot know what "happened" in a.php. Let us take the following example: in a.php a variable is defined as below
```
$text = 'Mesaj din prima pagina';
```
In b.php there is a piece of code that displays the variable $text
`echo $text;`
Due to the stateless nature of requests, page b.php will display nothing, since the variables and all operations executed in page a.php are not preserved from one request to another. By default there is no way to link the two accesses, so the information from the first page "lives" only within that page access and is lost when the execution of the script in a.php ends.
This design can raise problems, especially in complex applications, because information does not persist from one page to another. It is true that there are a few ways in which small amounts of data can be passed to another page - for example through forms (see the lesson Formulare in PHP) - but these are not viable for large applications, such as an email application or an e-commerce site. Fortunately, PHP offers the possibility of persisting information while browsing, through sessions.
### Sessions in PHP
In PHP a session represents the period of time in which several PHP scripts, accessed at different moments, can store and use common information. A session begins when a script calls the session_start function and ends when the user closes the browser (there are other ways to start a session too, but they are not very common - using the session_start command is the recommended method).
A session spans several requests (over several accesses of different pages), and to identify an existing session PHP can use cookies or GET parameters in the page URL.
## The working mechanism of sessions in PHP
When a script calls the session_start function for the first time in a working session, a cookie is sent to the client's browser (a 'Set-Cookie' header, see the Cookies lesson). Since a cookie is involved, session_start must be called before any instruction that outputs something (print, echo, etc.) and before any HTML code. The transmitted cookie contains an identifier called the Session ID, based on which the current session can be distinguished from the sessions of other users accessing the site at that moment.
If the user's browser does not accept cookies, the session identifier will be transmitted through a GET parameter, in the form script.php?PHPSESSID=[session id] (practically an automatic redirect is made to the same page with the parameter specified in the URL). It is then the programmer's responsibility to manually include this identifier in all the other links on the page, making sure that every page will be accessed with this parameter. Such situations are rare, however, and in the examples that follow we will assume that browsers always have cookies enabled, so we will not have to transmit the Session ID manually.
When the same page, or another one within the same site, is accessed again, the session identifier is transmitted by the browser (like any cookie existing in the browser). Thus, any PHP script has access to the Session ID created initially and is able to access the correct session. One more thing is needed, though: in order to access the persisted information, a script must call session_start. This time, since a Session ID already exists, PHP knows that it must not create a new session but continue an existing one. So session_start has two roles: to start a new session (when no Session ID exists) or to continue an existing session, identified by a Session ID.
The main session-manipulation functions are presented below. There is a function that returns the current Session ID: session_id. It is useful when the identifier needs to be transmitted in the URL. Alternatively, the global constant SID can be used. There is also a function that allows closing a session during the execution of the current script. Closing the session means ending the possibility of writing/reading data, not deleting the data already saved. The data remains saved and can be accessed again after calling session_start.
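The lesson's original function listing is not preserved here; the sketch below illustrates the lifecycle calls just described (session_write_close is the standard PHP function for closing a session early; the 'contor' key is an illustrative assumption):

```php
<?php
session_start();              // start a new session or resume an existing one
$_SESSION['contor'] = 1;      // keep a value "on the session" (key name is illustrative)
echo session_id();            // the identifier of the current session
session_write_close();        // stop reading/writing session data in this script
// the saved data is not deleted; it becomes available again
// after another session_start() call
```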
### Accessing the data
From the moment a PHP script calls session_start, it can already start storing data that will be persisted. PHP programmers use the expression "keeping data in the session" (or "on the session"). This data is managed by the PHP language (saved, retrieved, etc.) and it is not essential for the programmer to know the internal mechanism that handles it. It can become important, though, when a custom session-handling mechanism is desired, but that is an advanced feature of the PHP language which goes beyond the level of these lessons.
Saving data on the session is done through the super-global array $_SESSION. There is also a function that allows registering data on the session (but not modifying it), but its use is not recommended. The function is called session_register and was marked as deprecated in version 5.3 of the PHP language. Reading the persisted data can also be done through $_SESSION.
Writing and reading session data works practically the same as manipulating an associative array, which makes this feature easy to use.
## Application: Transmitting a piece of information from one page to another using PHP sessions
Following the example from the beginning of the page, let us build the two files a.php and b.php so that the text is kept from one access to the next. In the first file I store data on the session; that data will then be read from the session in the second file.
```
<?php // a.php
session_start(); // start a new session
$_SESSION[ 'text' ] = 'Mesaj din prima pagina'; // store the information on the session
echo '<a href="b.php">Mergeti la pagina urmatoare</a>';
?>
```
After accessing page a.php there should already be a session created that has the variable text stored. The script in b.php will access this session, also through session_start, and will read the data registered earlier.
```
<?php // b.php
session_start(); // continue an existing session
echo $_SESSION[ 'text' ]; // display the information
?>
```
## Files in PHP
PHP, like any other modern programming language, offers advanced facilities for working with files. Thus, using PHP code, files can be created, modified, manipulated and deleted.
### Text files and binary files
Files are collections of information stored on a storage device (hard disk, CD, etc.). Depending on the format of the data they contain, files fall into two categories: text files and binary files.
Text files are those that contain plain text which can be read, represented and modified by anyone and by any piece of code. For example, a file with the .txt extension, created with a utility such as Notepad, is the most typical text file. Another example is PHP source code saved in a file. Although PHP source code has a certain structure and can be interpreted in a certain way, it is at its core a text file that can be read by anyone (any human). Its content can also be modified at any time, either by a human or by a program.
Binary files are those that contain sequences of data in a certain format, with a specific structure understood only by a computer (by a certain piece of code). An example of a binary file is an image (a file with the .jpg extension) or a song (a file with the .mp3 extension). Although these can be opened with a text editor such as Notepad, their content cannot be modified by just anyone (at least not without corrupting the file). Binary files are useful because they can contain any type of data, with any desired structure. The advantage is that through binary files proprietary formats can be created, allowing the content of the files to be modified only by certain programs. For example, PDF files can be created/viewed only with certain applications, and the same goes for Word documents, etc.
PHP offers support for both types of files: text and binary. As long as the structure of the binary files is known (what types of information they contain), they can be modified from PHP. Nevertheless, in what follows we will focus on text files since they are the most common and the easiest to use.
### Using files in PHP
In any programming language, and therefore in PHP too, the following operations must be performed when working with files:
* opening the file
* reading from/writing to the file
* closing the file
The necessity of each operation is probably easy to understand. A file must be opened in order to see what is inside it - even for a piece of code this is a necessity. The actual reading/writing is the operation for which the file was opened, through which information is retrieved from or added to the file. When the file is no longer needed, it must be closed, otherwise the information it contains may be lost.
The PHP instructions corresponding to the operations above are presented in what follows. It is worth noting that the kind of operations that will take place - reading, writing or both - must be specified right when the file is opened.
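The original listing for these basic calls is not preserved in this page; the following is a minimal sketch of the open/write/read/close sequence (the file name is illustrative):

```php
<?php
# open for writing: 'w' deletes the old content, while 'a' would append to it
$f = fopen( 'test.txt', 'w' );
fwrite( $f, "prima linie\n" );  // write to the file
fclose( $f );                   // close the file

# open the same file for reading ('r')
$f = fopen( 'test.txt', 'r' );
$linie = fgets( $f );           // read one line
fclose( $f );
echo $linie;
```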
As can be seen, there is one opening mode for reading ('r') and two opening modes for writing: one that deletes the old content (mode 'w') and one that keeps it (mode 'a'), in which case writing happens at the end of the file, right after the old content. Which writing mode must be chosen depends on the specifics of each situation.
The PHP language also has other file-opening modes, which are small variations of those presented above. Since they are used only in certain situations, they will not be presented here. For details you can consult the documentation page of the fopen function.
The instructions above can be used with any type of file, text or binary, and represent the basis of working with files. There is a suite of other functions for reading or writing certain specific data types, usable with both kinds of files. In practice, however, most of the time one works with "utility" instructions for manipulating text files - instructions that reduce the programmer's effort, especially in common situations. These functions are described in what follows.
### Instructions for text files
To simplify the code that reads/writes a file in common situations, PHP offers a few very convenient functions: file_get_contents, file_put_contents and file. The first two, as their names say, allow retrieving the entire content of a file into a string variable with a single call, respectively creating a file that contains the value of a variable (examples below). The third function creates an array whose elements are the lines of the specified file. For example, a file with 3 lines will generate an array with 3 elements, each element representing one line of the file.
It is important to mention that once the content of the file has been read (with file_get_contents or file), the variables are "disconnected" from the file: any change made to the variable is not reflected in the file, the two entities being separate.
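The lesson's original examples are not preserved here; a short sketch of these utility functions (the file name is illustrative):

```php
<?php
# write a whole string into a file with a single call
file_put_contents( 'date.txt', "linia 1\nlinia 2\nlinia 3" );

# read the whole content back into a string variable
$continut = file_get_contents( 'date.txt' );

# read the file as an array of lines (one element per line)
$linii = file( 'date.txt' );

echo count( $linii ); // 3
```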
## Manipulating files in PHP
Besides the functions for retrieving/changing the content of files, PHP also offers file-management facilities: creating, moving, copying, changing attributes, etc. A few of them are presented below.
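The original listing for this section is not preserved; the sketch below shows a few of the standard management functions (all file names are illustrative):

```php
<?php
file_put_contents( 'original.txt', 'test' ); // create a file to work with

copy( 'original.txt', 'copie.txt' );         // copy a file
rename( 'copie.txt', 'redenumit.txt' );      // move/rename it
echo file_exists( 'redenumit.txt' ) ? 'exista' : 'nu exista';
echo filesize( 'original.txt' );             // 4 ('test' has 4 bytes)
unlink( 'redenumit.txt' );                   // delete the file
```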
## Directories in PHP
Manipulating directories (folders) using PHP is just as easy as manipulating files. Most of the functions used for files can also be applied to folders (for example copy, rename, is_file, etc.), but there is also a series of instructions specific to directories.
The PHP language has a series of functions that allow reading the content of a folder in a way similar to retrieving the content of a file. Thus, there are functions for opening a directory (opendir), reading its content - i.e. the files or folders existing in that directory - (readdir) and closing it (closedir). One situation in which these functions are useful is when you want to display a list of the elements contained in a folder and to perform some calculation or processing based on each of those elements.
```
<?php
if( $handle = opendir( '.' ) ) {  // open the current folder
    # read entry by entry; readdir returns false when there are no more entries
    while( false !== ($element = readdir( $handle )) ) {
        if( $element != '.' && $element != '..' ) { // skip the special entries
            echo $element, "<br />";
        }
    }
    closedir($handle); // finish reading from the folder
}
?>
```
## Search history on a PHP site
In what follows, a simple application will be built that keeps the history of the searches performed by users. The idea can be extended to other similar applications, for example keeping the history of the last visited pages, of recent comments, etc. In all these situations the working mechanism is the same; only what is persisted differs.
The application is a simple one and consists of a single PHP file. It is based on forms, since it requires transmitting data from users to the web server. For more information about forms, see the lesson Formulare in PHP.
To persist the data while browsing (from one display of the page to another), this application uses the session mechanism.
In PHP, variables are non-persistent (stateless). This means that they exist and have a value only while the script in which they are defined is being processed. When the execution ends, the values are lost.
Thus, on a new processing of the same script (for example on a page refresh, when the source code is executed again) all variables are reinitialized and do not "remember" their previous values. This means that navigating through a site with several PHP pages actually consists of a series of independent script executions and, normally, a page "does not know" what variables were defined during previous executions or on other accessed pages.
This is the default behavior. There is, however, a mechanism for persisting data while navigating from one page to another, and this mechanism is called a session. For more information about sessions, see the lesson Sesiuni in PHP.
In our application, the PHP session mechanism is started on every page access. When a user submits the search form, the transmitted data is stored in the session (in the $_SESSION array, which is managed automatically by the PHP interpreter). Thus, on any access of the page, the $_SESSION array will contain all the previously added data.
Below are the more important parts of the application.
### The search form
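The form markup itself is not reproduced on this page; a minimal sketch, assuming a single text field named `keyword` (the name read by the processing code below):

```html
<form method="post" action="">
    <input type="text" name="keyword" />
    <input type="submit" value="Cauta" />
</form>
```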
### PHP code for persisting a search
```
# start a session
session_start();
// ...
# retrieve the existing data from the session
if( isset( $_SESSION['cautari'] ) ) {
    $istoric = $_SESSION['cautari']; // a history already exists
} else {
    $istoric = array(); // initialize an empty array
}
# check whether the form was submitted
if( isset($_POST) && !empty($_POST['keyword']) ) {
    // add the searched term
    array_push($istoric, $_POST['keyword']);
    // store everything back into the session
    $_SESSION['cautari'] = $istoric;
    // other processing could be done here,
    // for example displaying the list of results
}
```
### PHP code for displaying the search history
```
<?php
# check whether a saved history exists
if( empty( $istoric ) ) {
    echo 'You have not searched for anything yet';
} else {
    # a history already exists, so display it
    echo '<ul>';
    foreach( $istoric as $termen ) {
        echo '<li>', $termen, '</li>';
    }
    echo '</ul>';
    # display how many items exist
    echo '<p>', 'Searches performed: ', count( $istoric ), '</p>';
}
?>
```
The application is available for testing here (click to access). You can also download the PHP file and try it on your local server: istoric.php
I would like your email or Facebook address because I need your help with something. I am a beginner and it is hard for me to get started; I have some questions and cannot find answers. I want to build a "website of about 60 pages" and would like a little help with the first 2 pages, because I do not know how to build them (I have some ideas about how they would look but do not know how to put them into practice).
Date: 2013-05-07
## Uploading (copying) a file from your own computer to the server (file upload)
Uploading a file to the server is done through forms. This feature requires a slightly different form: it has a special attribute (enctype) and uses the POST method. It must also contain one input element of type file for each uploaded file (for example, to upload 2 files, 2 input elements are needed).
For the upload functionality to work, the following conditions must be met:
* the file_uploads directive in the PHP configuration file (php.ini) must be 'on'
* the upload_tmp_dir directive in php.ini must point to a path that exists on the server, with sufficient permissions for the web server to create files
* the upload_max_filesize and post_max_size directives in php.ini specify the maximum size of the file and of the data that can be sent through the form, respectively; it is recommended to review these values
* the enctype="multipart/form-data" attribute MUST NOT be omitted, otherwise the upload will not work
Note: the PHP interpreter usually comes already configured for file uploads, so the first 3 conditions above are almost always met without further configuration. This is also the case for the EasyPHP package, where file uploading is enabled and configured by default.
An example of a file upload form is shown below:
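The form itself did not survive on this page; a minimal sketch, assuming the field name `fisier` and the button label used by the upload.php code below:

```html
<form method="post" action="upload.php" enctype="multipart/form-data">
    <input type="file" name="fisier" />
    <input type="submit" value="Trimite fisier" />
</form>
```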
Notes on the upload mechanism:
* when the form is submitted, the web browser sends the file's contents to the server
* the web server copies the transmitted file into a temporary location (specified by the upload_tmp_dir directive)
* the PHP interpreter is invoked, with the $_FILES variable populated with information about the upload
* the programmer is responsible for processing the uploaded file on the server (moving it to another location, reading it, copying it, etc.); this processing is done with the functions provided by PHP. If the uploaded file is not moved (or renamed) out of the temporary location, it will be deleted automatically when the script finishes (at the end of the request, to be exact).
The PHP file (upload.php) that processes the upload contains the following code (validation included).
```
<?php
# check whether the form was submitted
if( empty($_POST) && empty($_FILES) ) {
    if( $_SERVER['REQUEST_METHOD'] == 'POST' ) {
        # a POST was made but no data arrived: the request probably
        # exceeded the post_max_size limit from php.ini
        $POST_MAX_SIZE = ini_get('post_max_size');
        $unit = strtoupper(substr($POST_MAX_SIZE, -1));
        $mul = ($unit == 'M' ? 1048576 :
               ($unit == 'K' ? 1024 :
               ($unit == 'G' ? 1073741824 : 1)));
        if( $_SERVER['CONTENT_LENGTH'] > $mul*(int)$POST_MAX_SIZE &&
            $POST_MAX_SIZE ) {
            print "File too large! You exceeded the maximum allowed limit";
        } else {
            print "Unspecified error (the file is probably too large)";
        }
    } else {
        # the form has not been submitted yet, display a message
        print "Press 'Trimite fisier' to upload!";
    }
} else {
    # $_POST and $_FILES are set; check for other possible errors
    if( $_FILES['fisier']['error'] > 0 ) {
        print "An error occurred (#{$_FILES['fisier']['error']})";
    } else {
        # the uploaded file will be placed in the 'upload' subfolder (which must
        # already exist in the same location as the upload.php file)
        $uploaddir = dirname( __FILE__ ). DIRECTORY_SEPARATOR .
                     'upload' . DIRECTORY_SEPARATOR;
        $uploadfile = $uploaddir . basename($_FILES['fisier']['name']);
        if (move_uploaded_file($_FILES['fisier']['tmp_name'], $uploadfile)) {
            print "File uploaded successfully!";
        } else {
            print "Could not upload the file";
        }
    }
}
?>
```
Hello, I have started learning PHP and I tried the code above on my site, but after I choose a picture to upload, my page loads at http://seraphisro.hol.es/upload.php and shows 404 Not Found. What should I do to make it work?
## Uploading several files to the server at the same time (multiple file upload)
When several files need to be uploaded at the same time, the form will contain several INPUT FILE elements, named as an array:
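The form markup is not reproduced here; a minimal sketch, assuming the `pictures[]` field name used by the processing code below:

```html
<form method="post" action="" enctype="multipart/form-data">
    <input type="file" name="pictures[]" />
    <input type="file" name="pictures[]" />
    <input type="file" name="pictures[]" />
    <input type="submit" value="Trimite" />
</form>
```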
The PHP code that processes the upload must be written in the same file as the form (since the action attribute does not explicitly specify a PHP file).
Note: this is a version with minimal validation.
```
<?php
# iterate over all the uploaded files
foreach ($_FILES["pictures"]["error"] as $key => $error) {
    if ($error > 0) { # equivalent to ( $_FILES["pictures"]["error"][$key] > 0 )
        print "Error with file {$_FILES["pictures"]["tmp_name"][$key]}!";
    } else {
        $tmp_name = $_FILES["pictures"]["tmp_name"][$key];
        $name = $_FILES["pictures"]["name"][$key];
        # move the file from the temporary location into the current directory
        # (the same directory that contains the PHP script)
        move_uploaded_file($tmp_name, "$name");
    }
}
?>
```
And what if someone uploads a PHP file with its extension changed to jpg?
As long as the uploaded file is not "invoked" from PHP using the "include" or "require" functions, there is no problem. The "move_uploaded_file" function does not introduce any vulnerability; it only moves the file to the specified location. It is the programmer's responsibility to use the uploaded file in a way that does not allow "Local File Inclusion" attacks.
Date: 2012-12-11
## An ads management site
In what follows we will explain how an application for adding and viewing classified ads works. The application is built on forms, since it needs to send data from users to the web server. For more information about forms, see the lesson Forms in PHP.
To keep the code easy to understand, only the basic operations are included, without validation or additional processing.
In this example we will also use the login form described in the lesson Form examples. After a successful login, the user is redirected to the main ads page, called anunturi.php. It has 2 sections: "existing ads" and "add an ad".
### The "Existing ads" section
In this simple application, the existing ads are read through a function (citireDate) defined in a separate file. For this example, we chose to store the data temporarily in memory; later we will see how to store it in files or in a database. For the purpose of this example the way data is saved/read is not relevant, so we will not dwell on it.
```
<?php
# check whether any ads exist
if( count( $anunturi ) == 0 ) {
    echo '<p>No ads yet</p>';
} else {
    # ads already exist, so display them
    echo '<table>';
    # use a loop that walks through the array and displays its data
    foreach($anunturi as $anunt) {
        echo '<tr>';
        echo '<td>', $anunt[ 'nume' ], '</td>';
        echo '<td>', $anunt[ 'telefon' ], '</td>';
        echo '<td>', $anunt[ 'email' ], '</td>';
        echo '<td>', $anunt[ 'oras' ], '</td>';
        echo '<td>', $anunt[ 'continut' ], '</td>';
        echo '</tr>';
    }
    echo '</table>';
    # display how many ads exist
    echo '<p>', 'Saved ads: ', count( $anunturi ), '</p>';
}
?>
```
As you can see, the code first checks whether any data exists ( count( $anunturi ) == 0 checks the number of elements in the array). If there are no ads, a specific message is displayed. Otherwise, when saved ads exist, they are displayed using the foreach loop. For details, see the site's section on loops. Inside the loop, a table row containing the ad's data is printed. Finally, below the table, a message with the total number of records found is displayed.
### Sectiunea "Adaugare anunt"
Sub lista de anunturi, pe aceeasi pagina este afisat si formularul pentru adaugarea de noi inregistrari. Acesta este de tip POST, care trimite datele catre fisierul anunturi-post.php. Prin apasarea pe butonul "Salveaza anuntul", datele introduse vor fi transmise catre fisierul PHP de pe server si vor fi disponibile prin intermediul variabilei $_POST.
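The form markup is not reproduced here; a minimal sketch, assuming the field names used by the display code above (nume, telefon, email, oras, continut):

```html
<form method="post" action="anunturi-post.php">
    Nume: <input type="text" name="nume" /><br/>
    Telefon: <input type="text" name="telefon" /><br/>
    Email: <input type="text" name="email" /><br/>
    Oras: <input type="text" name="oras" /><br/>
    Anunt: <textarea name="continut"></textarea><br/>
    <input type="submit" value="Salveaza anuntul" />
</form>
```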
### The anunturi-post.php file
The PHP file that processes the data is included below. As in the login example, checks are made to ensure that a submit actually took place. If the data is available in the $_POST variable, it is saved with the scriereDate function. In this small application the save function always returns true, so the success message will always be displayed.
```
<?php
# check whether the form was submitted
if( isset($_POST) && !empty($_POST['nume']) ) {
    # save the data using the scriereDate function
    if( scriereDate($_POST) ) $mesaj = 'Data saved successfully!';
    else $mesaj = 'The data could not be saved';
} else {
    # no POST was made; maybe someone opened this page directly in the browser?
    $mesaj = 'Nothing to display here.';
}
echo $mesaj;
?>
```
### Testing the application
The application is available for testing here (click to access). The login credentials are the same (admin / ghiceste-Ma).
You can download the PHP files and try them on your local server.
* index.php (the login page)
* anunturi.php (the main file)
* anunturi-post.php (the file that processes the form data)
* acces-date.php (the file that reads/saves the data)
Warning! The code uses a simplistic way of checking authentication, useful only for understanding the code. Do not use this kind of check in a real application. A correct authentication example can be found on the page PHP login form. Integrating it with the ads application is left as an exercise for the reader.
You explained everything very well; looking forward to more tutorials like this one :)
Where are the messages saved? I would like to clean them up from time to time but I cannot find where I can delete the ones that were added.
In the example on the site the ads are saved in the session and are kept as long as the browser stays open. When you close the browser they are lost. More details about sessions here: http://php.punctsivirgula.ro/sesiuni/ In the example on the site, see the file anunturi-acces-date-sesiune.php. If you need a different saving mechanism (files or databases), it is enough to modify that file.
Well done! This tutorial is great, everything is explained very well... keep it up!
Hi, how could I modify this code so the data goes into a database? Thank you!
## A complete PHP login form
The lesson Form examples presented a simplified login form that cannot be used in a real web application. Its purpose was to show the steps performed by the PHP interpreter during the login process. In what follows we present a complete login form, with all the operations needed to provide better security.
This example is a complex one (its difficulty level is medium-advanced) and brings together several notions presented on the site. If you have not mastered the following lessons, you should re-read them to make sure you understand the code presented here.
The login procedure is the same as the one presented in the lesson Form examples; only the adjacent operations performed after login and the way the user is marked as authenticated differ.
So we have a simple form that lets users submit their credentials, and a PHP code sequence that verifies the submitted data. Things get more complicated when we mark the user as authenticated and when we check access, since this form also allows keeping the user logged in for several days. In addition, an attempt is made to follow some minimal security practices.
The code is split into several files, presented below. For now, the user name and password are the same as in the previous examples. Since the purpose of this example is to present the mechanism of a login form, we will not insist on the actual method of verifying the name and password. That is why, here too, the user name and password are checked by direct comparison with values written in the code. Implementing a verification method that queries a database is left as an exercise for the reader.
### The login form (login.php)
As mentioned, the general mechanism is the same as for any simple form:
* if data was submitted (if $_POST['...'] exists), execution continues with the PHP code (in the first if); otherwise the HTML code (the form) at the end of the file is displayed;
* validation is performed (in this concrete case the name and password only need to be non-empty); if there are problems with the submitted data, an error message is set and the HTML code is displayed;
* otherwise, if the data is fine, a user with that name and password is looked up (checkUserPass);
* if the user could not be found, the HTML code is displayed with an error message; otherwise the user is marked as authenticated (markLoggedIn).
Besides this flow, a few other checks specific to a login application are performed: whether the user is already authenticated, whether the user requested to be logged out, and others. Since these are not of interest here, they will not be discussed, but you can see them, along with their comments, in the example's source files (see below).
```
<?php
# if an error message was set, display it above the form
if( !empty($error) ) {
    echo '<p>', $error, '</p>';
}
?>
```
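The HTML form at the end of login.php is not reproduced above; a minimal sketch consistent with the fields read by markLoggedIn ('user' and 'keep'; the password field name 'pass' is an assumption):

```html
<form method="post" action="login.php">
    User: <input type="text" name="user" /><br/>
    Parola: <input type="password" name="pass" /><br/>
    <label><input type="checkbox" name="keep" value="1" /> Pastreaza-ma logat</label><br/>
    <input type="submit" value="Login" />
</form>
```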
The most important piece of code in this process is the call to the markLoggedIn function, executed after the user is validated.
### The markLoggedIn function (functii.php)
This function is essential because it saves information about the user in the session: the user name, the IP, the login time, the last page access, etc. Using the session mechanism offered by PHP, the data remains persistent and available to other pages, so on any access it is possible to check whether the user is authenticated or not.
Moreover, it is also possible to save a cookie that allows automatic authentication even after a PHP session ends (after the browser is closed). The cookie is saved only if the user requests it.
Worth mentioning is that you can always tell how a user logged in, based on the value of the 'via' element ($_SESSION['LOGIN']['via']). Initially it has the value 'form', since the first login happens through the form, but on automatic cookie authentication it will have a different value. Why is this useful? Depending on the application, additional safety measures can be implemented, such as asking cookie-authenticated users to re-enter their password when performing sensitive operations.
The body of the function is presented below.
```
# define the function that marks the user as authenticated
function markLoggedIn() {
    $username = $_POST['user'];    // the user from the form
    $keep = $_POST['keep'];        // the checkbox from the form
    $ip = $_SERVER[ 'REMOTE_ADDR' ]; // the visitor's IP
    $via = 'form';                 // logged in via the form, not via cookie
    // build a data structure
    $data = array();
    $data[ 'loggedIn' ] = true;
    $data[ 'username' ] = $username;
    $data[ 'loginDate' ] = time();
    $data[ 'lastAccess' ] = time();
    $data[ 'keepLoggedIn' ] = $keep;
    $data[ 'ip' ] = $ip;
    $data[ 'via' ] = $via;
    // keep it in the session
    $_SESSION[ 'LOGIN' ] = $data;
    // if the login must be remembered, create a cookie
    if( $keep == 1 ) {
        /* set a cookie containing the structure built above, expiring
         * in 30 days; the data structure is serialized, i.e. turned into
         * a format that can be stored as text; after serialization the
         * resulting text is encoded with the base64 algorithm and the
         * character '1' is prepended to make decoding harder */
        setcookie( 'logindata', '1' . base64_encode( serialize( $data ) ),
                   time() + 2592000, '/' );
    } else {
        // delete the cookie by setting its expiry to a date in the past
        setcookie( 'logindata', "", time() - 36000, '/' );
    }
    // now that the data is stored in the session (and possibly in cookies), redirect
    header('Location: index.php');
    exit;
}
```
Notice that, once the data has been stored in the session, the user is redirected to another page. This is easy to understand: the login has just succeeded, so there is no point in showing the login form any longer. The redirect is done by sending a header and stopping the execution of the PHP script.
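The cookie encoding used above (serialize, then base64, then a prepended '1') can be tried in isolation; this standalone sketch round-trips a login structure the same way markLoggedIn writes it and getAuthCode reads it back:

```php
<?php
// build a structure like the one markLoggedIn stores (sample values)
$data = array('loggedIn' => true, 'username' => 'admin', 'via' => 'form');

// encode: serialize, base64-encode, prepend the control character '1'
$cookieValue = '1' . base64_encode(serialize($data));

// decode: strip the first character, base64-decode, then unserialize
$decoded = unserialize(base64_decode(substr($cookieValue, 1)));

// the original structure survives the round trip
var_dump($decoded['username']);
?>
```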
### The protected page (index.php)
The index.php file, displayed immediately after login, like any other page that can only be accessed after authentication, must check the user's state. Thus, a protected page will verify that the user is authenticated before displaying any information. In our example, the check is done by calling the checkLogin function, as below.
### The checkLogin function (functii.php)
This function checks whether the user accessing the page is already authenticated or can be logged in automatically through cookies. The check is done with the help of a utility function, getAuthCode; if the result is negative, the user is redirected to the login page. This way, unauthenticated users cannot access a protected page.
The getAuthCode function performs the actual verification. It first looks for session data to see whether the user is already logged in, then checks whether a valid cookie exists that can allow authentication. The complete code is presented below.
```
# define a helper function that redirects to the login page
function checkLogin() {
    // if the user is not logged in, redirect to the login page
    if(getAuthCode() != 0) {
        // not authenticated; redirect, specifying the error code
        header('Location: login.php?error=' . ERR_MUST_LOGIN);
        exit;
    }
}

# define the function that checks whether the user is authenticated
function getAuthCode() {
    $error = 0;
    // first check whether the user is logged in within the current session
    if( isset($_SESSION[ 'LOGIN' ]) ) {
        $data = $_SESSION[ 'LOGIN' ];
        // session data exists, verify it
        if( empty($data) || empty($data['loggedIn']) ) {
            // the data is empty
            $error = ERR_MUST_LOGIN;
        } else {
            // if not marked as 'keep me logged in', check the time of the
            // last access and log the user out if inactive for too long
            $elapsed = time() - $data['lastAccess'];
            if( empty($data['keepLoggedIn']) && $elapsed > LOGIN_TIMEOUT ) {
                // log the user out with an error message
                markLoggedOut(ERR_TIMEOUT);
            }
        }
    } else if( isset($_COOKIE['logindata']) ) {
        // no session data, but a cookie exists
        // strip the first (control) character from the cookie text,
        // then decode using the base64 algorithm
        $formaSerializata = base64_decode(substr($_COOKIE['logindata'], 1));
        if( $formaSerializata === false ) {
            // the cookie content is invalid
            $error = ERR_MUST_LOGIN;
        } else {
            // rebuild the data structure from its serialized form
            $data = unserialize( $formaSerializata );
            if( empty($data) || !is_array($data)) {
                // the data is invalid
                $error = ERR_MUST_LOGIN;
            } else {
                // mark the user as logged in via cookie
                $data['via'] = 'cookie';
                // store the data in the session
                $_SESSION[ 'LOGIN' ] = $data;
            }
        }
    } else {
        // not logged in; mark the session as not-logged-in
        $_SESSION[ 'LOGIN' ] = array('loggedIn' => false);
        $error = ERR_MUST_LOGIN;
    }
    // if logged in, update the time of the last access
    if( $error == 0 ) {
        $_SESSION[ 'LOGIN' ]['lastAccess'] = time();
    }
    return $error;
}
```
It is very important that all protected pages call the checkLogin function (and, implicitly, getAuthCode), both to protect the content from unauthenticated users and to update the 'lastAccess' component saved in the session. When a logged-in user does not ask to stay authenticated (so no cookie is saved), on every new page access the code checks whether the interval between the last two accesses exceeded an established limit, in which case the user is logged out for inactivity. The time of the last access must therefore be updated as the user navigates, and this is done in the getAuthCode function.
The markLoggedOut function is the one that performs the logout; concretely, it deletes all the session data and the login cookie (its body can be found in the functii.php file, available for download below).
### Testing the login form
The application is available for testing here (click to access). The correct login credentials are the same as in the other examples on the site: admin / ghiceste-Ma.
If you want to try the files on your local server, you can download them below.
* index.php (the start page)
* index-home.php (the personal page)
* login.php (the login page)
* config.php (configuration file)
* functii.php (the file containing various helper functions)
## PHP editors
Since the syntax of the PHP language is relatively simple and the structure of the source code is not imposed, PHP code can be written with any text editor, such as Notepad. In reality, this is feasible only for scripts of a few lines; for large and complex files, a specialized PHP editor is almost indispensable.
There are several source-code editors that can be used for PHP, from complex IDEs to simple editors. All of them offer keyword highlighting, automatic statement completion, advanced search facilities and more. You can use any of them, depending on your needs.
### Eclipse for PHP
Eclipse is an extensible integrated development environment (IDE) with a multitude of features. Initially used for development in the Java language, it was later extended to other programming languages.
Eclipse PDT (Eclipse PHP Development Tools) is currently the most advanced PHP editor. It is free and can run on any operating system. More information can be found on the Eclipse PDT project site.
# Eclipse PDT installation instructions
* Install Java on your computer, if you do not have it already. The Java platform is required to run Eclipse.
* Download the Eclipse for PHP package from http://www.zend.com/en/community/pdt/downloads and unpack it in the desired location.
* Run the Eclipse executable.
* At startup, choose a location where all the PHP files will be saved. It is recommended that this location be inside the web server's document root (for example C:\Program Files\EasyPHP-xx\www).
* After the Eclipse editor starts, choose Open Perspective from the Window menu, then Other. Select PHP.
* Once the PHP interface appears, you can start creating PHP scripts.
The Eclipse editor is centered on the idea of a project. A project is a suite of several PHP files, grouped in the same directory. It is advisable to create a new project for each PHP lesson on the site, in which you can save several different scripts.
To create a new project, choose New from the File menu, then Other. From the PHP category choose PHP Project and fill in the details. Once the project is created, you can add new scripts (from File, New, PHP File). You can see where a file is saved on disk by opening its properties (right-click, Properties).
Note: in order to access the scripts created with Eclipse through localhost, they must be saved in the document root.
### Netbeans
Netbeans comes from the same Java programming area. It is a complete IDE that was recently extended for working with PHP. Like Eclipse, it can be used on any operating system since it runs on the Java platform. It can be downloaded for free from http://netbeans.org/downloads/ (choose the PHP bundle).
Working in Netbeans is similar to working in Eclipse. Here too we have the concept of a project, and in order to create PHP scripts you first need to create a PHP project. All the other remarks made about Eclipse remain valid here.
### Zend Studio and Adobe Dreamweaver
Zend Studio and Adobe Dreamweaver are two commercial IDEs for PHP programming. Zend Studio is based on Eclipse PDT and specializes exclusively in PHP development, while Adobe Dreamweaver can also be used for web design. Both editors can be downloaded for evaluation (trial versions), but continued use requires purchasing a license.
### Simple editors
Besides the IDEs above, there is a range of simple, free text editors that offer support for PHP programming. They are not as advanced as Eclipse or Netbeans, but they have enough features to be used successfully, day by day, for writing PHP source code.
The best known among them are:
* Notepad++ (Windows)
* PsPad (Windows)
* SciTE (Windows si Linux)
* Crimson Editor (Windows)
For beginners, a simple PHP editor is most often the ideal choice, thanks to its straightforward workflow and graphical interface. An advanced IDE may have more features, but getting used to it can take a while. In contrast, most simple editors have no concept of a project: you work directly with plain files that can be grouped manually into directories.
PHP code, like source code written in any other programming language, can take a multitude of forms, depending on each programmer's style. To keep the code easy to read and understand, it is very important to follow a series of conventions.
Writing clear code that follows well-known standards benefits both the code's author (especially when changes need to be made) and other programmers who will have access to the code (new programmers will more easily understand well-written source code).
Below are some of the most common conventions followed by a large share of programmers.
## Organizing PHP code
It is generally accepted that organizing code into several files and directories is the first step toward a project that is easy to maintain and modify. Some programming languages, such as Java, impose strict rules about placing source files in directories. In PHP, however, there are no restrictions and programmers are free to organize their code as they wish. This freedom can also have downsides: it allows some programmers to build messy projects.
That is why a few rules about code organization should be followed.
### Several source files
Split the PHP code into several files, according to what each piece of code does. Use as many auxiliary files as you need: it is better for a file to be concise, with little code that is easy to read, than to have hundreds of hard-to-follow lines. Moreover, file names should be suggestive and match their purpose.
Examples of files that may exist in a project:
* config.php - contains configuration, definitions of constants or global variables
* auth.php - access checks, authentication and the like
* db.php - provides access to the database
* util.php - defines utility functions
* search.php - defines search functions
* log.php - defines logging functions
* index.php - the main file that includes all the others
This list is neither complete nor restrictive; depending on the project, anyone can define as many files as they see fit. What matters is that they are justified and that the code they contain matches their purpose (for example, it would not be quite appropriate for a database search function to be written in the config.php file).
### Grouping into directories
As more files are created, the need arises to group them into several folders according to their purpose. A file is easier to find, and its purpose easier to understand, when it belongs to a directory with a suggestive name.
Example of a directory structure in a PHP project:
```
config/
|- site.php
|- db.php
utils/
|- auth.php
|- db.php
|- util.php
|- log.php
|- search.php
pages/
|- about.php
|- portfolio.php
|- contact.php
|- search.php
admin/
|- users.php
|- posts.php
data/
|- portfolio.xml
|- products.xml
resources/
|- site.css
|- logo.png
|- favicon.png
index.php
```
### Keeping PHP files separate from other resources
In the example above you can see that there are separate folders for non-PHP files. It is important that these files not be placed in the same directories as the PHP files, for several reasons: organization, security, flexibility in configuring the web server, etc. As much as possible, try to keep source files separate from other, static resources.
## Formatting source code
A decisive factor that can make the difference between clear source code and source code that is hard to understand is the programming style. It is true that every programmer has their own style of writing code, but by following the generally accepted code formatting rules, the risk of writing bad code, and even of making mistakes, drops considerably.
There are several conventions regarding the form of source code; the most common are presented below.
### Identifiers
Identifiers are the names of variables, constants, functions and other user-defined elements. Using a consistent naming strategy makes the code more readable and reduces the risk of introducing errors, which is why mixing different naming patterns within the same project should be avoided.
Different conventions apply depending on the kind of identifier.
* Variables. Variable identifiers always start with a lowercase letter and should be representative of what the variable holds (for example: $varsta, $total, $suma rather than $v, $t or $s). Short, one-letter names are acceptable only as loop counters ($i, $j, $k), as temporary variables holding the length of a sequence ($n, $m), and for exceptions ($e). When a variable name is made up of several words, camel case is preferred: $sumaMaxima, $numeleUtilizatoruluiLogat, $fisierulCeVaFiSters, etc. Underscore-separated forms are also seen ($numar_de_copii, $versiune_browser), but they are not recommended because they make the names longer.
* Constants. Constant identifiers are written entirely in uppercase, with the component parts separated by underscores: MARIME_FISIER, NUMAR_MAXIM_PERMIS, NUME_SITE, etc.
* Functions. Function identifiers follow the same rules as variable identifiers.
* Classes and interfaces. Class identifiers follow the same rules as variable identifiers, except that the first letter is always uppercase. Examples of class names: Categorie, Produs, FacturaProforma, etc.
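Putting these conventions together in one short sketch (the identifier names below are illustrative):

```php
<?php
// Variables: lowercase first letter, camel case for multi-word names
$sumaMaxima = 100;

// Loop counters may use single letters
for ($i = 0; $i < 3; $i++) {
    echo $i;
}

// Constants: all uppercase, parts separated by underscores
define('NUMAR_MAXIM_PERMIS', 50);

// Classes: same rules as variables, but the first letter is uppercase
class FacturaProforma {}
?>
```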
### Indentation
All lines inside a block must be indented one extra tab relative to the current indentation level. This makes it easy to identify the statements that belong to the block and makes the code more readable.
Example of indented source code (recommended):
Example of unindented source code (not recommended):
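The two listings referenced above appear to have been lost in extraction; a minimal reconstruction of what they would show:

```php
// recommended: each block's body is indented one tab further
if ($varsta >= 18) {
    echo "Acces permis";
} else {
    echo "Acces interzis";
}

// not recommended: without indentation, block membership is hard to see
if ($varsta >= 18) {
echo "Acces permis";
} else {
echo "Acces interzis";
}
```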
### Parentheses
There are situations where parentheses are not mandatory; even so, it is recommended to use them where they are not strictly necessary, to improve readability. Moreover, using braces with the if statement removes any confusion about which statements are executed on each branch.
Example of a situation where round parentheses improve the readability of the code:
```
// recommended
$a = ($b = 2);
$x = (isset($varsta) ? ($varsta + 18) : 18);
if( ($a > 1) && ($b > 2) && ($c == 3) ) {
echo "Tutorial PHP";
}
// not recommended
$a = $b = 2;
$x = isset($varsta) ? $varsta + 18 : 18;
if( $a > 1 && $b > 2 && $c == 3 ) {
echo "Tutorial PHP";
}
```
### Whitespace and blank lines
In PHP, whitespace and blank lines do not influence how the code is evaluated, so any number of spaces or new lines can be inserted into the source code to make it easier to follow and understand. Their use is up to each programmer, but a common recommendation is to separate the elements of a compound expression with spaces, and statements that do different things with blank lines.
Example of using spaces in expressions:
```
// recommended
$a = $n - 1;
if( ($a > 1) && ($b > 2) && ($c == 3) ) {
echo "Tutorial PHP";
}
// not recommended
$a=$n-1;
if(($a>1)&&($b>2)&&($c==3)){
echo "Tutorial PHP";
}
```
Example of using blank lines:
```
echo "php.punctsivirgula.ro - "
echo "Tutorial PHP"
// randul nou separa instructiunile echo de instructiunea if
if( isset($limba) ) {
echo " in limba romana";
}
```
### Line length
To keep source code visible on all types of monitors, lines should not exceed 80 characters. Any line longer than this limit becomes hard to follow.
### Control statements
The component parts of the if-else, while and do-while control statements must be written on separate lines, even when they contain only a single statement. The use of braces is also encouraged, because it reduces the risk of introducing errors into the code.
```
// recommended
if($conditie) {
echo 'True';
} else {
echo 'False';
}
while($i > 0) {
echo $i++;
}
// not recommended
if($conditie) echo 'True'; else echo 'False';
while($i>0) echo $i++;
```
### Comments
Comments containing short explanations that span at most one or two lines can be introduced with // or #. They are usually used to describe the purpose of a variable or the action of the statements that immediately follow. More elaborate comments that describe an algorithm or a procedure should be delimited by /* and */.
It is recommended that, as far as possible, comments be written on their own lines and not at the end of statements or inside them.
```
// recommended
// the form data
$date = $_POST['email'];
// validate the user-submitted data
if( !empty($date) ) {
echo 'Valid';
}
// not recommended
$date = $_POST['email']; // the form data
if( !empty($date) ) { // validate the user-submitted data
echo 'Valid';
}
```
Comments are useful in any part of the source code, and it is in fact advisable to use them whenever needed. However, avoid useless comments that bring no new information or that merely describe an obvious construct.
Examples of useless comments:
```
// initialize the variable
$a = 1;
// check whether $a is defined
if(isset($a)) {
// print $a
echo $a;
}
```
## Web Programming Practices
Below are some practices useful to any programmer or web developer.
### Describing the code
Write plenty of comments explaining what the source code does. A useful practice is to briefly describe, at the beginning of each file, its content and functionality. Likewise, every function whose behaviour is not obvious should be described in a comment.
### Separating concerns
Split large problems into smaller parts that can be implemented as functions. If a function grows too large (its source code spans dozens or hundreds of lines), it has become too complex and is a good candidate for being split into several simpler functions.
It is also a good idea to keep similar functionalities physically separated, in different files.
### Separating the design from the business logic
A common practice in web programming is to separate the data-processing code from the code responsible for displaying it. This offers greater flexibility, especially when changes must be applied to either of the two parts. It also allows web programmers to work somewhat independently of web designers.
There are several ways to actually achieve this separation: either by using a templating system, or simply by keeping the display statements apart from the rest of the source code. The first option involves greater complexity and becomes worthwhile only for large projects, while the second can be applied to a project of any size.
An example of such a separation is illustrated below:
```
<?php
// business logic: choose a greeting according to the hour of the day
// (the opening lines of this listing were reconstructed; the extraction lost them)
$h = (int) date('G');
$mesaj = "Buna dimineata";
if( ($h > 11) && ($h < 18) ) {
    $mesaj = "Buna ziua";
} elseif( $h >= 18 ) {
    $mesaj = "Buna seara";
}

// display code
print <<<TXT
Tutorial PHP - php.punctsivirgula.ro
$mesaj, $nume! Azi este: $data
TXT;
?>
```
## Applications and Utilities
Below are a number of applications and extensions that can be useful to any PHP programmer, beginner or advanced.
### Eclipse Remote System Explorer
Remote System Explorer is an extension of the Eclipse editor that provides remote connection and data/file transfer over various web protocols: FTP, SFTP, SSH, etc. It is useful when you need to transfer your PHP scripts to a hosting server.
RSE can be installed directly from Eclipse, via the Help -> Install New Software menu. More details about this extension at http://wiki.eclipse.org/TM_and_RSE_FAQ
### Color Picker
Pixie is a color picker that displays the color of the object or window under the mouse cursor. It is very useful for choosing the color scheme of any element of a web page. It can be downloaded from http://nattyware.com/pixie.php.
An online alternative, using a predefined set of colors, is available on w3schools.com.
### Web Developer Tools
Most modern browsers ship with tools for inspecting, debugging and analyzing web pages, and even for modifying them temporarily. These facilities are useful to web designers and web programmers alike.
The tools are named differently depending on the browser, although they are all similar in functionality. They can usually be opened by pressing F12 once a page has loaded completely. They are: Firebug for Firefox (a separate extension), Developer Tools for Chrome or Internet Explorer (preinstalled), DragonFly for Opera (preinstalled).
### GIT
Git is a source code management (SCM) and version control system (VCS) that records every change made to your files. Git offers many other features and is especially useful when several programmers work on the same project. More details about Git at git-scm.com.
As alternatives to Git you can use Bazaar, Mercurial, or the older Subversion.
## Services
### Sharing PHP code
When you work on a more complex project and want to share it with other people, you need the source files to be stored in a location easily accessible to everyone. Ideally, that means somewhere on the internet.
Fortunately, there are sites that offer free source-code hosting, so the code can be accessed from the internet. One such site is bitbucket.com, which allows an unlimited number of projects, both public and personal (private). The source code is managed exclusively through an SCM application (Git or Mercurial), which keeps a clear record of every change made to the files of a project. Registering on BitBucket is free, as is creating projects. The only limitation is the number of people private source code can be shared with (5 people for free accounts).
As alternatives to bitbucket.com you can use:
* github.com - the free plan allows only public (open source) projects; private projects are available for a fee; code is managed through Git only
* launchpad.net - the free plan allows only public (open source) projects; private projects are available for a fee; code is managed through Bazaar only
* code.google.com - allows only public (open source) projects; code can be managed through Git, Mercurial or Subversion
* sourceforge.net - allows only public (open source) projects; code can be managed through Git, Mercurial, Bazaar, Subversion or CVS
If you do not want to manage your code with an SCM (such as Git or Subversion), you can opt for classic web-hosting services from various companies, where you can upload your PHP scripts directly over FTP. Most of the time, however, these services are paid.
### Testing PHP code online
The site ideone.com lets you test PHP code snippets directly online, without having to install PHP locally. It supports a multitude of other programming languages as well.
|
webshot2 | cran | R | Package ‘webshot2’
August 12, 2023
Title Take Screenshots of Web Pages
Version 0.1.1
Description Takes screenshots of web pages, including Shiny applications
and R Markdown documents. 'webshot2' uses headless Chrome or Chromium
as the browser back-end.
License GPL-2
URL https://rstudio.github.io/webshot2/,
https://github.com/rstudio/webshot2
BugReports https://github.com/rstudio/webshot2/issues
Depends R (>= 3.2)
Imports callr, chromote (>= 0.1.0), later, magrittr, promises
Suggests httpuv, rmarkdown, shiny
Encoding UTF-8
Language en-US
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [ctb] (<https://orcid.org/0000-0001-9986-114X>),
Posit Software, PBC [cph, fnd]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-11 22:00:02 UTC
R topics documented:
appshot
resize
rmdshot
shrink
webshot
appshot Take a screenshot of a Shiny app
Description
appshot performs a webshot using two different methods depending upon the object provided. If a
’character’ is provided (pointing to an app.R file or app directory) an isolated background R process
is launched to run the Shiny application. The current R process then captures the webshot. When
a Shiny application object is supplied to appshot, it is reversed: the Shiny application runs in the
current R process and an isolated background R process is launched to capture a webshot. The
reason it is reversed in the second case has to do with scoping: although it would be preferable to
run the Shiny application in a background process and call webshot from the current process, with
Shiny application objects, there are potential scoping errors when run this way.
Usage
appshot(
app,
file = "webshot.png",
...,
port = getOption("shiny.port"),
envvars = NULL
)
## S3 method for class 'character'
appshot(
app,
file = "webshot.png",
...,
port = getOption("shiny.port"),
envvars = NULL
)
## S3 method for class 'shiny.appobj'
appshot(
app,
file = "webshot.png",
...,
port = getOption("shiny.port"),
envvars = NULL,
webshot_timeout = 60
)
Arguments
app A Shiny app object, or a string naming an app directory.
file A vector of names of output files. Should end with an image file type (.png,
.jpg, .jpeg, or .webp) or .pdf. If several screenshots have to be taken and
only one filename is provided, then the function appends the index number of
the screenshot to the file name. For PDF output, it is just like printing the page
to PDF in a browser; selector, cliprect, expand, and zoom will not be used
for PDFs.
... Other arguments to pass on to webshot.
port Port that Shiny will listen on.
envvars A named character vector or named list of environment variables and values to
set for the Shiny app’s R process. These will be unset after the process exits.
This can be used to pass configuration information to a Shiny app.
webshot_timeout
The maximum number of seconds the phantom application is allowed to run
before killing the process. If a delay argument is supplied (in ...), the delay
value is added to the timeout value.
Value
Invisibly returns the normalized path to all screenshots taken. The character vector will have a class
of ‘"webshot"‘.
Examples
if (interactive()) {
appdir <- system.file("examples", "01_hello", package="shiny")
# With a Shiny directory
appshot(appdir, "01_hello.png")
# With a Shiny App object
shinyapp <- shiny::shinyAppDir(appdir)
appshot(shinyapp, "01_hello_app.png")
}
resize Resize an image
Description
This resizes an image to the given geometry. It requires GraphicsMagick (recommended) or ImageMagick to be installed.
Usage
resize(filename, geometry)
Arguments
filename Character vector containing the path of images to resize.
geometry Scaling specification. Can be a percent, as in "50%", or pixel dimensions like
"120x120", "120x", or "x120". Any valid ImageMagick geometry specification
can be used. If filename contains multiple images, this can be a vector to
specify distinct sizes for each image.
Value
The ‘filename‘ supplied but with a class value of ‘"webshot"‘.
Examples
if (interactive()) {
# Can be chained with webshot() or appshot()
webshot("https://www.r-project.org/", "r-small-1.png") %>%
resize("75%")
# Generate image that is 400 pixels wide
webshot("https://www.r-project.org/", "r-small-2.png") %>%
resize("400x")
}
rmdshot Take a snapshot of an R Markdown document
Description
This function can handle both static Rmd documents and Rmd documents with runtime: shiny.
Usage
rmdshot(
doc,
file = "webshot.png",
...,
delay = NULL,
rmd_args = list(),
port = getOption("shiny.port"),
envvars = NULL
)
Arguments
doc The path to a Rmd document.
file A vector of names of output files. Should end with an image file type (.png,
.jpg, .jpeg, or .webp) or .pdf. If several screenshots have to be taken and
only one filename is provided, then the function appends the index number of
the screenshot to the file name. For PDF output, it is just like printing the page
to PDF in a browser; selector, cliprect, expand, and zoom will not be used
for PDFs.
... Other arguments to pass on to webshot.
delay Time to wait before taking screenshot, in seconds. Sometimes a longer delay is
needed for all assets to display properly. If NULL (the default), then it will use
0.2 seconds for static Rmd documents, and 3 seconds for Rmd documents with
runtime:shiny.
rmd_args A list of additional arguments to pass to either render (for static Rmd docu-
ments) or run (for Rmd documents with runtime:shiny).
port Port that Shiny will listen on.
envvars A named character vector or named list of environment variables and values to
set for the Shiny app’s R process. These will be unset after the process exits.
This can be used to pass configuration information to a Shiny app.
Value
Invisibly returns the normalized path to all screenshots taken. The character vector will have a class
of ‘"webshot"‘.
Examples
if (interactive()) {
# R Markdown file
input_file <- system.file("examples/knitr-minimal.Rmd", package = "knitr")
rmdshot(input_file, "minimal_rmd.png")
# Shiny R Markdown file
input_file <- system.file("examples/shiny.Rmd", package = "webshot")
rmdshot(input_file, "shiny_rmd.png", delay = 5)
}
shrink Shrink file size of a PNG
Description
This does not change size of the image in pixels, nor does it affect appearance – it is lossless
compression. This requires the program optipng to be installed.
Usage
shrink(filename)
Arguments
filename Character vector containing the path of images to resize. Must be PNG files.
Details
If other operations like resizing are performed, shrinking should occur as the last step. Otherwise,
if the resizing happens after file shrinking, it will be as if the shrinking didn’t happen at all.
Value
The ‘filename‘ supplied but with a class value of ‘"webshot"‘.
Examples
if (interactive()) {
webshot("https://www.r-project.org/", "r-shrink.png") %>%
shrink()
}
webshot Take a screenshot of a URL
Description
Take a screenshot of a URL
Usage
webshot(
url = NULL,
file = "webshot.png",
vwidth = 992,
vheight = 744,
selector = NULL,
cliprect = NULL,
expand = NULL,
delay = 0.2,
zoom = 1,
useragent = NULL,
max_concurrent = getOption("webshot.concurrent", default = 6),
quiet = getOption("webshot.quiet", default = FALSE)
)
Arguments
url A vector of URLs to visit. If multiple URLs are provided, it will load and take
screenshots of those web pages in parallel.
file A vector of names of output files. Should end with an image file type (.png,
.jpg, .jpeg, or .webp) or .pdf. If several screenshots have to be taken and
only one filename is provided, then the function appends the index number of
the screenshot to the file name. For PDF output, it is just like printing the page
to PDF in a browser; selector, cliprect, expand, and zoom will not be used
for PDFs.
vwidth Viewport width. This is the width of the browser "window".
vheight Viewport height. This is the height of the browser "window".
selector One or more CSS selectors specifying a DOM element to set the clipping rect-
angle to. The screenshot will contain these DOM elements. For a given selector,
if it has more than one match, all matching elements will be used. This option
is not compatible with cliprect. When taking screenshots of multiple URLs,
this parameter can also be a list with same length as url with each element of
the list containing a vector of CSS selectors to use for the corresponding URL.
cliprect Clipping rectangle. If cliprect and selector are both unspecified, the clip-
ping rectangle will contain the entire page. This can be the string "viewport",
in which case the clipping rectangle matches the viewport size, or it can be a
four-element numeric vector specifying the left, top, width, and height. (Note
that the order of left and top is reversed from the original webshot package.)
When taking screenshots of multiple URLs, this parameter can also be a list
with same length as url with each element of the list being "viewport" or a
four-element numeric vector. This option is not compatible with selector.
expand A numeric vector specifying how many pixels to expand the clipping rectangle
by. If one number, the rectangle will be expanded by that many pixels on all
sides. If four numbers, they specify the top, right, bottom, and left, in that order.
When taking screenshots of multiple URLs, this parameter can also be a list with
same length as url with each element of the list containing a single number or
four numbers to use for the corresponding URL.
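The expansion rule described above can be expressed compactly; an illustrative Python sketch (this is not the package's internal code):

```python
def expand_rect(rect, expand):
    """Expand a clipping rectangle.

    rect   -- (left, top, width, height)
    expand -- a single number (all sides) or (top, right, bottom, left)
    """
    left, top, width, height = rect
    if isinstance(expand, (int, float)):
        t = r = b = l = expand
    else:
        t, r, b, l = expand
    # Moving the origin left/up grows the rectangle on those sides.
    return (left - l, top - t, width + l + r, height + t + b)

print(expand_rect((200, 5, 400, 300), 10))               # (190, -5, 420, 320)
print(expand_rect((200, 5, 400, 300), (10, 50, 0, 50)))  # (150, -5, 500, 310)
```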
delay Time to wait before taking screenshot, in seconds. Sometimes a longer delay is
needed for all assets to display properly.
zoom A number specifying the zoom factor. A zoom factor of 2 will result in twice as
many pixels vertically and horizontally. Note that using 2 is not exactly the same
as taking a screenshot on a HiDPI (Retina) device: it is like increasing the zoom
to 200%, doubling the height and width of the browser window. This differs from
using a HiDPI device because some web pages load different, higher-resolution
images when they know they will be displayed on a HiDPI device (but using
zoom will not report that there is a HiDPI device).
useragent The User-Agent header used to request the URL.
max_concurrent (Currently not implemented)
quiet If ‘TRUE‘, status updates via console messages are suppressed.
Value
Invisibly returns the normalized path to all screenshots taken. The character vector will have a class
of ‘"webshot"‘.
Examples
if (interactive()) {
# Whole web page
webshot("https://github.com/rstudio/shiny")
# Might need a delay for all assets to display
webshot("http://rstudio.github.io/leaflet", delay = 0.5)
# One can also take screenshots of several URLs with only one command.
# This is more efficient than calling 'webshot' multiple times.
webshot(c("https://github.com/rstudio/shiny",
"http://rstudio.github.io/leaflet"),
delay = 0.5)
# Clip to the viewport
webshot("http://rstudio.github.io/leaflet", "leaflet-viewport.png",
cliprect = "viewport")
# Specific size
webshot("https://www.r-project.org", vwidth = 1600, vheight = 900,
cliprect = "viewport")
# Manual clipping rectangle
webshot("http://rstudio.github.io/leaflet", "leaflet-clip.png",
cliprect = c(200, 5, 400, 300))
# Using CSS selectors to pick out regions
webshot("http://rstudio.github.io/leaflet", "leaflet-menu.png", selector = ".list-group")
# With multiple selectors, the screenshot will contain all selected elements
webshot("http://reddit.com/", "reddit-top.png",
selector = c("[aria-label='Home']", "input[type='search']"))
# Expand selection region
webshot("http://rstudio.github.io/leaflet", "leaflet-boxes.png",
selector = "#installation", expand = c(10, 50, 0, 50))
# If multiple matches for a given selector, it will take a screenshot that
# contains all matching elements.
webshot("http://rstudio.github.io/leaflet", "leaflet-p.png", selector = "p")
webshot("https://github.com/rstudio/shiny/", "shiny-stats.png",
selector = "ul.numbers-summary")
# Result can be piped to other commands like resize() and shrink()
webshot("https://www.r-project.org/", "r-small.png") %>%
resize("75%") %>%
shrink()
} |
lbd-mindmap | npm | JavaScript | Mind Map Vue 2 Component
===
> A mind-map Vue component inspired by [MindNode](https://mindnode.com), built on d3.js
> Features implemented so far include basic editing, dragging, zooming, undo, a context menu, node collapsing...
Online demo: <https://mindnode.5xin.xyz/>

Recent Updates
---
> This project is essentially no longer maintained
> A Vue 3, d3 v6 version of the [mind-map component](https://github.com/hellowuxin/vue3-mindmap) is currently in development; your support is welcome
Installation
---
```
npm install @hellowuxin/mindmap
```
PROPS
---
| Name | Type | Default | Description |
| --- | --- | --- | --- |
| v-model | Array | undefined | Sets the mind-map data |
| width | Number | 100% | Sets the component width |
| height | Number | undefined | Sets the component height |
| xSpacing | Number | 80 | Sets the horizontal spacing between nodes |
| ySpacing | Number | 20 | Sets the vertical spacing between nodes |
| strokeWidth | Number | 4 | Sets the width of the connecting lines |
| draggable | Boolean | true | Whether nodes can be dragged |
| gps | Boolean | true | Whether to show the centering button |
| fitView | Boolean | true | Whether to show the zoom-to-fit button |
| showNodeAdd | Boolean | true | Whether to show the add-node button |
| keyboard | Boolean | true | Whether to respond to keyboard events |
| contextMenu | Boolean | true | Whether to show the right-click context menu |
| zoomable | Boolean | true | Whether the map can be zoomed and panned |
| showUndo | Boolean | true | Whether to show the undo/redo buttons |
| download | Boolean | true | Whether to show the download button |
EVENTS
---
| Name | arguments | Description |
| --- | --- | --- |
| updateNodeName | data, id | Emitted when a node's name is updated; receives the node data and node id |
| click | data, id | Emitted when a node is clicked; receives the node data and node id |
Example
---
```
<template>
<mindmap v-model="data"></mindmap>
</template>

<script>
import mindmap from '@hellowuxin/mindmap'
export default {
components: { mindmap },
data: () => ({
data: [{
"name":"如何学习D3",
"children": [
{
"name":"预备知识",
"children": [
{ "name":"HTML & CSS" },
{ "name":"JavaScript" },
...
]
},
{
"name":"安装",
"_children": [
{ "name": "折叠节点" }
]
},
{
"name":"进阶",
"left": true
},
...
]
}]
})
}
</script>
```
Keyboard Events
---
`⇥ tab`、`⏎ enter`、`⌫ backspace`、`⌘ cmd`+`z`、`⌘ cmd`+`y`
Interaction Logic
---
**Mouse**: space + left-button drag to pan, right-click for the context menu, ctrl + scroll to zoom, left-click to select
**Trackpad**: two-finger scroll to pan, two-finger tap for the context menu, pinch to zoom, single-finger tap to select
To Do
---
* [ ] Export to multiple formats
* [ ] Configurable node width and height
* [ ] Multiple root nodes
* [ ] ...
Readme
---
### Keywords
* mindmap
* vue
* d3 |
normalizeH | cran | R | Package ‘normalizeH’
October 13, 2022
Version 1.0.0
Date 2022-07-20
Title Normalize Hadamard Matrix
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Depends R (>= 4.2.0)
Suggests HadamardR
Description Normalize a given Hadamard matrix. A Hadamard matrix is said to be normalized
when its first row and first column entries are all 1, see Hedayat, A. and Wallis, W. D. (1978)
``Hadamard matrices and their applications. The Annals of Statistics, 1184-1238.''
<doi:10.1214/aos/1176344370>.
License GPL (>= 2)
NeedsCompilation no
Repository CRAN
Date/Publication 2022-07-21 12:30:02 UTC
R topics documented:
normalizeH
normalizeH Normalized Hadamard Matrix
Description
Converts a given Hadamard matrix to its normalized form
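The normalization amounts to sign-flipping every row whose first entry is -1 and then every column whose first entry is -1. A small illustrative sketch in Python (the package itself is R; this is not its code):

```python
def normalize_hadamard(H):
    """Return a copy of the +/-1 matrix H with its first row and column all +1."""
    # Flip the sign of rows whose first entry is -1.
    H = [[v * row[0] for v in row] for row in H]
    # Flip the sign of columns whose first entry (in the sign-fixed first row) is -1.
    signs = H[0]
    return [[v * s for v, s in zip(row, signs)] for row in H]

H = [[-1, 1], [1, 1]]
print(normalize_hadamard(H))  # [[1, 1], [1, -1]]
```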
Usage
normalizeH(H)
Arguments
H A Hadamard matrix
Value
A normalized Hadamard matrix of the same dimension as the input matrix.
Author(s)
<NAME> <<EMAIL>>
Examples
H = matrix(c(1,1,1,-1),nrow = 2)
normalizeH(H)
require(HadamardR)
h8 <- Hadamard_Matrix(8)
normalizeH(h8) |
bvartools | cran | R | Package ‘bvartools’
August 31, 2023
Title Bayesian Inference of Vector Autoregressive and Error Correction
Models
Version 0.2.3
Date 2023-08-23
Description
Assists in the set-up of algorithms for Bayesian inference of vector autoregressive (VAR)
and error correction (VEC) models. Functions for posterior simulation, forecasting,
impulse response analysis and forecast error variance decomposition are largely based
on the introductory texts of Chan, Koop, Poirier and Tobias (2019, ISBN: 9781108437493),
Koop and Korobilis (2010) <doi:10.1561/0800000013> and Luetkepohl (2006, ISBN: 9783540262398).
License GPL (>= 2)
Depends R (>= 3.4.0), coda
Imports grDevices, graphics, methods, parallel, Rcpp (>= 0.12.14),
stats
LinkingTo Rcpp, RcppArmadillo
Encoding UTF-8
RoxygenNote 7.2.3
URL https://github.com/franzmohr/bvartools
BugReports https://github.com/franzmohr/bvartools/issues
Suggests knitr, rmarkdown
VignetteBuilder knitr
Collate 'RcppExports.R' 'add_priors.R' 'add_priors.bvarmodel.R'
'add_priors.bvecmodel.R' 'add_priors.dfmodel.R' 'bvar.R'
'bvar_fill_helper.R' 'bvarpost.R' 'bvartools-package.R'
'bvec.R' 'bvec_to_bvar.R' 'bvecpost.R' 'data.R' 'dfm.R'
'dfmpost.R' 'draw_posterior.R' 'draw_posterior.bvarmodel.R'
'draw_posterior.bvecmodel.R' 'draw_posterior.dfmodel.R'
'fevd.R' 'fevd.bvar.R' 'gen_dfm.R' 'gen_var.R' 'gen_vec.R'
'get_regressor_names.R' 'inclusion_prior.R' 'irf.R'
'irf.bvar.R' 'minnesota_prior.R' 'plot.bvar.R'
'plot.bvarfevd.R' 'plot.bvarirf.R' 'plot.bvarlist.R'
'plot.bvarprd.R' 'plot.bvec.R' 'plot.dfm.R' 'predict.bvar.R'
'summary.bvar.R' 'print.summary.bvar.R' 'summary.bvec.R'
'print.summary.bvec.R' 'ssvs_prior.R' 'summary.bvarlist.R'
'summary.dfm.R' 'thin.bvar.R' 'thin.bvarlist.R' 'thin.bvec.R'
'thin.dfm.R' 'tvpribbon.R' 'zzz.R'
NeedsCompilation yes
Author <NAME> [aut, cre] (0009-0003-8890-7781)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-30 22:20:04 UTC
R topics documented:
add_priors
add_priors.bvarmodel
add_priors.bvecmodel
add_priors.dfmodel
bem_dfmdata
bvar
bvarpost
bvec
bvecpost
bvec_to_bvar
bvs
dfm
dfmpost
draw_posterior
draw_posterior.bvarmodel
draw_posterior.bvecmodel
draw_posterior.dfmodel
e1
e6
fevd
fevd.bvar
gen_dfm
gen_var
gen_vec
inclusion_prior
irf
irf.bvar
kalman_dk
loglik_normal
minnesota_prior
plot.bvarlist
plot.bvarprd
post_coint_kls
post_coint_kls_sur
post_normal
post_normal_sur
ssvs
ssvs_prior
stochvol_ksc1994
stochvol_ocsn2007
stoch_vol
summary.bvar
summary.bvarlist
summary.bvec
summary.dfm
thin.bvar
thin.bvarlist
thin.bvec
thin.dfm
us_macrodata
add_priors Add Priors to Bayesian Models
Description
Add Priors to Bayesian Models
A generic function used to generate prior specifications for a list of models. The function invokes
particular methods which depend on the class of the first argument.
Usage
add_priors(object, ...)
Arguments
object an object of class "bvarmodel" or "bvecmodel".
... arguments passed forward to method.
Examples
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Obtain data matrices
model <- gen_var(e1, p = 2, deterministic = 2,
iterations = 100, burnin = 10)
# Chosen number of iterations and burn-in draws should be much higher.
# Add prior specifications
model <- add_priors(model)
add_priors.bvarmodel Add Priors for Vector Autoregressive Models
Description
Adds prior specifications to a list of models, which was produced by function gen_var.
Usage
## S3 method for class 'bvarmodel'
add_priors(
object,
coef = list(v_i = 1, v_i_det = 0.1, shape = 3, rate = 1e-04, rate_det = 0.01),
sigma = list(df = "k", scale = 1, mu = 0, v_i = 0.01, sigma_h = 0.05),
ssvs = NULL,
bvs = NULL,
...
)
Arguments
object a list, usually, the output of a call to gen_var.
coef a named list of prior specifications for the coefficients of the models. For the de-
fault specification all prior means are set to zero and the diagonal elements of the
inverse prior variance-covariance matrix are set to 1 for coefficients correspond-
ing to non-deterministic and structural terms. For deterministic coefficients the
prior variances are set to 10 via v_i_det = 0.1. The variances need to be spec-
ified as precisions, i.e. as inverses of the variances. For further specifications
such as the Minnesota prior see ’Details’.
sigma a named list of prior specifications for the error variance-covariance matrix of
the models. For the default specification of an inverse Wishart distribution the
prior degrees of freedom are set to the number of endogenous variables and the
prior variances to 1. See ’Details’.
ssvs optional; a named list of prior specifications for the SSVS algorithm. Not al-
lowed for TVP models. See ’Details’.
bvs optional; a named list of prior specifications for the BVS algorithm. See ’De-
tails’.
... further arguments passed to or from other methods.
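As a quick check of the default scales described above: the precisions are inverses of variances, so v_i_det = 0.1 corresponds to a prior variance of 10. An illustrative one-liner (Python for demonstration, not package code):

```python
def precision_to_variance(v_i):
    # Prior precisions are the inverses of prior variances.
    return 1.0 / v_i

print(precision_to_variance(1))    # 1.0  (non-deterministic coefficients, v_i = 1)
print(precision_to_variance(0.1))  # 10.0 (deterministic terms, v_i_det = 0.1)
```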
Details
The arguments of the function require named lists. Possible specifications are described in the
following. Note that it is important to specify the priors in the correct list. Otherwise, the provided
specification will be disregarded and default values will be used.
Argument coef can contain the following elements
v_i a numeric specifying the prior precision of the coefficients. Default is 1.
v_i_det a numeric specifying the prior precision of coefficients corresponding to deterministic
terms. Default is 0.1.
coint_var a logical specifying whether the prior mean of the first own lag of an endogenous
variable should be set to 1. Default is FALSE.
const a numeric or character specifying the prior mean of coefficients, which correspond to the
intercept. If a numeric is provided, all prior means are set to this value. If const = "mean",
the mean of the respective endogenous variable is used as prior mean. If const = "first",
the first value of the respective endogenous variable is used as prior mean.
minnesota a list of length 4 containing parameters for the calculation of the Minnesota prior, where
the element names must be kappa0, kappa1, kappa2 and kappa3. For the endogenous variable
i the prior variance of the lth lag of regressor j is obtained as

    κ0 / l^2                             for own lags of endogenous variables,
    (κ0 κ1 / l^2) (σi^2 / σj^2)          for endogenous variables other than own lags,
    (κ0 κ2 / (l + 1)^2) (σi^2 / σj^2)    for exogenous variables,
    κ0 κ3 σi^2                           for deterministic terms,
where σi is the residual standard deviation of variable i of an unrestricted LS estimate. For
exogenous variables σi is the sample standard deviation.
max_var a numeric specifying the maximum prior variance that is allowed for non-deterministic
coefficients.
shape a numeric specifying the prior shape parameter of the error term of the state equation. Only
used for models with time varying parameters. Default is 3.
rate a numeric specifying the prior rate parameter of the error term of the state equation. Only
used for models with time varying parameters. Default is 0.0001.
rate_det a numeric specifying the prior rate parameter of the error term of the state equation for
coefficients, which correspond to deterministic terms. Only used for models with time varying
parameters. Default is 0.01.
If minnesota is specified, v_i and v_i_det are ignored.
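The Minnesota specification above can be sketched as follows; the kappa values are purely illustrative (not package defaults) and the example assumes the package providing gen_var and add_priors is loaded:

```r
# Load example data shipped with the package
data("e1")
e1 <- diff(log(e1)) * 100

# Generate a VAR(2) model (iterations/burnin kept small for illustration)
model <- gen_var(e1, p = 2, deterministic = "const",
                 iterations = 100, burnin = 10)

# Minnesota prior: element names must be kappa0, kappa1, kappa2, kappa3;
# v_i and v_i_det are then ignored
model <- add_priors(model,
                    coef = list(minnesota = list(kappa0 = 2, kappa1 = 0.5,
                                                 kappa2 = 0.5, kappa3 = 5)))
```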
Argument sigma can contain the following elements:
df an integer or character specifying the prior degrees of freedom of the error term. Only used
if the prior is inverse Wishart. Default is "k", which indicates the number of endogenous
variables in the respective model. "k + 3" can be used to set the prior to the number of
endogenous variables plus 3. If an integer is provided, the degrees of freedom are set to this
value in all models.
scale a numeric specifying the prior error variance of endogenous variables. Default is 1.
shape a numeric or character specifying the prior shape parameter of the error term. Only used
if the prior is inverse gamma or if time varying volatilities are estimated. For models with
constant volatility the default is "k", which indicates the number of endogenous variables in
the respective model. "k + 3" can be used to set the prior to the number of endogenous
variables plus 3. If a numeric is provided, the shape parameters are set to this value in all
models. For models with stochastic volatility this prior refers to the error variance of the
state equation.
rate a numeric specifying the prior rate parameter of the error term. Only used if the prior is
inverse gamma or if time varying volatilities are estimated. For models with stochastic
volatility this prior refers to the error variance of the state equation.
mu numeric of the prior mean of the initial state of the log-volatilities. Only used for models with
time varying volatility.
v_i numeric of the prior precision of the initial state of the log-volatilities. Only used for models
with time varying volatility.
sigma_h numeric of the initial draw for the variance of the log-volatilities. Only used for models
with time varying volatility.
covar logical indicating whether error covariances should be estimated. Only used in combina-
tion with an inverse gamma prior or stochastic volatility, for which shape and rate must be
specified.
df and scale must be specified for an inverse Wishart prior. shape and rate are required for an
inverse gamma prior. For structural models or models with stochastic volatility only a gamma prior
specification is allowed.
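As a sketch of the sigma argument, an inverse gamma specification replaces df and scale by shape and rate; the values below are illustrative, not defaults:

```r
data("e1")
e1 <- diff(log(e1)) * 100
model <- gen_var(e1, p = 2, deterministic = "const",
                 iterations = 100, burnin = 10)

# Inverse gamma prior for the error variances via shape and rate
model <- add_priors(model, sigma = list(shape = 3, rate = 1e-04))
```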
Argument ssvs can contain the following elements:
inprior a numeric between 0 and 1 specifying the prior probability of a variable to be included in
the model.
tau a numeric vector of two elements containing the prior standard errors of restricted variables
(τ0) as its first element and unrestricted variables (τ1) as its second.
semiautomatic a numeric vector of two elements containing the factors by which the standard
errors associated with an unconstrained least squares estimate of the model are multiplied to
obtain the prior standard errors of restricted (τ0) and unrestricted (τ1) variables, respectively.
This is the semiautomatic approach described in George et al. (2008).
covar logical indicating if SSVS should also be applied to the error covariance matrix as in George
et al. (2008).
exclude_det logical indicating if deterministic terms should be excluded from the SSVS algo-
rithm.
minnesota a numeric vector of length 4 containing parameters for the calculation of the Minnesota-
like inclusion priors. See below.
Either tau or semiautomatic must be specified.
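A minimal sketch of an SSVS specification using the semiautomatic approach; the factors 0.1 and 10 are common illustrative choices, not package defaults:

```r
data("e1")
e1 <- diff(log(e1)) * 100
model <- gen_var(e1, p = 2, deterministic = "const",
                 iterations = 100, burnin = 10)

# SSVS prior: either tau or semiautomatic must be given
model <- add_priors(model,
                    ssvs = list(inprior = 0.5,
                                semiautomatic = c(0.1, 10),
                                exclude_det = TRUE))
```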
The argument bvs can contain the following elements
inprior a numeric between 0 and 1 specifying the prior probability of a variable to be included in
the model.
covar logical indicating if BVS should also be applied to the error covariance matrix.
exclude_det logical indicating if deterministic terms should be excluded from the BVS algorithm.
minnesota a numeric vector of length 4 containing parameters for the calculation of the Minnesota-
like inclusion priors. See below.
If either ssvs$minnesota or bvs$minnesota is specified, prior inclusion probabilities are
calculated in a Minnesota-like fashion as

    κ1 / l          for own lags of endogenous variables,
    κ2 / l          for other endogenous variables,
    κ3 / (1 + l)    for exogenous variables,
    κ4              for deterministic variables,

for lag l with κ1, κ2, κ3, κ4 as the first, second, third and fourth element in ssvs$minnesota or
bvs$minnesota, respectively.
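A BVS specification with Minnesota-like inclusion priors could then look as follows; the four kappa values are illustrative:

```r
data("e1")
e1 <- diff(log(e1)) * 100
model <- gen_var(e1, p = 2, deterministic = "const",
                 iterations = 100, burnin = 10)

# BVS with Minnesota-like inclusion probabilities (kappa_1, ..., kappa_4)
model <- add_priors(model,
                    bvs = list(inprior = 0.5,
                               minnesota = c(0.8, 0.5, 0.5, 0.8)))
```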
Value
A list of models.
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
<NAME>., <NAME>., & <NAME>. (2008). Bayesian stochastic search for VAR model restrictions.
Journal of Econometrics, 142(1), 553–580. doi:10.1016/j.jeconom.2007.08.017
<NAME>. (2013). VAR forecasting using Bayesian variable selection. Journal of Applied Econo-
metrics, 28(2), 204–230. doi:10.1002/jae.1271
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Examples
data("e1")
e1 <- diff(log(e1)) * 100
model <- gen_var(e1, p = 2, deterministic = 2,
iterations = 100, burnin = 10)
model <- add_priors(model)
add_priors.bvecmodel Add Priors for Vector Error Correction Models
Description
Adds prior specifications to a list of models, which was produced by function gen_vec.
Usage
## S3 method for class 'bvecmodel'
add_priors(
object,
coef = list(v_i = 1, v_i_det = 0.1, shape = 3, rate = 1e-04, rate_det = 0.01),
coint = list(v_i = 0, p_tau_i = 1, shape = 3, rate = 1e-04, rho = 0.999),
sigma = list(df = "k", scale = 1, mu = 0, v_i = 0.01, sigma_h = 0.05),
ssvs = NULL,
bvs = NULL,
...
)
Arguments
object a list, usually, the output of a call to gen_vec.
coef a named list of prior specifications for coefficients that do not determine the
cointegration space. For the default specification all prior means are set to zero
and the diagonal elements of the inverse prior variance-covariance matrix are set
to 1 for coefficients corresponding to non-deterministic terms. For deterministic
coefficients the prior variances are set to 10 by v_i_det = 0.1. The variances
need to be specified as precisions, i.e. as inverses of the variances. For further
specifications such as the Minnesota prior see ’Details’.
coint a named list of prior specifications for coefficients determining the cointegration
space of VEC models. See ’Details’.
sigma a named list of prior specifications for the error variance-covariance matrix of
the models. For the default specification of an inverse Wishart distribution the
prior degrees of freedom are set to the number of endogenous variables plus the
rank of the cointegration matrix. The prior variance is set to 1. See ’Details’.
ssvs optional; a named list of prior specifications for the SSVS algorithm. Not al-
lowed for TVP models. See ’Details’.
bvs optional; a named list of prior specifications for the BVS algorithm. See ’De-
tails’.
... further arguments passed to or from other methods.
Details
The arguments of the function require named lists. Possible specifications are described in the
following. Note that it is important to specify the priors in the correct list. Otherwise, the provided
specification will be disregarded and default values will be used.
Argument coef contains the following elements
v_i a numeric specifying the prior precision of the coefficients. Default is 1.
v_i_det a numeric specifying the prior precision of coefficients that correspond to deterministic
terms. Default is 0.1.
const a character specifying the prior mean of coefficients, which correspond to the intercept. If
const = "mean", the means of the series of endogenous variables are used as prior means.
If const = "first", the first values of the series of endogenous variables are used as prior
means.
minnesota a list of length 4 containing parameters for the calculation of the Minnesota prior, where
the element names must be kappa0, kappa1, kappa2 and kappa3. For the endogenous variable
i the prior variance of the lth lag of regressor j is obtained as

    κ0 / l^2                             for own lags of endogenous variables,
    (κ0 κ1 / l^2) (σi^2 / σj^2)          for endogenous variables other than own lags,
    (κ0 κ2 / (l + 1)^2) (σi^2 / σj^2)    for exogenous variables,
    κ0 κ3 σi^2                           for deterministic terms,
where σi is the residual standard deviation of variable i of an unrestricted LS estimate. For
exogenous variables σi is the sample standard deviation.
The function only provides priors for the non-cointegration part of the model. However, the
residual standard errors σi are based on an unrestricted LS regression of the endogenous vari-
ables on the error correction term and the non-cointegration regressors.
max_var a numeric specifying the maximum prior variance that is allowed for non-deterministic
coefficients.
shape a numeric specifying the prior shape parameter of the error term of the state equation. Only
used for models with time varying parameters. Default is 3.
rate a numeric specifying the prior rate parameter of the error term of the state equation. Only
used for models with time varying parameters. Default is 0.0001.
rate_det a numeric specifying the prior rate parameter of the error term of the state equation for
coefficients, which correspond to deterministic terms. Only used for models with time varying
parameters. Default is 0.01.
If minnesota is specified, elements v_i and v_i_det are ignored.
Argument coint can contain the following elements:
v_i numeric between 0 and 1 specifying the shrinkage of the cointegration space prior. Default is
0.
p_tau_i numeric of the diagonal elements of the inverse prior matrix of the central location of the
cointegration space sp(β). Default is 1.
shape an integer specifying the prior degrees of freedom of the error term of the state equation of
the loading matrix α. Default is 3.
rate a numeric specifying the prior variance of the error term of the state equation of the loading
matrix α. Default is 0.0001.
rho a numeric specifying the autocorrelation coefficient of the state equation of β. It must be
smaller than 1. Default is 0.999. Note that in contrast to Koop et al. (2011) ρ is not drawn in
the Gibbs sampler of this package yet.
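A sketch of a coint specification for a VEC model; the values shown simply restate the defaults documented above:

```r
data("e6")
model <- gen_vec(e6, p = 4, r = 1, const = "unrestricted",
                 iterations = 100, burnin = 10)

# Prior on the cointegration space: shrinkage v_i, location precision p_tau_i
# and autocorrelation rho of the state equation of beta
model <- add_priors(model,
                    coint = list(v_i = 0, p_tau_i = 1, rho = 0.999))
```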
Argument sigma can contain the following elements:
df an integer or character specifying the prior degrees of freedom of the error term. Only used
if the prior is inverse Wishart. Default is "k", which indicates the number of endogenous
variables in the respective model. "k + 3" can be used to set the prior to the number of
endogenous variables plus 3. If an integer is provided, the degrees of freedom are set to this
value in all models. In all cases the rank r of the cointegration matrix is automatically added.
scale a numeric specifying the prior error variance of endogenous variables. Default is 1.
shape a numeric or character specifying the prior shape parameter of the error term. Only used
if the prior is inverse gamma or if time varying volatilities are estimated. For models with
constant volatility the default is "k", which indicates the number of endogenous variables in
the respective model. "k + 3" can be used to set the prior to the number of endogenous
variables plus 3. If a numeric is provided, the shape parameters are set to this value in all
models. For models with stochastic volatility this prior refers to the error variance of the
state equation.
rate a numeric specifying the prior rate parameter of the error term. Only used if the prior is
inverse gamma or if time varying volatilities are estimated. For models with stochastic
volatility this prior refers to the error variance of the state equation.
mu numeric of the prior mean of the initial state of the log-volatilities. Only used for models with
time varying volatility.
v_i numeric of the prior precision of the initial state of the log-volatilities. Only used for models
with time varying volatility.
sigma_h numeric of the initial draw for the variance of the log-volatilities. Only used for models
with time varying volatility.
covar logical indicating whether error covariances should be estimated. Only used in combina-
tion with an inverse gamma prior or stochastic volatility, for which shape and rate must be
specified.
df and scale must be specified for an inverse Wishart prior. shape and rate are required for an
inverse gamma prior. For structural models or models with stochastic volatility only a gamma prior
specification is allowed.
Argument ssvs can contain the following elements:
inprior a numeric between 0 and 1 specifying the prior probability of a variable to be included in
the model.
tau a numeric vector of two elements containing the prior standard errors of restricted variables
(τ0) as its first element and unrestricted variables (τ1) as its second.
semiautomatic a numeric vector of two elements containing the factors by which the standard
errors associated with an unconstrained least squares estimate of the model are multiplied to
obtain the prior standard errors of restricted (τ0) and unrestricted (τ1) variables, respectively.
This is the semiautomatic approach described in George et al. (2008).
covar logical indicating if SSVS should also be applied to the error covariance matrix as in George
et al. (2008).
exclude_det logical indicating if deterministic terms should be excluded from the SSVS algo-
rithm.
minnesota a numeric vector of length 4 containing parameters for the calculation of the Minnesota-
like inclusion priors. See below.
Either tau or semiautomatic must be specified.
The argument bvs can contain the following elements
inprior a numeric between 0 and 1 specifying the prior probability of a variable to be included in
the model.
covar logical indicating if BVS should also be applied to the error covariance matrix.
exclude_det logical indicating if deterministic terms should be excluded from the BVS algorithm.
minnesota a numeric vector of length 4 containing parameters for the calculation of the Minnesota-
like inclusion priors. See below.
If either ssvs$minnesota or bvs$minnesota is specified, prior inclusion probabilities are
calculated in a Minnesota-like fashion as

    κ1 / l          for own lags of endogenous variables,
    κ2 / l          for other endogenous variables,
    κ3 / (1 + l)    for exogenous variables,
    κ4              for deterministic variables,

for lag l with κ1, κ2, κ3, κ4 as the first, second, third and fourth element in ssvs$minnesota or
bvs$minnesota, respectively.
Value
A list of models.
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
<NAME>., <NAME>., & <NAME>. (2008). Bayesian stochastic search for VAR model restrictions.
Journal of Econometrics, 142(1), 553–580. doi:10.1016/j.jeconom.2007.08.017
<NAME>., <NAME>., & <NAME>. (2010). Efficient posterior simulation for coin-
tegrated models with priors on the cointegration space. Econometric Reviews, 29(2), 224–242.
doi:10.1080/07474930903382208
<NAME>., <NAME>., & <NAME>. (2011). Bayesian inference in a time varying coin-
tegration model. Journal of Econometrics, 165(2), 210–220. doi:10.1016/j.jeconom.2011.07.007
<NAME>. (2013). VAR forecasting using Bayesian variable selection. Journal of Applied Econo-
metrics, 28(2), 204–230. doi:10.1002/jae.1271
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Examples
# Get data
data("e6")
# Create model
model <- gen_vec(e6, p = 4, r = 1,
const = "unrestricted", seasonal = "unrestricted",
iterations = 100, burnin = 10)
# Chosen number of iterations and burnin should be much higher.
# Add priors
model <- add_priors(model)
add_priors.dfmodel Add Priors to Dynamic Factor Model
Description
Adds prior specifications to a list of models, which was produced by function gen_dfm.
Usage
## S3 method for class 'dfmodel'
add_priors(
object,
lambda = list(v_i = 0.01),
sigma_u = list(shape = 5, rate = 4),
a = list(v_i = 0.01),
sigma_v = list(shape = 5, rate = 4),
...
)
Arguments
object a list, usually, the output of a call to gen_dfm.
lambda a named list of prior specifications for the factor loadings in the measurement
equation. For the default specification the diagonal elements of the inverse prior
variance-covariance matrix are set to 0.01. The variances need to be specified as
precisions, i.e. as inverses of the variances.
sigma_u a named list of prior specifications for the error variance-covariance matrix. See
’Details’.
a a named list of prior specifications for the coefficients of the transition equa-
tion. For the default specification the diagonal elements of the inverse prior
variance-covariance matrix are set to 0.01. The variances need to be specified as
precisions, i.e. as inverses of the variances.
sigma_v a named list of prior specifications for the error variance-covariance matrix. See
’Details’.
... further arguments passed to or from other methods.
Details
Argument lambda can only contain the element v_i, which is a numeric specifying the prior preci-
sion of the loading factors of the measurement equation. Default is 0.01.
The function assumes an inverse gamma prior for the errors of the measurement equation. Argument
sigma_u can contain the following elements:
shape a numeric or character specifying the prior shape parameter of the error terms of the mea-
surement equation. Default is 5.
rate a numeric specifying the prior rate parameter of the error terms of the measurement equation.
Default is 4.
Argument a can only contain the element v_i, which is a numeric specifying the prior precision of
the coefficients of the transition equation. Default is 0.01.
The function assumes an inverse gamma prior for the errors of the transition equation. Argument
sigma_v can contain the following elements:
shape a numeric or character specifying the prior shape parameter of the error terms of the transi-
tion equation. Default is 5.
rate a numeric specifying the prior rate parameter of the error terms of the transition equation.
Default is 4.
Value
A list of models.
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
Lütkepohl, H. (2007). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Examples
# Load data
data("bem_dfmdata")
# Generate model data
model <- gen_dfm(x = bem_dfmdata, p = 1, n = 1,
iterations = 5000, burnin = 1000)
# Number of iterations and burnin should be much higher.
# Add prior specifications
model <- add_priors(model,
lambda = list(v_i = .01),
sigma_u = list(shape = 5, rate = 4),
a = list(v_i = .01),
sigma_v = list(shape = 5, rate = 4))
bem_dfmdata FRED-QD data
Description
The data set contains quarterly time series for 196 US macroeconomic variables from 1959Q3 to
2015Q3. It was produced from file "freddata_Q.csv" of the data sets associated with Chan, Koop,
Poirier and Tobias (2019). Raw data are available at
https://web.ics.purdue.edu/~jltobias/second_edition/Chapter18/code_for_exercise_4/freddata_Q.csv.
Usage
data("bem_dfmdata")
Format
A named time-series object with 225 rows and 196 variables. A detailed explanation of the variables
can be found in McCracken and Ng (2016).
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
<NAME>., & <NAME>. (2016). FRED-MD: A monthly database for macroeconomic research.
Journal of Business & Economic Statistics 34(4), 574-589. doi:10.1080/07350015.2015.1086655
bvar Bayesian Vector Autoregression Objects
Description
bvar is used to create objects of class "bvar".
A plot function for objects of class "bvar".
Forecasting a Bayesian VAR object of class "bvar" with credible bands.
Usage
bvar(
data = NULL,
exogen = NULL,
y,
x = NULL,
A0 = NULL,
A = NULL,
B = NULL,
C = NULL,
Sigma = NULL
)
## S3 method for class 'bvar'
plot(x, ci = 0.95, type = "hist", ...)
## S3 method for class 'bvar'
predict(object, ..., n.ahead = 10, new_x = NULL, new_d = NULL, ci = 0.95)
Arguments
data the original time-series object of endogenous variables.
exogen the original time-series object of unmodelled variables.
y a time-series object of endogenous variables with T observations, usually, a
result of a call to gen_var.
x an object of class "bvar", usually, a result of a call to draw_posterior.
A0 either a K^2 × S matrix of MCMC coefficient draws of structural parameters
or a named list, where element coeffs contains a K^2 × S matrix of MCMC
coefficient draws of structural parameters and element lambda contains the
corresponding draws of inclusion parameters in case variable selection
algorithms were employed. For time varying parameter models the coefficient
matrix must be TK^2 × S. Draws of the error covariance matrix of the state
equation can be provided as a K^2 × S matrix in an additional list element.
A either a pK^2 × S matrix of MCMC coefficient draws of lagged endogenous
variables or a named list, where element coeffs contains a pK^2 × S matrix of
MCMC coefficient draws of lagged endogenous variables and element lambda
contains the corresponding draws of inclusion parameters in case variable
selection algorithms were employed. For time varying parameter models the
coefficient matrix must be pTK^2 × S. Draws of the error covariance matrix
of the state equation can be provided as a pK^2 × S matrix in an additional
list element.
B either a ((1+s)MK) × S matrix of MCMC coefficient draws of unmodelled,
non-deterministic variables or a named list, where element coeffs contains a
((1+s)MK) × S matrix of MCMC coefficient draws of unmodelled,
non-deterministic variables and element lambda contains the corresponding
draws of inclusion parameters in case variable selection algorithms were
employed. For time varying parameter models the coefficient matrix must be
(1+s)TMK × S. Draws of the error covariance matrix of the state equation
can be provided as a (1+s)MK × S matrix in an additional list element.
C either a KN × S matrix of MCMC coefficient draws of deterministic terms
or a named list, where element coeffs contains a KN × S matrix of MCMC
coefficient draws of deterministic terms and element lambda contains the
corresponding draws of inclusion parameters in case variable selection
algorithms were employed. For time varying parameter models the coefficient
matrix must be TKN × S. Draws of the error covariance matrix of the state
equation can be provided as a KN × S matrix in an additional list element.
Sigma a K^2 × S matrix of MCMC draws for the error variance-covariance matrix or
a named list, where element coeffs contains a K^2 × S matrix of MCMC draws
for the error variance-covariance matrix and element lambda contains the
corresponding draws of inclusion parameters in case variable selection
algorithms were employed to the covariances. For time varying parameter
models the coefficient matrix must be TK^2 × S. Draws of the error covariance
matrix of the state equation can be provided as a K^2 × S matrix in an
additional list element.
ci a numeric between 0 and 1 specifying the probability mass covered by the cred-
ible intervals. Defaults to 0.95.
type either "hist" (default) for histograms, "trace" for a trace plot or "boxplot"
for a boxplot. Only used for parameter draws of constant coefficients.
... additional arguments.
object an object of class "bvar", usually, a result of a call to bvar or bvec_to_bvar.
n.ahead number of steps ahead at which to predict.
new_x an object of class ts of new non-deterministic, exogenous variables. The object
must have the same frequency as the time series in object[["x"]] and must
contain at least all necessary observations for the predicted period.
new_d a matrix of new deterministic variables. Must have n.ahead rows.
Details
For the VARX model

    A0 yt = Σ_{i=1}^{p} Ai yt−i + Σ_{i=0}^{s} Bi xt−i + C dt + ut
the function collects the S draws of a Gibbs sampler (after the burn-in phase) in a standardised
object, where yt is a K-dimensional vector of endogenous variables and A0 is a K × K matrix
of structural coefficients. Ai is a K × K coefficient matrix of lagged endogenous variables. xt
is an M-dimensional vector of unmodelled, non-deterministic variables and Bi its corresponding
coefficient matrix. dt is an N-dimensional vector of deterministic terms and C its corresponding
coefficient matrix. ut is an error term with ut ∼ N(0, Σu).
For time varying parameter and stochastic volatility models the respective coefficients and error
covariance matrix of the above model are assumed to be time varying, respectively.
The draws of the different coefficient matrices provided in A0, A, B, C and Sigma have to correspond
to the same MCMC iterations.
For the VAR model

    A0 yt = Σ_{i=1}^{p} Ai yt−i + Σ_{i=0}^{s} Bi xt−i + C dt + ut,

with ut ∼ N(0, Σu), the function produces n.ahead forecasts.
Value
An object of class "bvar" containing the following components, if specified:
data the original time-series object of endogenous variables.
exogen the original time-series object of unmodelled variables.
y a K × T matrix of endogenous variables.
x a (pK + (1+s)M + N) × T matrix of regressor variables.
A0 an S × K^2 "mcmc" object of coefficient draws of structural parameters. In case
of time varying parameters a list of such objects.
A0_lambda an S × K^2 "mcmc" object of inclusion parameters for structural parameters.
A0_sigma an S × K^2 "mcmc" object of the error covariance matrices of the structural
parameters in a model with time varying parameters.
A an S × pK^2 "mcmc" object of coefficient draws of lagged endogenous variables.
In case of time varying parameters a list of such objects.
A_lambda an S × pK^2 "mcmc" object of inclusion parameters for lagged endogenous
variables.
A_sigma an S × pK^2 "mcmc" object of the error covariance matrices of coefficients of
lagged endogenous variables in a model with time varying parameters.
B an S × ((1+s)MK) "mcmc" object of coefficient draws of unmodelled,
non-deterministic variables. In case of time varying parameters a list of such
objects.
B_lambda an S × ((1+s)MK) "mcmc" object of inclusion parameters for unmodelled,
non-deterministic variables.
B_sigma an S × ((1+s)MK) "mcmc" object of the error covariance matrices of
coefficients of unmodelled, non-deterministic variables in a model with time
varying parameters.
C an S × NK "mcmc" object of coefficient draws of deterministic terms. In case
of time varying parameters a list of such objects.
C_lambda an S × NK "mcmc" object of inclusion parameters for deterministic terms.
C_sigma an S × NK "mcmc" object of the error covariance matrices of coefficients of
deterministic terms in a model with time varying parameters.
Sigma an S × K^2 "mcmc" object of variance-covariance draws. In case of time
varying parameters a list of such objects.
Sigma_lambda an S × K^2 "mcmc" object of inclusion parameters for error covariances.
Sigma_sigma an S × K^2 "mcmc" object of the error covariance matrices of the coefficients
of the error covariance matrix of the measurement equation of a model with
time varying parameters.
specifications a list containing information on the model specification.
A time-series object of class "bvarprd".
References
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Examples
# Get data
data("e1")
e1 <- diff(log(e1))
e1 <- window(e1, end = c(1978, 4))
# Generate model data
data <- gen_var(e1, p = 2, deterministic = "const")
# Add priors
model <- add_priors(data,
coef = list(v_i = 0, v_i_det = 0),
sigma = list(df = 0, scale = .00001))
# Set RNG seed for reproducibility
set.seed(1234567)
iterations <- 400 # Number of iterations of the Gibbs sampler
# Chosen number of iterations and burnin should be much higher.
burnin <- 100 # Number of burn-in draws
draws <- iterations + burnin # Total number of MCMC draws
y <- t(model$data$Y)
x <- t(model$data$Z)
tt <- ncol(y) # Number of observations
k <- nrow(y) # Number of endogenous variables
m <- k * nrow(x) # Number of estimated coefficients
# Priors
a_mu_prior <- model$priors$coefficients$mu # Vector of prior parameter means
a_v_i_prior <- model$priors$coefficients$v_i # Inverse of the prior covariance matrix
u_sigma_df_prior <- model$priors$sigma$df # Prior degrees of freedom
u_sigma_scale_prior <- model$priors$sigma$scale # Prior covariance matrix
u_sigma_df_post <- tt + u_sigma_df_prior # Posterior degrees of freedom
# Initial values
u_sigma_i <- diag(1 / .00001, k)
# Data containers for posterior draws
draws_a <- matrix(NA, m, iterations)
draws_sigma <- matrix(NA, k^2, iterations)
# Start Gibbs sampler
for (draw in 1:draws) {
# Draw conditional mean parameters
a <- post_normal(y, x, u_sigma_i, a_mu_prior, a_v_i_prior)
# Draw variance-covariance matrix
u <- y - matrix(a, k) %*% x # Obtain residuals
u_sigma_scale_post <- solve(u_sigma_scale_prior + tcrossprod(u))
u_sigma_i <- matrix(rWishart(1, u_sigma_df_post, u_sigma_scale_post)[,, 1], k)
# Store draws
if (draw > burnin) {
draws_a[, draw - burnin] <- a
draws_sigma[, draw - burnin] <- solve(u_sigma_i)
}
}
# Generate bvar object
bvar_est <- bvar(y = model$data$Y, x = model$data$Z,
A = draws_a[1:18,], C = draws_a[19:21, ],
Sigma = draws_sigma)
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Generate model
model <- gen_var(e1, p = 1, deterministic = 2,
iterations = 100, burnin = 10)
# Chosen number of iterations and burn-in should be much higher.
# Add priors
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
# Plot draws
plot(object)
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
e1 <- window(e1, end = c(1978, 4))
# Generate model data
model <- gen_var(e1, p = 0, deterministic = "const",
iterations = 100, burnin = 10)
# Chosen number of iterations and burnin should be much higher.
# Add prior specifications
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
# Generate forecasts
bvar_pred <- predict(object, n.ahead = 10, new_d = rep(1, 10))
# Plot forecasts
plot(bvar_pred)
bvarpost Posterior Simulation for BVAR Models
Description
Produces draws from the posterior distributions of Bayesian VAR models.
Usage
bvarpost(object)
Arguments
object an object of class "bvarmodel", usually, a result of a call to gen_var in combi-
nation with add_priors.
Details
The function implements commonly used posterior simulation algorithms for Bayesian VAR models
with both constant and time varying parameters (TVP) as well as stochastic volatility. It can produce
posterior draws for standard BVAR models with independent normal-Wishart priors, which can be
augmented by stochastic search variable selection (SSVS) as proposed by George et al. (2008) or
Bayesian variable selection (BVS) as proposed in Korobilis (2013). Both SSVS or BVS can also be
applied to the covariances of the error term.
The implementation follows the descriptions in Chan et al. (2019), George et al. (2008) and Koro-
bilis (2013). For all approaches the SUR form of a VAR model is used to obtain posterior draws.
The algorithm is implemented in C++ to reduce calculation time.
The function also supports structural BVAR models, where the structural coefficients are estimated
from contemporary endogenous variables, which corresponds to the so-called A-model. Currently,
only specifications are supported where the structural matrix contains ones on its diagonal and all
lower triangular elements are freely estimated. Since posterior draws are obtained based on the SUR
form of the VAR model, the structural coefficients are drawn jointly with the other coefficients.
Value
An object of class "bvar".
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
<NAME>., <NAME>., & <NAME>. (2008). Bayesian stochastic search for VAR model restrictions.
Journal of Econometrics, 142(1), 553–580. doi:10.1016/j.jeconom.2007.08.017
Korobilis, D. (2013). VAR forecasting using Bayesian variable selection. Journal of Applied Econo-
metrics, 28(2), 204–230. doi:10.1002/jae.1271
Examples
# Get data
data("e1")
e1 <- diff(log(e1)) * 100
# Create model
model <- gen_var(e1, p = 2, deterministic = "const",
iterations = 50, burnin = 10)
# Number of iterations and burnin should be much higher.
# Add priors
model <- add_priors(model)
# Obtain posterior draws
object <- bvarpost(model)
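The estimation output can be inspected with the package's methods for class "bvar". A minimal sketch of a typical next step, assuming the object produced by the example above:

```r
# Sketch: inspect posterior draws of the estimated BVAR model.
# Assumes `object` is the "bvar" object returned by bvarpost() above.
plot(object)    # histograms of the stored posterior draws
summary(object) # posterior means and credible intervals
```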
bvec Bayesian Vector Error Correction Objects
Description
bvec is used to create objects of class "bvec".
A plot function for objects of class "bvec".
Usage
bvec(
y,
alpha = NULL,
beta = NULL,
beta_x = NULL,
beta_d = NULL,
r = NULL,
Pi = NULL,
Pi_x = NULL,
Pi_d = NULL,
w = NULL,
w_x = NULL,
w_d = NULL,
Gamma = NULL,
Upsilon = NULL,
C = NULL,
x = NULL,
x_x = NULL,
x_d = NULL,
A0 = NULL,
Sigma = NULL,
data = NULL,
exogen = NULL
)
## S3 method for class 'bvec'
plot(x, ci = 0.95, type = "hist", ...)
Arguments
y a time-series object of differenced endogenous variables, usually, a result of a
call to gen_vec.
alpha a Kr × S matrix of MCMC coefficient draws of the loading matrix α.
beta a Kr × S matrix of MCMC coefficient draws of cointegration matrix β corresponding to the endogenous variables of the model.
beta_x an Mr × S matrix of MCMC coefficient draws of cointegration matrix β corresponding to unmodelled, non-deterministic variables.
beta_d an N^R r × S matrix of MCMC coefficient draws of cointegration matrix β corresponding to restricted deterministic terms.
r an integer of the rank of the cointegration matrix.
Pi a K^2 × S matrix of MCMC coefficient draws of endogenous variables in the cointegration matrix.
Pi_x a KM × S matrix of MCMC coefficient draws of unmodelled, non-deterministic variables in the cointegration matrix.
Pi_d a KN^R × S matrix of MCMC coefficient draws of restricted deterministic terms.
w a time-series object of lagged endogenous variables in levels, which enter the
cointegration term, usually, a result of a call to gen_vec.
w_x a time-series object of lagged unmodelled, non-deterministic variables in levels,
which enter the cointegration term, usually, a result of a call to gen_vec.
w_d a time-series object of deterministic terms, which enter the cointegration term,
usually, a result of a call to gen_vec.
Gamma a (p − 1)K^2 × S matrix of MCMC coefficient draws of differenced lagged endogenous variables or a named list, where element coeffs contains a (p − 1)K^2 × S matrix of MCMC coefficient draws of lagged differenced endogenous variables and element lambda contains the corresponding draws of inclusion parameters in case variable selection algorithms were employed.
Upsilon an sMK × S matrix of MCMC coefficient draws of differenced unmodelled, non-deterministic variables or a named list, where element coeffs contains an sMK × S matrix of MCMC coefficient draws of unmodelled, non-deterministic variables and element lambda contains the corresponding draws of inclusion parameters in case variable selection algorithms were employed.
C a KN^UR × S matrix of MCMC coefficient draws of unrestricted deterministic terms or a named list, where element coeffs contains a KN^UR × S matrix of MCMC coefficient draws of deterministic terms and element lambda contains the corresponding draws of inclusion parameters in case variable selection algorithms were employed.
x an object of class "bvec", usually, a result of a call to draw_posterior.
x_x a time-series object of Ms differenced unmodelled regressors.
x_d a time-series object of N^UR deterministic terms that do not enter the cointegration term.
A0 either a K^2 × S matrix of MCMC coefficient draws of structural parameters or a named list, where element coeffs contains a K^2 × S matrix of MCMC coefficient draws of structural parameters and element lambda contains the corresponding draws of inclusion parameters in case variable selection algorithms were employed.
Sigma a K^2 × S matrix of MCMC draws for the error variance-covariance matrix or a named list, where element coeffs contains a K^2 × S matrix of MCMC draws for the error variance-covariance matrix and element lambda contains the corresponding draws of inclusion parameters in case variable selection algorithms were employed to the covariances.
data the original time-series object of endogenous variables.
exogen the original time-series object of unmodelled variables.
ci interval used to calculate credible bands for time-varying parameters.
type either "hist" (default) for histograms, "trace" for a trace plot or "boxplot"
for a boxplot. Only used for parameter draws of constant coefficients.
... further graphical parameters.
Details
For the vector error correction model with unmodelled exogenous variables (VECX)

A_0 Δy_t = Π^+ [y_{t-1}', x_{t-1}', (d^R_{t-1})']' + Σ_{i=1}^{p-1} Γ_i Δy_{t-i} + Σ_{i=0}^{s-1} Υ_i Δx_{t-i} + C^{UR} d^{UR}_t + u_t

the function collects the S draws of a Gibbs sampler in a standardised object, where Δy_t is a K-dimensional vector of differenced endogenous variables and A_0 is a K × K matrix of structural coefficients. Π^+ = [Π, Π_x, Π_d] is the coefficient matrix of the error correction term, where y_{t-1}, x_{t-1} and d^R_{t-1} are the first lags of the endogenous variables, the exogenous variables in levels and the restricted deterministic terms, and Π, Π_x and Π_d are the corresponding coefficient matrices, respectively. Γ_i is a coefficient matrix of lagged differenced endogenous variables. Δx_t is an M-dimensional vector of unmodelled, non-deterministic variables and Υ_i its corresponding coefficient matrix. d^{UR}_t is an N^{UR}-dimensional vector of unrestricted deterministic terms and C^{UR} the corresponding coefficient matrix. u_t is an error term with u_t ∼ N(0, Σ_u).
For time varying parameter and stochastic volatility models the respective coefficients and the error covariance matrix of the above model are assumed to be time varying.
The draws of the different coefficient matrices provided in alpha, beta, Pi, Pi_x, Pi_d, A0, Gamma, Upsilon, C and Sigma have to correspond to the same MCMC iteration.
Value
An object of class "gvec" containing the following components, if specified:
data the original time-series object of endogenous variables.
exogen the original time-series object of unmodelled variables.
y a time-series object of differenced endogenous variables.
w a time-series object of lagged endogenous variables in levels, which enter the
cointegration term.
w_x a time-series object of lagged unmodelled, non-deterministic variables in levels,
which enter the cointegration term.
w_d a time-series object of deterministic terms, which enter the cointegration term.
x a time-series object of K(p − 1) differenced endogenous variables.
x_x a time-series object of Ms differenced unmodelled regressors.
x_d a time-series object of N^UR deterministic terms that do not enter the cointegration term.
A0 an S × K^2 "mcmc" object of coefficient draws of structural parameters. In case of time varying parameters a list of such objects.
A0_lambda an S × K^2 "mcmc" object of inclusion parameters for coefficients corresponding to structural parameters.
A0_sigma an S × K^2 "mcmc" object of the error covariance matrices of the structural parameters in a model with time varying parameters.
alpha an S × Kr "mcmc" object of coefficient draws of loading parameters. In case of time varying parameters a list of such objects.
beta an S × (K + M + N^R)r "mcmc" object of coefficient draws of cointegration parameters corresponding to the endogenous variables of the model. In case of time varying parameters a list of such objects.
beta_x an S × KM "mcmc" object of coefficient draws of cointegration parameters corresponding to unmodelled, non-deterministic variables. In case of time varying parameters a list of such objects.
beta_d an S × KN^R "mcmc" object of coefficient draws of cointegration parameters corresponding to restricted deterministic variables. In case of time varying parameters a list of such objects.
Pi an S × K^2 "mcmc" object of coefficient draws of endogenous variables in the cointegration matrix. In case of time varying parameters a list of such objects.
Pi_x an S × KM "mcmc" object of coefficient draws of unmodelled, non-deterministic variables in the cointegration matrix. In case of time varying parameters a list of such objects.
Pi_d an S × KN^R "mcmc" object of coefficient draws of restricted deterministic variables in the cointegration matrix. In case of time varying parameters a list of such objects.
Gamma an S × (p − 1)K^2 "mcmc" object of coefficient draws of differenced lagged endogenous variables. In case of time varying parameters a list of such objects.
Gamma_lambda an S × (p − 1)K^2 "mcmc" object of inclusion parameters for coefficients corresponding to differenced lagged endogenous variables.
Gamma_sigma an S × (p − 1)K^2 "mcmc" object of the error covariance matrices of the coefficients of lagged endogenous variables in a model with time varying parameters.
Upsilon an S × sMK "mcmc" object of coefficient draws of differenced unmodelled, non-deterministic variables. In case of time varying parameters a list of such objects.
Upsilon_lambda an S × sMK "mcmc" object of inclusion parameters for coefficients corresponding to differenced unmodelled, non-deterministic variables.
Upsilon_sigma an S × sMK "mcmc" object of the error covariance matrices of the coefficients of unmodelled, non-deterministic variables in a model with time varying parameters.
C an S × KN^UR "mcmc" object of coefficient draws of deterministic terms that do not enter the cointegration term. In case of time varying parameters a list of such objects.
C_lambda an S × KN^UR "mcmc" object of inclusion parameters for coefficients corresponding to deterministic terms that do not enter the cointegration term.
C_sigma an S × KN^UR "mcmc" object of the error covariance matrices of the coefficients of deterministic terms, which do not enter the cointegration term, in a model with time varying parameters.
Sigma an S × K^2 "mcmc" object of variance-covariance draws. In case of time varying parameters a list of such objects.
Sigma_lambda an S × K^2 "mcmc" object of inclusion parameters for the variance-covariance matrix.
Sigma_sigma an S × K^2 "mcmc" object of the error covariance matrices of the coefficients of the error covariance matrix of the measurement equation of a model with time varying parameters.
specifications a list containing information on the model specification.
Examples
# Load data
data("e6")
# Generate model
data <- gen_vec(e6, p = 4, r = 1, const = "unrestricted", seasonal = "unrestricted")
# Obtain data matrices
y <- t(data$data$Y)
w <- t(data$data$W)
x <- t(data$data$X)
# Reset random number generator for reproducibility
set.seed(1234567)
iterations <- 400 # Number of iterations of the Gibbs sampler
# Chosen number of iterations should be much higher, e.g. 30000.
burnin <- 100 # Number of burn-in draws
draws <- iterations + burnin
r <- 1 # Set rank
tt <- ncol(y) # Number of observations
k <- nrow(y) # Number of endogenous variables
k_w <- nrow(w) # Number of regressors in error correction term
k_x <- nrow(x) # Number of differenced regressors and unrestricted deterministic terms
k_alpha <- k * r # Number of elements in alpha
k_beta <- k_w * r # Number of elements in beta
k_gamma <- k * k_x
# Set uninformative priors
a_mu_prior <- matrix(0, k_x * k) # Vector of prior parameter means
a_v_i_prior <- diag(0, k_x * k) # Inverse of the prior covariance matrix
v_i <- 0
p_tau_i <- diag(1, k_w)
u_sigma_df_prior <- r # Prior degrees of freedom
u_sigma_scale_prior <- diag(0, k) # Prior covariance matrix
u_sigma_df_post <- tt + u_sigma_df_prior # Posterior degrees of freedom
# Initial values
beta <- matrix(c(1, -4), k_w, r)
u_sigma_i <- diag(1 / .0001, k)
g_i <- u_sigma_i
# Data containers
draws_alpha <- matrix(NA, k_alpha, iterations)
draws_beta <- matrix(NA, k_beta, iterations)
draws_pi <- matrix(NA, k * k_w, iterations)
draws_gamma <- matrix(NA, k_gamma, iterations)
draws_sigma <- matrix(NA, k^2, iterations)
# Start Gibbs sampler
for (draw in 1:draws) {
# Draw conditional mean parameters
temp <- post_coint_kls(y = y, beta = beta, w = w, x = x, sigma_i = u_sigma_i,
v_i = v_i, p_tau_i = p_tau_i, g_i = g_i,
gamma_mu_prior = a_mu_prior,
gamma_v_i_prior = a_v_i_prior)
alpha <- temp$alpha
beta <- temp$beta
Pi <- temp$Pi
gamma <- temp$Gamma
# Draw variance-covariance matrix
u <- y - Pi %*% w - matrix(gamma, k) %*% x
u_sigma_scale_post <- solve(tcrossprod(u) +
v_i * alpha %*% tcrossprod(crossprod(beta, p_tau_i) %*% beta, alpha))
u_sigma_i <- matrix(rWishart(1, u_sigma_df_post, u_sigma_scale_post)[,, 1], k)
u_sigma <- solve(u_sigma_i)
# Update g_i
g_i <- u_sigma_i
# Store draws
if (draw > burnin) {
draws_alpha[, draw - burnin] <- alpha
draws_beta[, draw - burnin] <- beta
draws_pi[, draw - burnin] <- Pi
draws_gamma[, draw - burnin] <- gamma
draws_sigma[, draw - burnin] <- u_sigma
}
}
# Number of non-deterministic coefficients
k_nondet <- (k_x - 4) * k
# Generate bvec object
bvec_est <- bvec(y = data$data$Y, w = data$data$W,
x = data$data$X[, 1:6],
x_d = data$data$X[, 7:10],
Pi = draws_pi,
Gamma = draws_gamma[1:k_nondet,],
C = draws_gamma[(k_nondet + 1):nrow(draws_gamma),],
Sigma = draws_sigma)
# Load data
data("e6")
# Generate model
model <- gen_vec(data = e6, p = 2, r = 1, const = "unrestricted",
iterations = 20, burnin = 10)
# Chosen number of iterations and burn-in should be much higher.
# Add priors
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
# Plot draws
plot(object)
bvecpost Posterior Simulation for BVEC Models
Description
Produces draws from the posterior distributions of Bayesian VEC models.
Usage
bvecpost(object)
Arguments
object an object of class "bvecmodel", usually, a result of a call to gen_vec in combi-
nation with add_priors.
Details
The function implements posterior simulation algorithms proposed in Koop et al. (2010) and Koop
et al. (2011), which place identifying restrictions on the cointegration space. Both algorithms are
able to employ Bayesian variable selection (BVS) as proposed in Korobilis (2013). The algorithm
of Koop et al. (2010) is also able to employ stochastic search variable selection (SSVS) as proposed
by George et al. (2008). Both SSVS and BVS can also be applied to the covariances of the error
term. However, the algorithms cannot be applied to cointegration related coefficients, i.e. to the
loading matrix α or the cointegration matrix β.
The implementation primarily follows the description in Koop et al. (2010). Chan et al. (2019),
George et al. (2008) and Korobilis (2013) were used to implement the variable selection algorithms.
For all approaches the SUR form of a VEC model is used to obtain posterior draws. The algorithm
is implemented in C++ to reduce calculation time.
The function also supports structural BVEC models, where the structural coefficients are estimated
from contemporary endogenous variables, which corresponds to the so-called A-model. Currently,
only specifications are supported where the structural matrix contains ones on its diagonal and all
lower triangular elements are freely estimated. Since posterior draws are obtained based on the SUR
form of the VEC model, the structural coefficients are drawn jointly with the other coefficients. No
identifying restrictions are made regarding the cointegration matrix.
Value
An object of class "bvec".
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
<NAME>., <NAME>., & <NAME>. (2008). Bayesian stochastic search for VAR model restrictions.
Journal of Econometrics, 142(1), 553–580. doi:10.1016/j.jeconom.2007.08.017
<NAME>., <NAME>., & <NAME>. (2010). Efficient posterior simulation for coin-
tegrated models with priors on the cointegration space. Econometric Reviews, 29(2), 224–242.
doi:10.1080/07474930903382208
<NAME>., <NAME>., & <NAME>. (2011). Bayesian inference in a time varying coin-
tegration model. Journal of Econometrics, 165(2), 210–220. doi:10.1016/j.jeconom.2011.07.007
<NAME>. (2013). VAR forecasting using Bayesian variable selection. Journal of Applied Econo-
metrics, 28(2), 204–230. doi:10.1002/jae.1271
Examples
# Get data
data("e6")
# Create model
model <- gen_vec(e6, p = 4, r = 1,
const = "unrestricted", seasonal = "unrestricted",
iterations = 100, burnin = 10)
# Chosen number of iterations and burnin should be much higher.
# Add priors
model <- add_priors(model)
# Obtain posterior draws
object <- bvecpost(model)
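The returned "bvec" object can be transformed to its VAR-in-levels representation with bvec_to_bvar. A hedged sketch continuing the example above:

```r
# Sketch: translate the VEC estimates into their VAR form.
# Assumes `object` is the "bvec" object returned by bvecpost() above.
bvar_form <- bvec_to_bvar(object)
plot(bvar_form) # inspect the draws in VAR representation
```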
bvec_to_bvar Transform a VEC Model to a VAR in Levels
Description
An object of class "bvec" is transformed to a VAR in level representation.
Usage
bvec_to_bvar(object)
Arguments
object an object of class "bvec".
Value
An object of class "bvar".
References
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Examples
# Load data
data("e6")
# Generate model
data <- gen_vec(e6, p = 4, r = 1, const = "unrestricted", seasonal = "unrestricted")
# Obtain data matrices
y <- t(data$data$Y)
w <- t(data$data$W)
x <- t(data$data$X)
# Reset random number generator for reproducibility
set.seed(1234567)
iterations <- 100 # Number of iterations of the Gibbs sampler
# Chosen number of iterations should be much higher, e.g. 30000.
burnin <- 100 # Number of burn-in draws
draws <- iterations + burnin
r <- 1 # Set rank
tt <- ncol(y) # Number of observations
k <- nrow(y) # Number of endogenous variables
k_w <- nrow(w) # Number of regressors in error correction term
k_x <- nrow(x) # Number of differenced regressors and unrestricted deterministic terms
k_alpha <- k * r # Number of elements in alpha
k_beta <- k_w * r # Number of elements in beta
k_gamma <- k * k_x
# Set uninformative priors
a_mu_prior <- matrix(0, k_x * k) # Vector of prior parameter means
a_v_i_prior <- diag(0, k_x * k) # Inverse of the prior covariance matrix
v_i <- 0
p_tau_i <- diag(1, k_w)
u_sigma_df_prior <- r # Prior degrees of freedom
u_sigma_scale_prior <- diag(0, k) # Prior covariance matrix
u_sigma_df_post <- tt + u_sigma_df_prior # Posterior degrees of freedom
# Initial values
beta <- matrix(c(1, -4), k_w, r)
u_sigma_i <- diag(1 / .0001, k)
g_i <- u_sigma_i
# Data containers
draws_alpha <- matrix(NA, k_alpha, iterations)
draws_beta <- matrix(NA, k_beta, iterations)
draws_pi <- matrix(NA, k * k_w, iterations)
draws_gamma <- matrix(NA, k_gamma, iterations)
draws_sigma <- matrix(NA, k^2, iterations)
# Start Gibbs sampler
for (draw in 1:draws) {
# Draw conditional mean parameters
temp <- post_coint_kls(y = y, beta = beta, w = w, x = x, sigma_i = u_sigma_i,
v_i = v_i, p_tau_i = p_tau_i, g_i = g_i,
gamma_mu_prior = a_mu_prior,
gamma_v_i_prior = a_v_i_prior)
alpha <- temp$alpha
beta <- temp$beta
Pi <- temp$Pi
gamma <- temp$Gamma
# Draw variance-covariance matrix
u <- y - Pi %*% w - matrix(gamma, k) %*% x
u_sigma_scale_post <- solve(tcrossprod(u) +
v_i * alpha %*% tcrossprod(crossprod(beta, p_tau_i) %*% beta, alpha))
u_sigma_i <- matrix(rWishart(1, u_sigma_df_post, u_sigma_scale_post)[,, 1], k)
u_sigma <- solve(u_sigma_i)
# Update g_i
g_i <- u_sigma_i
# Store draws
if (draw > burnin) {
draws_alpha[, draw - burnin] <- alpha
draws_beta[, draw - burnin] <- beta
draws_pi[, draw - burnin] <- Pi
draws_gamma[, draw - burnin] <- gamma
draws_sigma[, draw - burnin] <- u_sigma
}
}
# Number of non-deterministic coefficients
k_nondet <- (k_x - 4) * k
# Generate bvec object
bvec_est <- bvec(y = data$data$Y, w = data$data$W,
x = data$data$X[, 1:6],
x_d = data$data$X[, 7:10],
Pi = draws_pi,
Gamma = draws_gamma[1:k_nondet,],
C = draws_gamma[(k_nondet + 1):nrow(draws_gamma),],
Sigma = draws_sigma)
# Thin posterior draws
bvec_est <- thin(bvec_est, thin = 5)
# Transform VEC output to VAR output
bvar_form <- bvec_to_bvar(bvec_est)
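The VAR-form object can then be passed to methods for class "bvar", for example impulse response analysis. A sketch under the assumption that the e6 series are named "R" and "Dp":

```r
# Sketch: impulse responses from the VAR representation of the VEC model.
# Assumes `bvar_form` from above; "R" and "Dp" are assumed series names.
bvar_irf <- irf(bvar_form, impulse = "R", response = "Dp", n.ahead = 8)
plot(bvar_irf)
```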
bvs Bayesian Variable Selection
Description
bvs employs Bayesian variable selection as proposed by Korobilis (2013) to produce a vector of
inclusion parameters for the coefficient matrix of a VAR model.
Usage
bvs(y, z, a, lambda, sigma_i, prob_prior, include = NULL)
Arguments
y a K × T matrix of the endogenous variables.
z a KT × M matrix of explanatory variables.
a an M-dimensional vector of parameter draws. If time varying parameters are
used, an M × T coefficient matrix can be provided.
lambda an M × M inclusion matrix that should be updated.
sigma_i the inverse variance-covariance matrix. If the variance-covariance matrix is time
varying, a KT × K matrix can be provided.
prob_prior an M-dimensional vector of prior inclusion probabilities.
include an integer vector specifying the positions of variables, which should be included
in the BVS algorithm. If NULL (default), BVS will be applied to all variables.
Details
The function employs Bayesian variable selection as proposed by Korobilis (2013) to produce a
vector of inclusion parameters, which are the diagonal elements of the inclusion matrix Λ for the
VAR model

y_t = Z_t Λ a_t + u_t,

where u_t ∼ N(0, Σ_t). y_t is a K-dimensional vector of endogenous variables and Z_t = x_t' ⊗ I_K is a K × M matrix of regressors with x_t as a vector of regressors.
Value
A matrix of inclusion parameters on its diagonal.
References
<NAME>. (2013). VAR forecasting using Bayesian variable selection. Journal of Applied Econo-
metrics, 28(2), 204–230. doi:10.1002/jae.1271
Examples
# Load data
data("e1")
data <- diff(log(e1)) * 100
# Generate model data
temp <- gen_var(data, p = 2, deterministic = "const")
y <- t(temp$data$Y)
z <- temp$data$SUR
tt <- ncol(y)
m <- ncol(z)
# Priors
a_mu_prior <- matrix(0, m)
a_v_i_prior <- diag(0.1, m)
# Prior for inclusion parameter
prob_prior <- matrix(0.5, m)
# Initial value of Sigma
sigma <- tcrossprod(y) / tt
sigma_i <- solve(sigma)
lambda <- diag(1, m)
z_bvs <- z %*% lambda
a <- post_normal_sur(y = y, z = z_bvs, sigma_i = sigma_i,
a_prior = a_mu_prior, v_i_prior = a_v_i_prior)
lambda <- bvs(y = y, z = z, a = a, lambda = lambda,
sigma_i = sigma_i, prob_prior = prob_prior)
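In a complete sampler the diagonal of each drawn lambda matrix would be stored per iteration and averaged to obtain posterior inclusion probabilities. A base-R sketch of this post-processing step with simulated indicator draws (the repeated bvs calls are omitted; the dimensions follow the example above):

```r
# Illustration with simulated inclusion indicators: in an actual sampler
# each column would be diag(lambda) from one stored draw of bvs().
set.seed(1)
m <- 21       # number of coefficients, as in the example above
stored <- 100 # number of stored posterior draws (hypothetical)
draws_lambda <- matrix(rbinom(m * stored, 1, 0.5), m, stored)
# Posterior inclusion probability of each coefficient
incl_prob <- rowMeans(draws_lambda)
```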
dfm Bayesian Dynamic Factor Model Objects
Description
dfm is used to create objects of class "dfm".
A plot function for objects of class "dfm".
Usage
dfm(x, lambda = NULL, fac, sigma_u = NULL, a = NULL, sigma_v = NULL)
## S3 method for class 'dfm'
plot(x, ci = 0.95, ...)
Arguments
x an object of class "dfm", usually, a result of a call to dfm.
lambda an MN × S matrix of MCMC coefficient draws of factor loadings of the measurement equation.
fac an NT × S matrix of MCMC draws of the factors in the transition equation, where the first N rows correspond to the N factors in period 1 and the next N rows to the factors in period 2, etc.
sigma_u an M × S matrix of MCMC draws for the error variances of the measurement equation.
a a pN^2 × S matrix of MCMC coefficient draws of the transition equation.
sigma_v an N × S matrix of MCMC draws for the error variances of the transition equation.
ci interval used to calculate credible bands.
... further graphical parameters.
Details
The function produces a standardised object from S draws of a Gibbs sampler (after the burn-in
phase) for the dynamic factor model (DFM) with measurement equation

x_t = λ f_t + u_t,

where x_t is an M × 1 vector of observed variables, f_t is an N × 1 vector of unobserved factors and λ is the corresponding M × N matrix of factor loadings. u_t is an M × 1 error term.
The transition equation is

f_t = Σ_{i=1}^{p} A_i f_{t-i} + v_t,

where A_i is an N × N coefficient matrix and v_t is an N × 1 error term.
Value
An object of class "dfm" containing the following components, if specified:
x the standardised time-series object of observable variables.
lambda an S × MN "mcmc" object of draws of factor loadings of the measurement equation.
factor an S × NT "mcmc" object of draws of factors.
sigma_u an S × M "mcmc" object of variance draws of the measurement equation.
a an S × pN^2 "mcmc" object of coefficient draws of the transition equation.
sigma_v an S × N "mcmc" object of variance draws of the transition equation.
specifications a list containing information on the model specification.
Examples
# Load data
data("bem_dfmdata")
# Generate model data
model <- gen_dfm(x = bem_dfmdata, p = 1, n = 1,
iterations = 20, burnin = 10)
# Number of iterations and burnin should be much higher.
# Add prior specifications
model <- add_priors(model,
lambda = list(v_i = .01),
sigma_u = list(shape = 5, rate = 4),
a = list(v_i = .01),
sigma_v = list(shape = 5, rate = 4))
# Obtain posterior draws
object <- dfmpost(model)
# Load data
data("bem_dfmdata")
# Generate model data
model <- gen_dfm(x = bem_dfmdata, p = 1, n = 1,
iterations = 20, burnin = 10)
# Number of iterations and burnin should be much higher.
# Add prior specifications
model <- add_priors(model,
lambda = list(v_i = .01),
sigma_u = list(shape = 5, rate = 4),
a = list(v_i = .01),
sigma_v = list(shape = 5, rate = 4))
# Obtain posterior draws
object <- draw_posterior(model)
# Plot factors
plot(object)
dfmpost Posterior Simulation for Dynamic Factor Models
Description
Produces draws from the posterior distributions of Bayesian dynamic factor models.
Usage
dfmpost(object)
Arguments
object an object of class "dfmodel", usually, a result of a call to gen_dfm in combina-
tion with add_priors.
Details
The function implements the posterior simulation algorithm for Bayesian dynamic factor models.
The implementation follows the description in Chan et al. (2019) and C++ is used to reduce calcu-
lation time.
Value
An object of class "dfm".
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
Examples
# Load data
data("bem_dfmdata")
# Generate model data
model <- gen_dfm(x = bem_dfmdata, p = 1, n = 1,
iterations = 20, burnin = 10)
# Number of iterations and burnin should be much higher.
# Add prior specifications
model <- add_priors(model,
lambda = list(v_i = .01),
sigma_u = list(shape = 5, rate = 4),
a = list(v_i = .01),
sigma_v = list(shape = 5, rate = 4))
# Obtain posterior draws
object <- dfmpost(model)
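The factor draws contained in the result can be summarised directly. A hedged sketch, assuming `object` from the example above and the factor component listed in the Value section of dfm:

```r
# Sketch: posterior mean of the factor path from a "dfm" object.
# Assumes `object` from dfmpost() above; `factor` is an S x NT "mcmc"
# object, so column means give the posterior mean per factor and period.
fac_mean <- colMeans(object$factor)
ts.plot(fac_mean) # rough plot of the estimated factor
```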
draw_posterior Posterior Simulation
Description
Forwards model input to posterior simulation functions. This is a generic function.
Usage
draw_posterior(object, ...)
Arguments
object a list of model specifications. Usually, the output of a call to gen_var, gen_vec
or gen_dfm in combination with add_priors.
... arguments passed forward to method.
draw_posterior.bvarmodel
Posterior Simulation
Description
Forwards model input to posterior simulation functions.
Usage
## S3 method for class 'bvarmodel'
draw_posterior(object, FUN = NULL, mc.cores = NULL, ...)
Arguments
object a list of model specifications, which should be passed on to function FUN. Usu-
ally, the output of a call to gen_var in combination with add_priors.
FUN the function to be applied to each model in argument object. If NULL (default),
the internal function bvarpost is used.
mc.cores the number of cores to use, i.e. at most how many child processes will be run si-
multaneously. The option is initialized from environment variable MC_CORES
if set. Must be at least one, and parallelization requires at least two cores.
... further arguments passed to or from other methods.
Value
For multiple models a list of objects of class bvarlist. For a single model the object has the class
of the output of the applied posterior simulation function. In case the package’s own functions are
used, this will result in an object of class "bvar".
Examples
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Generate model
model <- gen_var(e1, p = 1:2, deterministic = 2,
iterations = 100, burnin = 10)
# Chosen number of iterations and burn-in should be much higher.
# Add priors
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
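Since p = 1:2 specifies two candidate lag orders, draw_posterior returns a "bvarlist" whose elements can be simulated in parallel and accessed individually. A sketch, assuming at least two cores are available (forking via mc.cores is not available on Windows):

```r
# Sketch: estimate both candidate models on two parallel processes and
# inspect the first element of the resulting "bvarlist".
object <- draw_posterior(model, mc.cores = 2)
plot(object[[1]]) # draws of the model with p = 1
```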
draw_posterior.bvecmodel
Posterior Simulation for Vector Error Correction Models
Description
Forwards model input to posterior simulation functions for vector error correction models.
Usage
## S3 method for class 'bvecmodel'
draw_posterior(object, FUN = NULL, mc.cores = NULL, ...)
Arguments
object a list of model specifications, which should be passed on to function FUN. Usu-
ally, the output of a call to gen_vec in combination with add_priors.
FUN the function to be applied to each list element in argument object. If NULL
(default), the internal function bvecpost is used.
mc.cores the number of cores to use, i.e. at most how many child processes will be run si-
multaneously. The option is initialized from environment variable MC_CORES
if set. Must be at least one, and parallelization requires at least two cores.
... further arguments passed to or from other methods.
Value
For multiple models a list of objects of class bvarlist. For a single model the object has the class
of the output of the applied posterior simulation function. In case the package’s own functions are
used, this will be "bvec".
References
<NAME>., <NAME>., & <NAME>. (2010). Efficient posterior simulation for coin-
tegrated models with priors on the cointegration space. Econometric Reviews, 29(2), 224–242.
doi:10.1080/07474930903382208
<NAME>., <NAME>., & <NAME>. (2011). Bayesian inference in a time varying coin-
tegration model. Journal of Econometrics, 165(2), 210–220. doi:10.1016/j.jeconom.2011.07.007
Examples
# Load data
data("e6")
e6 <- e6 * 100
# Generate model
model <- gen_vec(e6, p = 1, r = 1, const = "restricted",
iterations = 10, burnin = 10)
# Chosen number of iterations and burn-in should be much higher.
# Add priors
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
draw_posterior.dfmodel
Posterior Simulation
Description
Forwards model input to posterior simulation functions.
Usage
## S3 method for class 'dfmodel'
draw_posterior(object, FUN = NULL, mc.cores = NULL, ...)
Arguments
object a list of model specifications, which should be passed on to function FUN. Usu-
ally, the output of a call to gen_var, gen_vec or gen_dfm in combination with
add_priors.
FUN the function to be applied to each list element in argument object. If NULL
(default), the internal function bvarpost is used for VAR models, bvecpost for
VEC models and dfmpost for dynamic factor models.
mc.cores the number of cores to use, i.e. at most how many child processes will be run si-
multaneously. The option is initialized from environment variable MC_CORES
if set. Must be at least one, and parallelization requires at least two cores.
... further arguments passed to or from other methods.
Value
For multiple models a list of objects of class bvarlist. For a single model the object has the class
of the output of the applied posterior simulation function. In case the package’s own functions are
used, this will be "bvar", "bvec" or "dfm".
Examples
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Generate model
model <- gen_var(e1, p = 1:2, deterministic = 2,
iterations = 100, burnin = 10)
# Chosen number of iterations and burn-in should be much higher.
# Add priors
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
e1 West German economic time series data
Description
The data set contains quarterly, seasonally adjusted time series for West German fixed investment,
disposable income, and consumption expenditures in billions of DM from 1960Q1 to 1982Q4.
It was produced from file E1 of the data sets associated with Lütkepohl (2006). Raw data are
available at http://www.jmulti.de/download/datasets/e1.dat and were originally obtained
from Deutsche Bundesbank.
Usage
data("e1")
Format
A named time-series object with 92 rows and 3 variables:
invest fixed investment.
income disposable income.
cons consumption expenditures.
References
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
e6 German interest and inflation rate data
Description
The data set contains quarterly, seasonally unadjusted time series for German long-term interest
and inflation rates from 1972Q2 to 1998Q4. It was produced from file E6 of the data sets
associated with Lütkepohl (2006). Raw data are available at http://www.jmulti.de/download/
datasets/e6.dat and were originally obtained from Deutsche Bundesbank and Deutsches Institut
für Wirtschaftsforschung.
Usage
data("e6")
Format
A named time-series object with 107 rows and 2 variables:
R nominal long-term interest rate (Umlaufsrendite).
Dp ∆ log of GDP deflator.
Details
The data cover West Germany until 1990Q2 and all of Germany afterwards. The values refer to the
last month of a quarter.
References
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
fevd Forecast Error Variance Decomposition A generic function used to
calculate forecast error variance decompositions.
Description
A plot function for objects of class "bvarfevd".
Usage
fevd(object, ...)
## S3 method for class 'bvarfevd'
plot(x, ...)
Arguments
object an object of class "bvar".
... further graphical parameters.
x an object of class "bvarfevd", usually, a result of a call to fevd.
Examples
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Generate model data
model <- gen_var(e1, p = 2, deterministic = 2,
iterations = 100, burnin = 10)
# Chosen number of iterations and burnin should be much higher.
# Add prior specifications
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
# Obtain FEVD
vd <- fevd(object, response = "cons")
# Plot
plot(vd)
fevd.bvar Forecast Error Variance Decomposition
Description
Produces the forecast error variance decomposition of a Bayesian VAR model.
Usage
## S3 method for class 'bvar'
fevd(
object,
response = NULL,
n.ahead = 5,
type = "oir",
normalise_gir = FALSE,
period = NULL,
...
)
Arguments
object an object of class "bvar", usually, a result of a call to bvar or bvec_to_bvar.
response name of the response variable.
n.ahead number of steps ahead.
type type of the impulse responses used to calculate forecast error variance decompositions.
Possible choices are orthogonalised oir (default) and generalised gir
impulse responses.
normalise_gir logical. Should the GIR-based FEVD be normalised?
period integer. Index of the period, for which the variance decomposition should be
generated. Only used for TVP or SV models. Default is NULL, so that the poste-
rior draws of the last time period are used.
... further arguments passed to or from other methods.
Details
The function produces forecast error variance decompositions (FEVD) for the VAR model
X p
A0 yt = Ai yt−i + ut ,
with ut ∼ N (0, Σ). For non-structural models matrix A0 is set to the identiy matrix and can
therefore be omitted, where not relevant.
If the FEVD is based on the orthogonalised impulse resonse (OIR), the FEVD will be calculated as
OIR i=0 (ej Φi P ek )
ωjk,h = Ph−1 ,
where Φi is the forecast error impulse response for the ith period, P is the lower triangular Choleski
decomposition of the variance-covariance matrix Σ, ej is a selection vector for the response variable
and ek a selection vector for the impulse variable.
If type = "sir", the structural FEVD will be calculated as
Ph−1 0
ωjk,h = Ph−1i=0
SIR
,
i=0 (ej Φi A0 A0 Φi ej )
where σjj is the diagonal element of the jth variable of the variance covariance matrix.
If type = "gir", the generalised FEVD will be calculated as
−1 Ph−1 0 2
GIR
σjj i=0 (ej Φi Σek )
ωjk,h = Ph−1 ,
where σjj is the diagonal element of the jth variable of the variance covariance matrix.
If type = "sgir", the structural generalised FEVD will be calculated as
−1 Ph−1 0 −1 2
SGIR
σjj i=0 (ej Φi A0 Σek )
ωjk,h = Ph−1 −1 −10 0
i=0 (ej Φi A0 ΣA0 Φi ej )
.
Since GIR-based FEVDs do not add up to unity, they can be normalised by setting normalise_gir
= TRUE.
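The OIR-based formula can be reproduced in a few lines of base R. The sketch below is an illustration of the formula only (not the package's internal code) and uses made-up coefficient and covariance matrices for a bivariate VAR(1):

```r
# Hypothetical bivariate VAR(1): y_t = A1 y_{t-1} + u_t, u_t ~ N(0, Sigma)
K <- 2; h <- 5
A1 <- matrix(c(0.5, 0.2, 0.1, 0.4), K)   # made-up coefficients
Sigma <- matrix(c(1, 0.3, 0.3, 0.5), K)  # made-up error covariance
P <- t(chol(Sigma))                      # lower triangular Choleski factor
# Forecast error impulse responses Phi_0, ..., Phi_{h-1}
Phi <- array(0, dim = c(K, K, h))
Phi[, , 1] <- diag(K)
for (i in 2:h) Phi[, , i] <- Phi[, , i - 1] %*% A1
# OIR-based FEVD of response variable j w.r.t. shocks k = 1, ..., K
j <- 1
num <- rep(0, K); den <- 0
for (i in 1:h) {
  num <- num + as.vector(Phi[j, , i] %*% P)^2              # (e_j' Phi_i P e_k)^2
  den <- den + as.numeric(Phi[j, , i] %*% Sigma %*% Phi[j, , i])
}
fevd_share <- num / den
sum(fevd_share)
```

Because Σ = P P′, the denominator equals the sum of the numerators over all shocks, so the OIR-based shares add up to one; GIR-based shares generally do not, which is what the normalise_gir argument addresses.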
Value
A time-series object of class "bvarfevd".
References
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
<NAME>., & <NAME>. (1998). Generalized impulse response analysis in linear multivariate
models. Economics Letters, 58, 17-29.
Examples
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Generate models
model <- gen_var(e1, p = 2, deterministic = 2,
iterations = 100, burnin = 10)
# Add priors
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
# Obtain FEVD
vd <- fevd(object, response = "cons")
# Plot FEVD
plot(vd)
gen_dfm Dynamic Factor Model Input
Description
gen_dfm produces the input for the estimation of a dynamic factor model (DFM).
Usage
gen_dfm(x, p = 2, n = 1, iterations = 50000, burnin = 5000)
Arguments
x a time-series object of stationary endogenous variables.
p an integer vector of the lag order of the measurement equation. See ’Details’.
n an integer vector of the number of factors. See ’Details’.
iterations an integer of MCMC draws excluding burn-in draws (defaults to 50000).
burnin an integer of MCMC draws used to initialize the sampler (defaults to 5000).
These draws do not enter the computation of posterior moments, forecasts etc.
Details
The function produces the variable matrices of dynamic factor models (DFM) with measurement
equation

x_t = \lambda f_t + u_t,

where x_t is an M × 1 vector of observed variables, f_t is an N × 1 vector of unobserved factors and
\lambda is the corresponding M × N matrix of factor loadings. u_t is an M × 1 error term.
The transition equation is

f_t = \sum_{i=1}^{p} A_i f_{t-i} + v_t,

where A_i is an N × N coefficient matrix and v_t is an N × 1 error term.
If integer vectors are provided as arguments p or n, the function will produce a distinct model for
all possible combinations of those specifications.
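The expansion of integer-vector arguments into distinct models is a simple Cartesian product. Conceptually (this is an illustration, not gen_dfm's actual code):

```r
# All combinations of hypothetical lag orders p and factor counts n
specs <- expand.grid(p = 1:2, n = 1:3)
nrow(specs)  # 6 distinct model specifications
```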
Value
An object of class 'dfmodel', which contains the following elements:
data A list of data objects, which can be used for posterior simulation. Element X is a
time-series object of normalised observable variables, i.e. each column has zero
mean and unit variance.
model A list of model specifications.
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Bayesian Econometric Methods (2nd ed.).
Cambridge: University Press.
Lütkepohl, H. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Examples
# Load data
data("bem_dfmdata")
# Generate model data
model <- gen_dfm(x = bem_dfmdata, p = 1, n = 1,
iterations = 5000, burnin = 1000)
gen_var Vector Autoregressive Model Input
Description
gen_var produces the input for the estimation of a vector autoregressive (VAR) model.
Usage
gen_var(
data,
p = 2,
exogen = NULL,
s = NULL,
deterministic = "const",
seasonal = FALSE,
structural = FALSE,
tvp = FALSE,
sv = FALSE,
fcst = NULL,
iterations = 50000,
burnin = 5000
)
Arguments
data a time-series object of endogenous variables.
p an integer vector of the lag order (default is p = 2).
exogen an optional time-series object of external regressors.
s an optional integer vector of the lag order of the external regressors (default is s
= 2).
deterministic a character specifying which deterministic terms should be included. Available
values are "none", "const" (default) for an intercept, "trend" for a linear trend,
and "both" for an intercept with a linear trend.
seasonal logical. If TRUE, seasonal dummy variables are generated as additional
deterministic terms. The number of dummies depends on the frequency of the time-series
object provided in data.
structural logical indicating whether data should be prepared for the estimation of a struc-
tural VAR model.
tvp logical indicating whether the model parameters are time varying.
sv logical indicating whether time varying error variances should be estimated by
employing a stochastic volatility algorithm.
fcst integer. Number of observations saved for forecasting evaluation.
iterations an integer of MCMC draws excluding burn-in draws (defaults to 50000).
burnin an integer of MCMC draws used to initialize the sampler (defaults to 5000).
These draws do not enter the computation of posterior moments, forecasts etc.
Details
The function produces the data matrices for vector autoregressive (VAR) models, which can also
include unmodelled, non-deterministic variables:
A_0 y_t = \sum_{i=1}^{p} A_i y_{t-i} + \sum_{i=0}^{s} B_i x_{t-i} + C D_t + u_t,
where yt is a K-dimensional vector of endogenous variables, A0 is a K × K coefficient matrix of
contemporaneous endogenous variables, Ai is a K × K coefficient matrix of endogenous variables,
xt is an M-dimensional vector of exogenous regressors and Bi its corresponding K × M coefficient
matrix. Dt is an N-dimensional vector of deterministic terms and C its corresponding K × N
coefficient matrix. p is the lag order of endogenous variables, s is the lag order of exogenous
variables, and ut is an error term.
If an integer vector is provided as argument p or s, the function will produce a distinct model for all
possible combinations of those specifications.
If tvp is TRUE, the respective coefficients of the above model are assumed to be time varying. If sv
is TRUE, the error covariance matrix is assumed to be time varying.
Value
An object of class 'bvarmodel', which contains the following elements:
data A list of data objects, which can be used for posterior simulation. Element Y is
a time-series object of dependent variables. Element Z is a time-series object of
the regressors and element SUR is the corresponding matrix of regressors in SUR
form.
model A list of model specifications.
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Bayesian Econometric Methods (2nd ed.).
Cambridge: University Press.
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Examples
# Load data
data("e1")
e1 <- diff(log(e1))
# Generate model data
data <- gen_var(e1, p = 0:2, deterministic = "const")
gen_vec Vector Error Correction Model Input
Description
gen_vec produces the input for the estimation of a vector error correction (VEC) model.
Usage
gen_vec(
data,
p = 2,
exogen = NULL,
s = 2,
r = NULL,
const = NULL,
trend = NULL,
seasonal = NULL,
structural = FALSE,
tvp = FALSE,
sv = FALSE,
fcst = NULL,
iterations = 50000,
burnin = 5000
)
Arguments
data a time-series object of endogenous variables.
p an integer vector of the lag order of the series in the (levels) VAR. Thus, the
resulting model’s lag will be p − 1. See ’Details’.
exogen an optional time-series object of external regressors.
s an optional integer vector of the lag order of the exogenous variables of the
series in the (levels) VAR. Thus, the resulting model’s lag will be s − 1. See
’Details’.
r an integer vector of the cointegration rank. See ’Details’.
const a character specifying whether a constant term enters the error correction term
("restricted") or the non-cointegration term as an "unrestricted" variable.
If NULL (default) no constant term will be added.
trend a character specifying whether a trend term enters the error correction term
("restricted") or the non-cointegration term as an "unrestricted" variable.
If NULL (default) no trend term will be added.
seasonal a character specifying whether seasonal dummies should be included in the error
correction term ("restricted") or in the non-cointegration term as "unrestricted"
variables. If NULL (default) no seasonal terms will be added. The number of
dummy variables will be automatically detected and depends on the frequency
of the time-series object provided in data.
structural logical indicating whether data should be prepared for the estimation of a struc-
tural VAR model.
tvp logical indicating whether the model parameters are time varying.
sv logical indicating whether time varying error variances should be estimated by
employing a stochastic volatility algorithm.
fcst integer. Number of observations saved for forecasting evaluation.
iterations an integer of MCMC draws excluding burn-in draws (defaults to 50000).
burnin an integer of MCMC draws used to initialize the sampler (defaults to 5000).
These draws do not enter the computation of posterior moments, forecasts etc.
Details
The function produces the variable matrices of vector error correction (VEC) models, which can
also include exogenous variables:
\Delta y_t = \Pi w_t + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \sum_{i=0}^{s-1} \Upsilon_i \Delta x_{t-i} + C^{UR} d^{UR}_t + u_t,

where \Delta y_t is a K × 1 vector of differenced endogenous variables, w_t is a (K + M + N^R) × 1
vector of cointegration variables, \Pi is a K × (K + M + N^R) matrix of cointegration parameters,
\Gamma_i is a K × K coefficient matrix of endogenous variables, \Delta x_t is an M × 1 vector of differenced
exogenous regressors, \Upsilon_i is a K × M coefficient matrix of exogenous regressors, d^{UR}_t is an
N^{UR} × 1 vector of deterministic terms, and C^{UR} is a K × N^{UR} coefficient matrix of deterministic
terms that do not enter the cointegration term. p is the lag order of endogenous variables and s is the
lag order of exogenous variables of the corresponding VAR model. u_t is a K × 1 error term.
If an integer vector is provided as argument p, s or r, the function will produce a distinct model for
all possible combinations of those specifications.
If tvp is TRUE, the respective coefficients of the above model are assumed to be time varying. If sv
is TRUE, the error covariance matrix is assumed to be time varying.
Value
An object of class 'bvecmodel', which contains the following elements:
data A list of data objects, which can be used for posterior simulation. Element Y is
a time-series object of dependent variables. Element W is a time-series object
of variables in the cointegration term and element X is a time-series object of
variables that do not enter the cointegration term. Element SUR contains a matrix
of element X in its SUR form.
model A list of model specifications.
References
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Examples
# Load data
data("e6")
# Generate model data
data <- gen_vec(e6, p = 4, const = "unrestricted", seasonal = "unrestricted")
inclusion_prior Prior Inclusion Probabilities
Description
Prior inclusion probabilities as required for stochastic search variable selection (SSVS) à la George
et al. (2008) and Bayesian variable selection (BVS) à la Korobilis (2013).
Usage
inclusion_prior(
object,
prob = 0.5,
exclude_deterministics = TRUE,
minnesota_like = FALSE,
kappa = c(0.8, 0.5, 0.5, 0.8)
)
Arguments
object an object of class "bvarmodel", usually, a result of a call to gen_var or gen_vec.
prob a numeric specifying the prior inclusion probability of all model parameters.
exclude_deterministics
logical. If TRUE (default), the vector of the positions of included variables does
not include the positions of deterministic terms.
minnesota_like logical. If TRUE, the prior inclusion probabilities of the parameters are calculated
in a similar way as the Minnesota prior. See ’Details’.
kappa a numeric vector of four elements containing the prior inclusion probabilities of
coefficients that correspond to own lags of endogenous variables, to endogenous
variables, which do not correspond to own lags, to exogenous variables and
deterministic terms, respectively. Only used if minnesota_like = TRUE. See
’Details’.
Details
If minnesota_like = TRUE, prior inclusion probabilities \pi_1 are calculated as

\kappa_1 / r for own lags of endogenous variables,
\kappa_2 / r for other endogenous variables,
\kappa_3 / (1 + r) for exogenous variables,
\kappa_4 for deterministic variables,

for lag r with \kappa_1, \kappa_2, \kappa_3, \kappa_4 as the first, second, third and fourth element in kappa,
respectively.
For vector error correction models the function generates prior inclusion probabilities for
differenced variables and unrestricted deterministic terms as described above. For variables in the
error correction term prior inclusion probabilities are calculated as

\kappa_1 for own levels of endogenous variables,
\kappa_2 for levels of other endogenous variables,
\kappa_3 for levels of exogenous variables,
\kappa_4 for deterministic variables.
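A minimal sketch of the Minnesota-like probabilities for the VAR case, using the default kappa values (illustration only, not the package's implementation):

```r
kappa <- c(0.8, 0.5, 0.5, 0.8)  # default values of argument kappa
incl_prob <- function(r) {
  c(own_lag       = kappa[1] / r,        # own lags of endogenous variables
    other_endogen = kappa[2] / r,        # other endogenous variables
    exogenous     = kappa[3] / (1 + r),  # exogenous variables
    deterministic = kappa[4])            # deterministic terms
}
incl_prob(1)
incl_prob(2)  # probabilities of lagged variables shrink with the lag
```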
Value
A list containing a matrix of prior inclusion probabilities and an integer vector specifying the
positions of variables, which should be included in the variable selection algorithm.
References
<NAME>., <NAME>., & <NAME>. (2008). Bayesian stochastic search for VAR model restrictions.
Journal of Econometrics, 142(1), 553–580. doi:10.1016/j.jeconom.2007.08.017
<NAME>. (2013). VAR forecasting using Bayesian variable selection. Journal of Applied Econo-
metrics, 28(2), 204–230. doi:10.1002/jae.1271
Examples
# Prepare data
data("e1")
# Generate model input
object <- gen_var(e1)
# Obtain inclusion prior
pi_prior <- inclusion_prior(object)
irf Impulse Response Function A generic function used to calculate im-
pulse response functions.
Description
A plot function for objects of class "bvarirf".
Usage
irf(x, ...)
## S3 method for class 'bvarirf'
plot(x, ...)
Arguments
x an object of class "bvarirf", usually, a result of a call to irf.
... further graphical parameters.
Examples
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Generate model data
model <- gen_var(e1, p = 2, deterministic = 2,
iterations = 100, burnin = 10)
# Number of iterations and burnin should be much higher.
# Add prior specifications
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
# Calculate IR
ir <- irf(object, impulse = "invest", response = "cons")
# Plot IR
plot(ir)
irf.bvar Impulse Response Function
Description
Computes the impulse response coefficients of an object of class "bvar" for n.ahead steps.
Usage
## S3 method for class 'bvar'
irf(
x,
impulse = NULL,
response = NULL,
n.ahead = 5,
ci = 0.95,
shock = 1,
type = "feir",
cumulative = FALSE,
keep_draws = FALSE,
period = NULL,
...
)
Arguments
x an object of class "bvar", usually, a result of a call to bvar or bvec_to_bvar.
impulse name of the impulse variable.
response name of the response variable.
n.ahead number of steps ahead.
ci a numeric between 0 and 1 specifying the probability mass covered by the cred-
ible intervals. Defaults to 0.95.
shock size of the shock.
type type of the impulse response. Possible choices are forecast error "feir" (de-
fault), orthogonalised "oir", structural "sir", generalised "gir", and structural
generalised "sgir" impulse responses.
cumulative logical specifying whether a cumulative IRF should be calculated.
keep_draws logical specifying whether the function should return all draws of the posterior
impulse response function. Defaults to FALSE so that the median and the credible
intervals of the posterior draws are returned.
period integer. Index of the period, for which the IR should be generated. Only used
for TVP or SV models. Default is NULL, so that the posterior draws of the last
time period are used.
... further arguments passed to or from other methods.
Details
The function produces different types of impulse responses for the VAR model

A_0 y_t = \sum_{i=1}^{p} A_i y_{t-i} + u_t,

with u_t \sim N(0, \Sigma).
Forecast error impulse responses \Phi_i are obtained by the recursions

\Phi_i = \sum_{j=1}^{i} \Phi_{i-j} A_j, \quad i = 1, 2, ..., h,

with \Phi_0 = I_K.
Orthogonalised impulse responses \Theta^o_i are calculated as \Theta^o_i = \Phi_i P, where P is the lower triangular
Choleski decomposition of \Sigma.
Structural impulse responses \Theta^s_i are calculated as \Theta^s_i = \Phi_i A_0^{-1}.
(Structural) Generalised impulse responses for variable j, i.e. \Theta^g_{ji}, are calculated as
\Theta^g_{ji} = \sigma_{jj}^{-1/2} \Phi_i A_0^{-1} \Sigma e_j, where \sigma_{jj} is the variance of the jth diagonal element of \Sigma and
e_j is a selection vector containing one in its jth element and zero otherwise. If the "bvar" object
does not contain draws of A_0, it is assumed to be an identity matrix.
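The recursion for the forecast error impulse responses is easy to verify by hand. The following sketch uses made-up VAR(2) coefficient matrices (illustration only, not the package's code):

```r
# Hypothetical VAR(2) coefficient matrices (made up for illustration)
K <- 2
A <- list(matrix(c(0.5, 0.2, 0.1, 0.4), K),  # A_1
          matrix(c(0.1, 0.0, 0.0, 0.1), K))  # A_2
h <- 4
Phi <- vector("list", h + 1)
Phi[[1]] <- diag(K)  # Phi_0 = I_K
for (i in 1:h) {
  Phi[[i + 1]] <- matrix(0, K, K)
  for (j in 1:min(i, length(A))) {
    # Phi_i = sum over j of Phi_{i-j} A_j
    Phi[[i + 1]] <- Phi[[i + 1]] + Phi[[i - j + 1]] %*% A[[j]]
  }
}
Phi[[2]]  # equals A_1 by construction
Phi[[3]]  # equals A_1 %*% A_1 + A_2
```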
Value
A time-series object of class "bvarirf" and if keep_draws = TRUE a simple matrix.
References
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
<NAME>., <NAME>. (1998). Generalized impulse response analysis in linear multivariate mod-
els. Economics Letters, 58, 17-29.
Examples
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Generate model data
model <- gen_var(e1, p = 2, deterministic = 2,
iterations = 100, burnin = 10)
# Chosen number of iterations and burnin should be much higher.
# Add prior specifications
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
# Obtain IR
ir <- irf(object, impulse = "invest", response = "cons")
# Plot IR
plot(ir)
kalman_dk Durbin and Koopman Simulation Smoother
Description
An implementation of the Kalman filter and backward smoothing algorithm proposed by Durbin
and Koopman (2002).
Usage
kalman_dk(y, z, sigma_u, sigma_v, B, a_init, P_init)
Arguments
y a K × T matrix of endogenous variables.
z a KT × M matrix of explanatory variables.
sigma_u the constant K × K error variance-covariance matrix. For time varying variance-covariance
matrices a KT × K matrix can be specified.
sigma_v the constant M × M coefficient variance-covariance matrix. For time varying
variance-covariance matrices an MT × M matrix can be specified.
B an M × M autocorrelation matrix of the transition equation.
a_init an M-dimensional vector of initial states.
P_init an M × M variance-covariance matrix of the initial states.
Details
The function uses algorithm 2 from Durbin and Koopman (2002) to produce a draw of the state
vector at for t = 1, ..., T for a state space model with measurement equation
yt = Zt at + ut
and transition equation
at+1 = Bt at + vt ,
where u_t \sim N(0, \Sigma_{u,t}) and v_t \sim N(0, \Sigma_{v,t}). y_t is a K-dimensional vector of endogenous variables
and Z_t = z_t^\prime \otimes I_K is a K × M matrix of regressors with z_t as a vector of regressors.
The algorithm takes into account Jarociński (2015), where a possible misunderstanding in the
implementation of the algorithm of Durbin and Koopman (2002) is pointed out. Following that
note the function sets the mean of the initial state to zero in the first step of the algorithm.
Value
An M × (T + 1) matrix of state vector draws.
References
<NAME>., & <NAME>. (2002). A simple and efficient simulation smoother for state space
time series analysis. Biometrika, 89(3), 603–615.
<NAME>. (2015). A note on implementing the Durbin and Koopman simulation smoother.
Computational Statistics and Data Analysis, 91, 1–3. doi:10.1016/j.csda.2015.05.001
Examples
# Load data
data("e1")
data <- diff(log(e1))
# Generate model data
temp <- gen_var(data, p = 2, deterministic = "const")
y <- t(temp$data$Y)
z <- temp$data$SUR
k <- nrow(y)
tt <- ncol(y)
m <- ncol(z)
# Priors
a_mu_prior <- matrix(0, m)
a_v_i_prior <- diag(0.1, m)
a_Q <- diag(.0001, m)
# Initial value of Sigma
sigma <- tcrossprod(y) / tt
sigma_i <- solve(sigma)
# Initial values for Kalman filter
y_init <- y * 0
a_filter <- matrix(0, m, tt + 1)
# Initialise the Kalman filter
for (i in 1:tt) {
y_init[, i] <- y[, i] - z[(i - 1) * k + 1:k,] %*% a_filter[, i]
}
a_init <- post_normal_sur(y = y_init, z = z, sigma_i = sigma_i,
a_prior = a_mu_prior, v_i_prior = a_v_i_prior)
y_filter <- matrix(y) - z %*% a_init
y_filter <- matrix(y_filter, k) # Reshape
# Kalman filter and backward smoother
a_filter <- kalman_dk(y = y_filter, z = z, sigma_u = sigma,
sigma_v = a_Q, B = diag(1, m),
a_init = matrix(0, m), P_init = a_Q)
a <- a_filter + matrix(a_init, m, tt + 1)
loglik_normal Calculates the log-likelihood of a multivariate normal distribution.
Description
Calculates the log-likelihood of a multivariate normal distribution.
Usage
loglik_normal(u, sigma)
Arguments
u a K × T matrix of residuals.
sigma a K × K or KT × K variance-covariance matrix.
Details
The log-likelihood is calculated for each vector in period t as

-\frac{K}{2} \ln 2\pi - \frac{1}{2} \ln |\Sigma_t| - \frac{1}{2} u_t^\prime \Sigma_t^{-1} u_t,

where u_t = y_t - \mu_t.
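For a single period the formula can be evaluated directly with a hypothetical residual vector and covariance matrix (illustration only, not the package function):

```r
K <- 2
u_t <- c(0.5, -0.2)                      # hypothetical residual vector
Sigma <- matrix(c(1, 0.3, 0.3, 0.5), K)  # hypothetical covariance matrix
# -K/2 log(2 pi) - 1/2 log|Sigma| - 1/2 u' Sigma^{-1} u
ll_t <- -K / 2 * log(2 * pi) -
  0.5 * log(det(Sigma)) -
  0.5 * as.numeric(t(u_t) %*% solve(Sigma) %*% u_t)
ll_t
```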
Examples
# Load data
data("e1")
e1 <- diff(log(e1))
# Generate VAR model
data <- gen_var(e1, p = 2, deterministic = "const")
y <- t(data$data$Y)
x <- t(data$data$Z)
# LS estimate
ols <- tcrossprod(y, x) %*% solve(tcrossprod(x))
# Residuals
u <- y - ols %*% x # Residuals
# Covariance matrix
sigma <- tcrossprod(u) / ncol(u)
# Log-likelihood
loglik_normal(u = u, sigma = sigma)
minnesota_prior Minnesota Prior
Description
Calculates the Minnesota prior for a VAR model.
Usage
minnesota_prior(
object,
kappa0 = 2,
kappa1 = 0.5,
kappa2 = NULL,
kappa3 = 5,
max_var = NULL,
coint_var = FALSE,
sigma = "AR"
)
Arguments
object an object of class "bvarmodel", usually, a result of a call to gen_var or gen_vec.
kappa0 a numeric specifying the prior variance of coefficients that correspond to own
lags of endogenous variables.
kappa1 a numeric specifying the size of the prior variance of endogenous variables,
which do not correspond to own lags, relative to argument kappa0.
kappa2 a numeric specifying the size of the prior variance of non-deterministic exoge-
nous variables relative to argument kappa0. Default is NULL, which indicates
that the formula for the calculation of the prior variance of deterministic terms
is used for all exogenous variables.
kappa3 a numeric specifying the size of the prior variance of deterministic terms relative
to argument kappa0.
max_var a positive numeric specifying the maximum prior variance that is allowed for
coefficients of non-deterministic variables. If NULL (default), the prior variances
are not limited.
coint_var a logical specifying whether the model is a cointegrated VAR model, for which
the prior means of first own lags should be set to one.
sigma either "AR" (default) or "VAR" indicating that the variances of the endogenous
variables σ 2 are calculated based on a univariate AR regression or a least squares
estimate of the VAR form, respectively. In both cases all deterministic variables
are used in the regressions, if they appear in the model.
Details
The function calculates the Minnesota prior of a VAR model. For the endogenous variable i the
prior variance of the lth lag of regressor j is obtained as

\frac{\kappa_0}{l^2} for own lags of endogenous variables,

\frac{\kappa_0 \kappa_1}{l^2} \frac{\sigma_i^2}{\sigma_j^2} for endogenous variables other than own lags,

\frac{\kappa_0 \kappa_2}{(l + 1)^2} \frac{\sigma_i^2}{\sigma_j^2} for exogenous variables,

\kappa_0 \kappa_3 \sigma_i^2 for deterministic terms,

where \sigma_i is the residual standard deviation of variable i of an unrestricted LS estimate. For
exogenous variables \sigma_i is the sample standard deviation.
For VEC models the function only provides priors for the non-cointegration part of the model. The
residual standard errors σi are based on an unrestricted LS regression of the endogenous variables
on the error correction term and the non-cointegration regressors.
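The prior variances implied by these formulas can be computed directly for a hypothetical bivariate VAR(2) with a constant (illustration only; minnesota_prior performs the analogous computation for the supplied model object):

```r
K <- 2; p <- 2
kappa0 <- 2; kappa1 <- 0.5; kappa3 <- 5  # default hyperparameters
sigma <- c(0.9, 1.4)                     # hypothetical residual sd's
# Columns: K coefficients per lag, followed by the constant term
v_prior <- matrix(NA_real_, K, K * p + 1)
for (i in 1:K) {
  for (l in 1:p) {
    for (j in 1:K) {
      v_prior[i, (l - 1) * K + j] <- if (i == j) {
        kappa0 / l^2                                     # own lags
      } else {
        kappa0 * kappa1 / l^2 * sigma[i]^2 / sigma[j]^2  # other endogenous
      }
    }
  }
  v_prior[i, K * p + 1] <- kappa0 * kappa3 * sigma[i]^2  # constant term
}
v_prior
```

Each row i of v_prior holds the prior variances for the coefficients of equation i; the exogenous-variable case (kappa2) is omitted because this hypothetical model has no exogenous regressors.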
Value
A list containing a matrix of prior means and the precision matrix of the coefficients and the inverse
variance-covariance matrix of the error term, which was obtained by an LS estimation.
References
<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Bayesian Econometric Methods (2nd ed.).
Cambridge: University Press.
<NAME>. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Examples
# Load data
data("e1")
data <- diff(log(e1))
# Generate model input
object <- gen_var(data)
# Obtain Minnesota prior
prior <- minnesota_prior(object)
plot.bvarlist Plotting Posterior Draws of Bayesian VAR or VEC Models
Description
A plot function for objects of class "bvarlist".
Usage
## S3 method for class 'bvarlist'
plot(x, ci = 0.95, type = "hist", model = NULL, ...)
Arguments
x an object of class "bvarlist", usually, a result of a call to draw_posterior.
ci interval used to calculate credible bands for time-varying parameters.
type either "hist" (default) for histograms, "trace" for a trace plot, or "boxplot"
for a boxplot. Only used for parameter draws of constant coefficients.
model numeric or integer indicating for which models in argument "x" plots should be
produced.
... further graphical parameters.
plot.bvarprd Plotting Forecasts of BVAR Models
Description
A plot function for objects of class "bvarprd".
Usage
## S3 method for class 'bvarprd'
plot(x, n.pre = NULL, ...)
Arguments
x an object of class "bvarprd", usually, a result of a call to predict.bvar.
n.pre number of plotted observations that precede the forecasts. If NULL (default), all
available observations will be plotted.
... further graphical parameters.
Examples
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Generate model data
model <- gen_var(e1, p = 2, deterministic = 2,
iterations = 100, burnin = 10)
# Add prior specifications
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
# Calculate forecasts
pred <- predict(object, new_d = rep(1, 10))
# Plot forecasts
plot(pred)
post_coint_kls Posterior Draw for Cointegration Models
Description
Produces a draw of coefficients for cointegration models with a prior on the cointegration space as
proposed in Koop et al. (2010) and a draw of non-cointegration coefficients from a normal density.
Usage
post_coint_kls(
y,
beta,
w,
sigma_i,
v_i,
p_tau_i,
g_i,
x = NULL,
gamma_mu_prior = NULL,
gamma_v_i_prior = NULL
)
Arguments
y a K × T matrix of differenced endogenous variables.
beta an M × r cointegration matrix β.
w a M × T matrix of variables in the cointegration term.
sigma_i an inverse of the K × K variance-covariance matrix.
v_i a numeric between 0 and 1 specifying the shrinkage of the cointegration space
prior.
p_tau_i an inverted M × M matrix specifying the central location of the cointegration
space prior of sp(β).
g_i a K × K matrix.
x an N × T matrix of differenced regressors and unrestricted deterministic terms.
gamma_mu_prior a KN × 1 prior mean vector of non-cointegration coefficients.
gamma_v_i_prior
an inverted KN ×KN prior covariance matrix of non-cointegration coefficients.
Details
The function produces posterior draws of the coefficient matrices α, β and Γ for the model
yt = αβ 0 wt−1 + Γzt + ut ,
where yt is a K-dimensional vector of differenced endogenous variables. wt is an M × 1 vector
of variables in the cointegration term, which include lagged values of endogenous and exogenous
variables in levels and restricted deterministic terms. zt is an N-dimensional vector of differenced
endogenous and exogenous explanatory variabes as well as unrestricted deterministic terms. The
error term is ut ∼ Σ.
Draws of the loading matrix α are obtained using the prior on the cointegration space as proposed
in Koop et al. (2010). The posterior covariance matrix is
v −1 (β 0 Pτ−1 β) ⊗ G−1 + ZZ 0 ⊗ Σ−1
Vα =
and the posterior mean by
α = V α + vec(Σ−1 Y Z 0 ),
where Y is a K ×T matrix of differenced endogenous variables and Z = β 0 W with W as an M ×T
matrix of variables in the cointegration term.
For a given prior mean vector Γ and prior covariance matrix VΓ the posterior covariance matrix of
non-cointegration coefficients in Γ is obtained by
V̄Γ = [VΓ⁻¹ + XX′ ⊗ Σ⁻¹]⁻¹
and the posterior mean by
Γ̄ = V̄Γ (VΓ⁻¹Γ + vec(Σ⁻¹YΓX′)),
where YΓ = Y − αβ′W and X is an N × T matrix of explanatory variables, which do not enter the
cointegration term.
Draws of the cointegration matrix β are obtained using the prior on the cointegration space as
proposed in Koop et al. (2010). The posterior covariance matrix of the unrestricted cointegration
matrix B is
V̄B = [A′G⁻¹A ⊗ v⁻¹Pτ⁻¹ + A′Σ⁻¹A ⊗ WW′]⁻¹
and the posterior mean by
B̄ = V̄B vec(W YB′ Σ⁻¹ A),
where YB = Y − ΓX and A = α(α′α)^(−1/2).
The final draws of α and β are calculated as β = B(B′B)^(−1/2) and α = A(B′B)^(1/2).
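The normalisation in the final step can be illustrated in base R. The values of B and A below are hypothetical unnormalised draws, and the symmetric matrix square root is computed via an eigendecomposition:

```r
# Hypothetical unnormalised draws B (M x r) and A (K x r)
B <- matrix(c(1, -4), 2, 1)
A <- matrix(c(0.3, 0.1), 2, 1)

# Symmetric square root of a positive definite matrix
mat_sqrt <- function(M) {
  e <- eigen(M, symmetric = TRUE)
  e$vectors %*% diag(sqrt(e$values), nrow(M)) %*% t(e$vectors)
}

BB_sqrt <- mat_sqrt(crossprod(B))     # (B'B)^(1/2)
beta  <- B %*% solve(BB_sqrt)         # beta = B (B'B)^(-1/2)
alpha <- A %*% BB_sqrt                # alpha = A (B'B)^(1/2)

# beta is orthonormal and the product alpha beta' equals A B'
crossprod(beta)                           # identity matrix
all.equal(alpha %*% t(beta), A %*% t(B))  # TRUE
```

Because (B′B)^(1/2) is symmetric, the product αβ′ is unchanged by the normalisation.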
Value
A named list containing the following elements:
alpha a draw of the K × r loading matrix.
beta a draw of the M × r cointegration matrix.
Pi a draw of the K × M cointegration matrix Π = αβ′.
Gamma a draw of the K × N coefficient matrix for non-cointegration parameters.
References
Koop, G., León-González, R., & Strachan, R. W. (2010). Efficient posterior simulation for cointegrated
models with priors on the cointegration space. Econometric Reviews, 29(2), 224-242.
doi:10.1080/07474930903382208
Examples
# Load data
data("e6")
# Generate model data
temp <- gen_vec(e6, p = 1, r = 1)
y <- t(temp$data$Y)
ect <- t(temp$data$W)
k <- nrow(y) # Endogenous variables
tt <- ncol(y) # Number of observations
# Initial value of Sigma
sigma <- tcrossprod(y) / tt
sigma_i <- solve(sigma)
# Initial values of beta
beta <- matrix(c(1, -4), k)
# Draw parameters
coint <- post_coint_kls(y = y, beta = beta, w = ect, sigma_i = sigma_i,
v_i = 0, p_tau_i = diag(1, k), g_i = sigma_i)
post_coint_kls_sur Posterior Draw for Cointegration Models
Description
Produces a draw of coefficients for cointegration models in SUR form with a prior on the cointe-
gration space as proposed in Koop et al. (2010) and a draw of non-cointegration coefficients from a
normal density.
Usage
post_coint_kls_sur(
y,
beta,
w,
sigma_i,
v_i,
p_tau_i,
g_i,
x = NULL,
gamma_mu_prior = NULL,
gamma_v_i_prior = NULL,
svd = FALSE
)
Arguments
y a K × T matrix of differenced endogenous variables.
beta an M × r cointegration matrix β, where β′β = I.
w a M × T matrix of variables in the cointegration term.
sigma_i the inverse of the constant K × K error variance-covariance matrix. For time-
varying variance-covariance matrices a KT × K matrix can be provided.
v_i a numeric between 0 and 1 specifying the shrinkage of the cointegration space
prior.
p_tau_i an inverted M × M matrix specifying the central location of the cointegration
space prior of sp(β).
g_i a K × K or KT × K matrix. If the matrix is KT × K, the function will
automatically produce a K × K matrix containing the means of the time varying
K × K covariance matrix.
x a KT × NK matrix of differenced regressors and unrestricted deterministic
terms.
gamma_mu_prior a KN × 1 prior mean vector of non-cointegration coefficients.
gamma_v_i_prior
an inverted KN ×KN prior covariance matrix of non-cointegration coefficients.
svd logical. If TRUE the singular value decomposition is used to determine the root
of the posterior covariance matrix. Default is FALSE which means that the eigen-
value decomposition is used.
Details
The function produces posterior draws of the coefficient matrices α, β and Γ for the model
yt = αβ′wt−1 + Γzt + ut,
where yt is a K-dimensional vector of differenced endogenous variables. wt is an M × 1 vector
of variables in the cointegration term, which include lagged values of endogenous and exogenous
variables in levels and restricted deterministic terms. zt is an N-dimensional vector of differenced
endogenous and exogenous explanatory variables as well as unrestricted deterministic terms. The
error term is ut ∼ N(0, Σ).
Draws of the loading matrix α are obtained using the prior on the cointegration space as proposed
in Koop et al. (2010). The posterior covariance matrix is
V̄α = [v⁻¹(β′Pτ⁻¹β) ⊗ G⁻¹ + ZZ′ ⊗ Σ⁻¹]⁻¹
and the posterior mean by
ᾱ = V̄α vec(Σ⁻¹YZ′),
where Y is a K × T matrix of differenced endogenous variables and Z = β′W with W as an M × T
matrix of variables in the cointegration term.
For a given prior mean vector Γ and prior covariance matrix VΓ the posterior covariance matrix of
non-cointegration coefficients in Γ is obtained by
V̄Γ = [VΓ⁻¹ + XX′ ⊗ Σ⁻¹]⁻¹
and the posterior mean by
Γ̄ = V̄Γ (VΓ⁻¹Γ + vec(Σ⁻¹YΓX′)),
where YΓ = Y − αβ′W and X is an N × T matrix of explanatory variables, which do not enter the
cointegration term.
Draws of the cointegration matrix β are obtained using the prior on the cointegration space as
proposed in Koop et al. (2010). The posterior covariance matrix of the unrestricted cointegration
matrix B is
V̄B = [A′G⁻¹A ⊗ v⁻¹Pτ⁻¹ + A′Σ⁻¹A ⊗ WW′]⁻¹
and the posterior mean by
B̄ = V̄B vec(W YB′ Σ⁻¹ A),
where YB = Y − ΓX and A = α(α′α)^(−1/2).
The final draws of α and β are calculated as β = B(B′B)^(−1/2) and α = A(B′B)^(1/2).
Value
A named list containing the following elements:
alpha a draw of the K × r loading matrix.
beta a draw of the M × r cointegration matrix.
Pi a draw of the K × M cointegration matrix Π = αβ′.
Gamma a draw of the K × N coefficient matrix for non-cointegration parameters.
References
Koop, G., León-González, R., & Strachan, R. W. (2010). Efficient posterior simulation for cointegrated
models with priors on the cointegration space. Econometric Reviews, 29(2), 224-242.
doi:10.1080/07474930903382208
Examples
# Load data
data("e6")
# Generate model data
temp <- gen_vec(e6, p = 1, r = 1)
y <- t(temp$data$Y)
ect <- t(temp$data$W)
k <- nrow(y) # Endogenous variables
tt <- ncol(y) # Number of observations
# Initial value of Sigma
sigma <- tcrossprod(y) / tt
sigma_i <- solve(sigma)
# Initial values of beta
beta <- matrix(c(1, -4), k)
# Draw parameters
coint <- post_coint_kls_sur(y = y, beta = beta, w = ect,
sigma_i = sigma_i, v_i = 0, p_tau_i = diag(1, nrow(ect)),
g_i = sigma_i)
post_normal Posterior Draw from a Normal Distribution
Description
Produces a draw of coefficients from a normal posterior density.
Usage
post_normal(y, x, sigma_i, a_prior, v_i_prior)
Arguments
y a K × T matrix of endogenous variables.
x an M × T matrix of explanatory variables.
sigma_i the inverse of the K × K variance-covariance matrix.
a_prior a KM × 1 numeric vector of prior means.
v_i_prior the inverse of the KM × KM prior covariance matrix.
Details
The function produces a vectorised posterior draw a of the K × M coefficient matrix A for the
model
yt = Axt + ut ,
where yt is a K-dimensional vector of endogenous variables, xt is an M-dimensional vector of
explanatory variables and the error term is ut ∼ N(0, Σ).
For a given prior mean vector a and prior covariance matrix V the posterior covariance matrix is
obtained by
V̄ = [V⁻¹ + XX′ ⊗ Σ⁻¹]⁻¹
and the posterior mean by
ā = V̄ (V⁻¹a + vec(Σ⁻¹YX′)),
where Y is a K ×T matrix of the endogenous variables and X is an M ×T matrix of the explanatory
variables.
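As an illustration, the two formulas can be sketched in base R with hypothetical inputs; post_normal() itself performs this kind of computation in compiled code, so treat this as a sketch only:

```r
# Sketch of a posterior draw from the normal density; inputs are hypothetical.
set.seed(1)
k <- 2; m <- 3; tt <- 50
y <- matrix(rnorm(k * tt), k)     # K x T endogenous variables
x <- matrix(rnorm(m * tt), m)     # M x T explanatory variables
sigma_i <- diag(k)                # Sigma^(-1)
a_prior <- matrix(0, k * m)       # prior mean of vec(A)
v_i_prior <- diag(0.1, k * m)     # inverse prior covariance

# Posterior covariance: [V^(-1) + XX' (x) Sigma^(-1)]^(-1)
v_post <- solve(v_i_prior + tcrossprod(x) %x% sigma_i)
# Posterior mean: V (V^(-1) a + vec(Sigma^(-1) Y X'))
a_post <- v_post %*% (v_i_prior %*% a_prior + c(sigma_i %*% tcrossprod(y, x)))

# Draw vec(A) from N(a_post, v_post)
a_draw <- a_post + t(chol(v_post)) %*% rnorm(k * m)
```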
Value
A vector.
References
Lütkepohl, H. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.
Examples
# Load data
data("e1")
data <- diff(log(e1))
# Generate model data
temp <- gen_var(data, p = 2, deterministic = "const")
y <- t(temp$data$Y)
x <- t(temp$data$Z)
k <- nrow(y)
tt <- ncol(y)
m <- k * nrow(x)
# Priors
a_mu_prior <- matrix(0, m)
a_v_i_prior <- diag(0.1, m)
# Initial value of inverse Sigma
sigma_i <- solve(tcrossprod(y) / tt)
# Draw parameters
a <- post_normal(y = y, x = x, sigma_i = sigma_i,
a_prior = a_mu_prior, v_i_prior = a_v_i_prior)
post_normal_sur Posterior Draw from a Normal Distribution
Description
Produces a draw of coefficients from a normal posterior density for a model with seemingly unrelated
regressions (SUR).
Usage
post_normal_sur(y, z, sigma_i, a_prior, v_i_prior, svd = FALSE)
Arguments
y a K × T matrix of endogenous variables.
z a KT × M matrix of explanatory variables.
sigma_i the inverse of the constant K × K error variance-covariance matrix. For time-
varying variance-covariance matrices a KT × K matrix can be provided.
a_prior an M × 1 numeric vector of prior means.
v_i_prior the inverse of the M × M prior covariance matrix.
svd logical. If TRUE the singular value decomposition is used to determine the root
of the posterior covariance matrix. Default is FALSE which means that the eigen-
value decomposition is used.
Details
The function produces a posterior draw of the coefficient vector a for the model
yt = Zt a + ut,
where ut ∼ N(0, Σt). yt is a K-dimensional vector of endogenous variables and Zt = zt′ ⊗ IK is a
K × KM matrix of regressors with zt as a vector of regressors.
For a given prior mean vector a and prior covariance matrix V the posterior covariance matrix is
obtained by
V̄ = [V⁻¹ + ∑_{t=1}^T Zt′Σt⁻¹Zt]⁻¹
and the posterior mean by
ā = V̄ [V⁻¹a + ∑_{t=1}^T Zt′Σt⁻¹yt].
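The two sums can be sketched in base R with hypothetical inputs. With a constant Σt the loop could be collapsed into a single matrix product, but the loop form mirrors the formulas above:

```r
# Sketch of the SUR posterior moments; inputs are hypothetical.
set.seed(1)
k <- 2; m <- 4; tt <- 30
z <- matrix(rnorm(k * tt * m), k * tt, m)  # stacked Z_t (KT x M)
y <- matrix(rnorm(k * tt))                 # stacked y_t (KT x 1)
sigma_i <- diag(k)                         # constant Sigma_t^(-1)
a_prior <- matrix(0, m)
v_i_prior <- diag(0.1, m)

v_sum  <- matrix(0, m, m)
mu_sum <- matrix(0, m, 1)
for (t in seq_len(tt)) {
  pos <- (t - 1) * k + 1:k
  zt  <- z[pos, , drop = FALSE]            # Z_t (K x M)
  v_sum  <- v_sum  + t(zt) %*% sigma_i %*% zt
  mu_sum <- mu_sum + t(zt) %*% sigma_i %*% y[pos, , drop = FALSE]
}
v_post <- solve(v_i_prior + v_sum)                     # posterior covariance
a_post <- v_post %*% (v_i_prior %*% a_prior + mu_sum)  # posterior mean
a_draw <- a_post + t(chol(v_post)) %*% rnorm(m)
```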
Value
A vector.
Examples
# Load data
data("e1")
data <- diff(log(e1))
# Generate model data
temp <- gen_var(data, p = 2, deterministic = "const")
y <- t(temp$data$Y)
z <- temp$data$SUR
k <- nrow(y)
tt <- ncol(y)
m <- ncol(z)
# Priors
a_mu_prior <- matrix(0, m)
a_v_i_prior <- diag(0.1, m)
# Initial value of inverse Sigma
sigma_i <- solve(tcrossprod(y) / tt)
# Draw parameters
a <- post_normal_sur(y = y, z = z, sigma_i = sigma_i,
a_prior = a_mu_prior, v_i_prior = a_v_i_prior)
ssvs Stochastic Search Variable Selection
Description
ssvs employs stochastic search variable selection as proposed by George et al. (2008) to produce a
draw of the precision matrix of the coefficients in a VAR model.
Usage
ssvs(a, tau0, tau1, prob_prior, include = NULL)
Arguments
a an M-dimensional vector of coefficient draws.
tau0 an M-dimensional vector of prior standard deviations for restricted coefficients
in vector a.
tau1 an M-dimensional vector of prior standard deviations for unrestricted coeffi-
cients in vector a.
prob_prior an M-dimensional vector of prior inclusion probabilities for the coefficients in
vector a.
include an integer vector specifying the positions of coefficients in vector a, which
should be included in the SSVS algorithm. If NULL (default), SSVS will be
applied to all coefficients.
Details
The function employs stochastic search variable selection (SSVS) as proposed by George et al.
(2008) to produce a draw of the diagonal inverse prior covariance matrix V⁻¹ and the corresponding
vector of inclusion parameters λ of the vectorised coefficient matrix a = vec(A) for the VAR model
yt = Axt + ut,
where yt is a K-dimensional vector of endogenous variables, xt is a vector of explanatory variables
and the error term is ut ∼ N(0, Σ).
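The core of an SSVS step is a Bernoulli draw per coefficient that compares the two prior densities at the current coefficient value. A minimal sketch with hypothetical values (the package's ssvs() may differ in implementation details):

```r
# Sketch of one SSVS update; all values are hypothetical.
set.seed(1)
a <- c(0.01, 0.8, -0.03)    # current coefficient draws
tau0 <- rep(0.05, 3)        # prior sd if a coefficient is restricted
tau1 <- rep(10, 3)          # prior sd if a coefficient is unrestricted
prob_prior <- rep(0.5, 3)   # prior inclusion probabilities

# Posterior inclusion probability of each coefficient
u0 <- dnorm(a, 0, tau0) * (1 - prob_prior)
u1 <- dnorm(a, 0, tau1) * prob_prior
lambda <- as.numeric(runif(3) < u1 / (u0 + u1))  # 1 = included

# Updated inverse prior covariance matrix
tau <- ifelse(lambda == 1, tau1, tau0)
v_i <- diag(1 / tau^2)
lambda  # the large coefficient (0.8) is almost surely included
```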
Value
A named list containing two components:
v_i an M × M inverse prior covariance matrix.
lambda an M-dimensional vector of inclusion parameters.
References
George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions.
Journal of Econometrics, 142(1), 553–580. doi:10.1016/j.jeconom.2007.08.017
Examples
# Load data
data("e1")
data <- diff(log(e1))
# Generate model data
temp <- gen_var(data, p = 2, deterministic = "const")
y <- t(temp$data$Y)
x <- t(temp$data$Z)
k <- nrow(y)
tt <- ncol(y)
m <- k * nrow(x)
# Obtain SSVS priors using the semiautomatic approach
priors <- ssvs_prior(temp, semiautomatic = c(0.1, 10))
tau0 <- priors$tau0
tau1 <- priors$tau1
# Prior for inclusion parameter
prob_prior <- matrix(0.5, m)
# Priors
a_mu_prior <- matrix(0, m)
a_v_i_prior <- diag(c(tau1^2), m)
# Initial value of Sigma
sigma_i <- solve(tcrossprod(y) / tt)
# Draw parameters
a <- post_normal(y = y, x = x, sigma_i = sigma_i,
a_prior = a_mu_prior, v_i_prior = a_v_i_prior)
# Run SSVS
lambda <- ssvs(a = a, tau0 = tau0, tau1 = tau1,
prob_prior = prob_prior)
ssvs_prior Stochastic Search Variable Selection Prior
Description
Calculates the priors for a Bayesian VAR model, which employs stochastic search variable selection
(SSVS).
Usage
ssvs_prior(object, tau = c(0.05, 10), semiautomatic = NULL)
Arguments
object an object of class "bvarmodel", usually, a result of a call to gen_var or gen_vec.
tau a numeric vector of two elements containing the prior standard errors of re-
stricted variables (τ0 ) as its first element and unrestricted variables (τ1 ) as its
second. Default is c(0.05, 10).
semiautomatic an optional numeric vector of two elements containing the factors by which the
standard errors associated with an unconstrained least squares estimate of the
VAR model are multiplied to obtain the prior standard errors of restricted (τ0 )
and unrestricted (τ1 ) variables. This is the semiautomatic approach described in
George et al. (2008).
Value
A list containing the vectors of prior standard deviations for restricted and unrestricted variables,
respectively.
References
George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions.
Journal of Econometrics, 142(1), 553–580. doi:10.1016/j.jeconom.2007.08.017
Examples
# Prepare data
data("e1")
data <- diff(log(e1))
# Generate model input
object <- gen_var(data)
# Obtain SSVS prior
prior <- ssvs_prior(object, semiautomatic = c(.1, 10))
stochvol_ksc1998 Stochastic Volatility
Description
Produces a draw of log-volatilities.
Usage
stochvol_ksc1998(y, h, sigma, h_init, constant)
Arguments
y a T × 1 vector containing the time series.
h a T × 1 vector of log-volatilities.
sigma a numeric of the variance of the log-volatilities.
h_init a numeric of the initial state of log-volatilities.
constant a numeric of the constant that should be added to y² before taking the natural
logarithm. See ’Details’.
Details
The function produces a posterior draw of the log-volatility h for the model
yt = e^(ht/2) εt,
where εt ∼ N(0, 1) and ht is assumed to evolve according to a random walk
ht = ht−1 + ut,
with ut ∼ N(0, σ²).
The implementation follows the algorithm of Kim, Shephard and Chib (1998) and performs the
following steps:
1. Perform the transformation yt* = ln(yt² + constant).
2. Obtain a sample from the seven-component normal mixture approximating the log-χ²(1)
distribution.
3. Obtain a draw of log-volatilities.
The implementation follows the code provided on the website to the textbook by Chan, Koop,
Poirier, and Tobias (2019).
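Steps 1 and 2 can be sketched in base R. The mixture constants below are those commonly tabulated for the Kim, Shephard and Chib (1998) seven-component approximation; the snippet is an illustration with hypothetical data, not the package's exact implementation:

```r
# Sketch of steps 1-2 of the KSC algorithm; y and h are hypothetical.
set.seed(1)
y <- rnorm(100)
h <- rep(log(var(y)), 100)     # current log-volatilities
constant <- 0.0001

# Step 1: transform the data
ystar <- log(y^2 + constant)

# Seven-component mixture approximating the log-chi-squared(1) density
# (constants as tabulated in Kim, Shephard and Chib, 1998)
p  <- c(0.00730, 0.10556, 0.00002, 0.04395, 0.34001, 0.24566, 0.25750)
mu <- c(-10.12999, -3.97281, -8.56686, 2.77786, 0.61942,
        1.79518, -1.08819) - 1.2704
v  <- c(5.79596, 2.61369, 5.17950, 0.16735, 0.64009, 0.34023, 1.26261)

# Step 2: sample a mixture component for each observation, given h
s <- vapply(seq_along(ystar), function(t) {
  w <- p * dnorm(ystar[t], h[t] + mu, sqrt(v))
  sample.int(7, 1, prob = w)
}, integer(1))
```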
Value
A vector of log-volatility draws.
References
Chan, J., Koop, G., Poirier, D. J., & Tobias, J. L. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
Kim, S., Shephard, N., & Chib, S. (1998). Stochastic volatility: Likelihood inference and comparison
with ARCH models. Review of Economic Studies, 65(3), 361–393. doi:10.1111/1467-937X.00050
Examples
data("us_macrodata")
y <- us_macrodata[, "r"]
# Initialise log-volatilities
h_init <- log(var(y))
h <- rep(h_init, length(y))
# Obtain draw
stochvol_ksc1998(y - mean(y), h, .05, h_init, 0.0001)
stochvol_ocsn2007 Stochastic Volatility
Description
Produces a draw of log-volatilities based on Omori, Chib, Shephard and Nakajima (2007).
Usage
stochvol_ocsn2007(y, h, sigma, h_init, constant)
Arguments
y a T × 1 vector containing the time series.
h a T × 1 vector of log-volatilities.
sigma a numeric of the variance of the log-volatilities.
h_init a numeric of the initial state of log-volatilities.
constant a numeric of the constant that should be added to y² before taking the natural
logarithm. See ’Details’.
Details
The function produces a posterior draw of the log-volatility h for the model
yt = e^(ht/2) εt,
where εt ∼ N(0, 1) and ht is assumed to evolve according to a random walk
ht = ht−1 + ut,
with ut ∼ N(0, σ²).
The implementation follows the algorithm of Omori, Chib, Shephard and Nakajima (2007) and
performs the following steps:
1. Perform the transformation yt* = ln(yt² + constant).
2. Obtain a sample from the ten-component normal mixture approximating the log-χ²(1)
distribution.
3. Obtain a draw of log-volatilities.
Value
A vector of log-volatility draws.
References
Chan, J., Koop, G., Poirier, D. J., & Tobias, J. L. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
Omori, Y., Chib, S., Shephard, N., & Nakajima, J. (2007). Stochastic volatility with leverage:
Fast and efficient likelihood inference. Journal of Econometrics, 140(2), 425–449. doi:10.1016/
j.jeconom.2006.07.008
Examples
data("us_macrodata")
y <- us_macrodata[, "r"]
# Initialise log-volatilities
h_init <- log(var(y))
h <- rep(h_init, length(y))
# Obtain draw
stochvol_ocsn2007(y - mean(y), h, .05, h_init, 0.0001)
stoch_vol Stochastic Volatility
Description
Produces a draw of log-volatilities.
Usage
stoch_vol(y, h, sigma, h_init, constant)
Arguments
y a T × 1 vector containing the time series.
h a T × 1 vector of log-volatilities.
sigma a numeric of the variance of the log-volatilities.
h_init a numeric of the initial state of log-volatilities.
constant a numeric of the constant that should be added to y² before taking the natural
logarithm.
Details
The function is a wrapper for function stochvol_ksc1998.
Value
A vector of log-volatility draws.
References
Chan, J., Koop, G., Poirier, D. J., & Tobias, J. L. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
Kim, S., Shephard, N., & Chib, S. (1998). Stochastic volatility: Likelihood inference and comparison
with ARCH models. Review of Economic Studies, 65(3), 361–393. doi:10.1111/1467-937X.00050
Examples
data("us_macrodata")
y <- us_macrodata[, "r"]
# Initialise log-volatilities
h_init <- log(var(y))
h <- rep(h_init, length(y))
# Obtain draw
stoch_vol(y - mean(y), h, .05, h_init, 0.0001)
summary.bvar Summarising Bayesian VAR Coefficients
Description
summary method for class "bvar".
Usage
## S3 method for class 'bvar'
summary(object, ci = 0.95, period = NULL, ...)
## S3 method for class 'summary.bvar'
print(x, digits = max(3L, getOption("digits") - 3L), ...)
Arguments
object an object of class "bvar", usually, a result of a call to bvar or bvec_to_bvar.
ci a numeric between 0 and 1 specifying the probability of the credible band. De-
faults to 0.95.
period integer. Index of the period, for which the summary statistics should be gener-
ated. Only used for TVP or SV models. Default is NULL, so that the posterior
draws of the last time period are used.
... further arguments passed to or from other methods.
x an object of class "summary.bvar", usually, a result of a call to summary.bvar.
digits the number of significant digits to use when printing.
Value
summary.bvar returns a list of class "summary.bvar", which contains the following components:
coefficients A list of various summary statistics of the posterior draws of the VAR coeffi-
cients.
sigma A list of various summary statistics of the posterior draws of the variance-
covariance matrix.
specifications a list containing information on the model specification.
summary.bvarlist Summarising Bayesian VAR or VEC Models
Description
summary method for class "bvarlist".
Usage
## S3 method for class 'bvarlist'
summary(object, ...)
Arguments
object an object of class "bvarlist", usually, a result of a call to draw_posterior.
... further arguments passed to or from other methods.
Details
The log-likelihood for the calculation of the information criteria is obtained by
LL = (1/R) ∑_{i=1}^R ∑_{t=1}^T [ −(K/2) ln 2π − (1/2) ln|Σt(i)| − (1/2) ut(i)′ (Σt(i))⁻¹ ut(i) ],
where ut = yt − μt. The Akaike, Bayesian and Hannan–Quinn (HQ) information criteria are
calculated as
AIC = 2(Kp + Ms + N) − 2LL,
BIC = (Kp + Ms + N) ln(T) − 2LL
and
HQ = 2(Kp + Ms + N) ln(ln(T)) − 2LL,
respectively, where K is the number of endogenous variables, p the number of lags of endogenous
variables, M the number of exogenous variables, s the number of lags of exogenous variables, N
the number of deterministic terms and T the number of observations.
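Plugging hypothetical values into these formulas:

```r
# Hypothetical log-likelihood and model dimensions
LL <- -250
K <- 3; p <- 2    # endogenous variables and their lags
M <- 0; s <- 0    # exogenous variables and their lags
N <- 3            # deterministic terms
tt <- 73          # observations

n_par <- K * p + M * s + N
aic <- 2 * n_par - 2 * LL              # 518
bic <- n_par * log(tt) - 2 * LL
hq  <- 2 * n_par * log(log(tt)) - 2 * LL
```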
Value
summary.bvarlist returns a table of class "summary.bvarlist".
summary.bvec Summarising Bayesian VEC Coefficients
Description
summary method for class "bvec".
Usage
## S3 method for class 'bvec'
summary(object, ci = 0.95, period = NULL, ...)
## S3 method for class 'summary.bvec'
print(x, digits = max(3L, getOption("digits") - 3L), ...)
Arguments
object an object of class "bvec", usually, a result of a call to bvec.
ci a numeric between 0 and 1 specifying the probability of the credible band. De-
faults to 0.95.
period integer. Index of the period of a TVP VEC, for which a summary should be
generated. Only used for TVP models. Default is NULL so that only the most
recent time period is used.
... further arguments passed to or from other methods.
x an object of class "summary.bvec", usually, a result of a call to summary.bvec.
digits the number of significant digits to use when printing.
Value
summary.bvec returns a list of class "summary.bvec", which contains the following components:
coefficients A list of various summary statistics of the posterior draws of the VAR coeffi-
cients.
sigma A list of various summary statistics of the posterior draws of the variance-
covariance matrix.
specifications a list containing information on the model specification.
summary.dfm Summarising Bayesian Dynamic Factor Models
Description
summary method for class "dfm".
Usage
## S3 method for class 'dfm'
summary(object, ci = 0.95, ...)
Arguments
object an object of class "dfm", usually, a result of a call to dfm.
ci a numeric between 0 and 1 specifying the probability of the credible band. De-
faults to 0.95.
... further arguments passed to or from other methods.
Value
summary.dfm returns a list of class "summary.dfm", which contains the following components:
lambda A list of various summary statistics of the posterior draws of the factor loadings.
factor A list of various summary statistics of the posterior draws of the factors.
sigma_u A list of various summary statistics of the posterior draws of the variance matrix
of the measurement equation.
a A list of various summary statistics of the posterior draws of the coefficients of
the transition equation.
sigma_v A list of various summary statistics of the posterior draws of the variance matrix
of the transition equation.
specifications a list containing information on the model specification.
thin.bvar Thinning Posterior Draws
Description
Thins the MCMC posterior draws in an object of class "bvar".
Usage
## S3 method for class 'bvar'
thin(x, thin = 10, ...)
Arguments
x an object of class "bvar".
thin an integer specifying the thinning interval between successive values of posterior
draws.
... further arguments passed to or from other methods.
Value
An object of class "bvar".
Examples
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Obtain data matrices
model <- gen_var(e1, p = 2, deterministic = 2,
iterations = 100, burnin = 10)
# Chosen number of iterations and burn-in draws should be much higher.
# Add prior specifications
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
object <- thin(object)
thin.bvarlist Thinning Posterior Draws
Description
Thins the MCMC posterior draws in an object of class "bvarlist".
Usage
## S3 method for class 'bvarlist'
thin(x, thin = 10, ...)
Arguments
x an object of class "bvarlist".
thin an integer specifying the thinning interval between successive values of posterior
draws.
... further arguments passed to or from other methods.
Value
An object of class "bvarlist".
Examples
# Load data
data("e1")
e1 <- diff(log(e1)) * 100
# Generate multiple model matrices
model <- gen_var(e1, p = 1:2, deterministic = 2,
iterations = 100, burnin = 10)
# Add prior specifications
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
# Thin
object <- thin(object)
thin.bvec Thinning Posterior Draws
Description
Thins the MCMC posterior draws in an object of class "bvec".
Usage
## S3 method for class 'bvec'
thin(x, thin = 10, ...)
Arguments
x an object of class "bvec".
thin an integer specifying the thinning interval between successive values of posterior
draws.
... further arguments passed to or from other methods.
Value
An object of class "bvec".
Examples
# Load data
data("e6")
# Generate model data
model <- gen_vec(e6, p = 2, r = 1,
const = "unrestricted", seasonal = "unrestricted",
iterations = 100, burnin = 10)
# Add prior specifications
model <- add_priors(model)
# Obtain posterior draws
object <- draw_posterior(model)
# Thin
object <- thin(object)
thin.dfm Thinning Posterior Draws
Description
Thins the MCMC posterior draws in an object of class "dfm".
Usage
## S3 method for class 'dfm'
thin(x, thin = 10, ...)
Arguments
x an object of class "dfm".
thin an integer specifying the thinning interval between successive values of posterior
draws.
... further arguments passed to or from other methods.
Value
An object of class "dfm".
Examples
# Load data
data("bem_dfmdata")
# Generate model data
model <- gen_dfm(x = bem_dfmdata, p = 1, n = 1,
iterations = 20, burnin = 10)
# Number of iterations and burnin should be much higher.
# Add prior specifications
model <- add_priors(model,
lambda = list(v_i = .01),
sigma_u = list(shape = 5, rate = 4),
a = list(v_i = .01),
sigma_v = list(shape = 5, rate = 4))
# Obtain posterior draws
object <- draw_posterior(model)
# Plot factors
object <- thin(object, thin = 2)
us_macrodata US macroeconomic data
Description
The data set contains quarterly time series for the US CPI inflation rate, unemployment rate, and Fed
Funds rate from 1959Q2 to 2007Q4. It was produced from file "US_macrodata.csv" of the data sets
associated with Chan, Koop, Poirier and Tobias (2019). Raw data are available at
https://web.ics.purdue.edu/~jltobias/second_edition/Chapter20/code_for_exercise_1/US_macrodata.csv.
Usage
data("us_macrodata")
Format
A named time-series object with 195 rows and 3 variables:
Dp CPI inflation rate.
u unemployment rate.
r Fed Funds rate.
References
Chan, J., Koop, G., Poirier, D. J., & Tobias, J. L. (2019). Bayesian econometric methods (2nd ed.).
Cambridge: Cambridge University Press.
Package ‘DatabaseConnector’
September 7, 2023
Type Package
Title Connecting to Various Database Platforms
Version 6.2.4
Date 2023-09-07
Description An R 'DataBase Interface' ('DBI') compatible interface to various database platforms
('PostgreSQL', 'Oracle', 'Microsoft SQL Server', 'Amazon Redshift',
'Microsoft Parallel Database Warehouse', 'IBM Netezza', 'Apache Impala',
'Google BigQuery', 'Snowflake', 'Spark', and 'SQLite'). Also includes support for
fetching data as 'Andromeda' objects. Uses either 'Java Database Connectivity'
('JDBC') or other 'DBI' drivers to connect to databases.
SystemRequirements Java (>= 8)
Depends R (>= 4.0.0)
Imports rJava,
SqlRender (>= 1.15.2),
methods,
stringr,
readr,
rlang,
utils,
DBI (>= 1.0.0),
urltools,
bit64,
checkmate,
digest,
dbplyr (>= 2.2.0)
Suggests aws.s3,
R.utils,
withr,
testthat,
DBItest,
knitr,
rmarkdown,
RSQLite,
ssh,
Andromeda,
dplyr,
RPostgres,
odbc,
duckdb,
pool,
ParallelLogger
License Apache License
VignetteBuilder knitr
URL https://ohdsi.github.io/DatabaseConnector/, https:
//github.com/OHDSI/DatabaseConnector
BugReports https://github.com/OHDSI/DatabaseConnector/issues
Copyright See file COPYRIGHTS
RoxygenNote 7.2.3
Roxygen list(markdown = TRUE)
Encoding UTF-8
R topics documented:
assertTempEmulationSchemaSet
computeDataHash
connect
createConnectionDetails
createDbiConnectionDetails
createZipFile
DatabaseConnectorDriver
dateAdd
dateDiff
dateFromParts
day
dbAppendTable,DatabaseConnectorConnection,character-method
dbClearResult,DatabaseConnectorDbiResult-method
dbClearResult,DatabaseConnectorJdbcResult-method
dbColumnInfo,DatabaseConnectorDbiResult-method
dbColumnInfo,DatabaseConnectorJdbcResult-method
dbConnect,DatabaseConnectorDriver-method
dbCreateTable,DatabaseConnectorConnection-method
dbDisconnect,DatabaseConnectorConnection-method
dbExecute,DatabaseConnectorConnection,character-method
dbExistsTable,DatabaseConnectorConnection,character-method
dbFetch,DatabaseConnectorDbiResult-method
dbFetch,DatabaseConnectorJdbcResult-method
dbGetInfo,DatabaseConnectorConnection-method
dbGetInfo,DatabaseConnectorDriver-method
dbGetQuery,DatabaseConnectorConnection,character-method
dbGetRowCount,DatabaseConnectorDbiResult-method
dbGetRowCount,DatabaseConnectorJdbcResult-method
dbGetRowsAffected,DatabaseConnectorDbiResult-method
dbGetRowsAffected,DatabaseConnectorJdbcResult-method
dbGetStatement,DatabaseConnectorDbiResult-method
dbGetStatement,DatabaseConnectorJdbcResult-method
dbHasCompleted,DatabaseConnectorDbiResult-method
dbHasCompleted,DatabaseConnectorJdbcResult-method
dbIsValid,DatabaseConnectorDbiConnection-method
dbIsValid,DatabaseConnectorJdbcConnection-method
dbListFields,DatabaseConnectorConnection,character-method
dbListTables,DatabaseConnectorConnection-method
dbms
dbReadTable,DatabaseConnectorConnection,character-method
dbRemoveTable,DatabaseConnectorConnection,ANY-method
dbSendQuery,DatabaseConnectorDbiConnection,character-method
dbSendQuery,DatabaseConnectorJdbcConnection,character-method
dbSendStatement,DatabaseConnectorConnection,character-method
dbUnloadDriver,DatabaseConnectorDriver-method
dbWriteTable,DatabaseConnectorConnection,ANY-method
disconnect
downloadJdbcDrivers
dropEmulatedTempTables
eoMonth
executeSql
existsTable
extractQueryTimes
getAvailableJavaHeapSpace
getTableNames
inDatabaseSchema
insertTable
isSqlReservedWord
jdbcDrivers
lowLevelExecuteSql
lowLevelQuerySql
lowLevelQuerySqlToAndromeda
month
querySql
querySqlToAndromeda
renderTranslateExecuteSql
renderTranslateQueryApplyBatches
renderTranslateQuerySql
renderTranslateQuerySqlToAndromeda
requiresTempEmulation
year
assertTempEmulationSchemaSet
Assert the temp emulation schema is set
Description
Asserts the temp emulation schema is set for DBMSs requiring temp table emulation.
If you know your code uses temp tables, it is a good idea to call this function first, so it can throw
an informative error if the user forgot to set the temp emulation schema.
Usage
assertTempEmulationSchemaSet(
dbms,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema")
)
Arguments
dbms The type of DBMS running on the server. See connect() or createConnectionDetails()
for valid values.
tempEmulationSchema
The temp emulation schema specified by the user.
Value
Does not return anything. Throws an error if the DBMS requires temp emulation but the temp
emulation schema is not set.
computeDataHash Compute hash of data
Description
Compute a hash of the data in the database schema. If the data changes, this should produce a
different hash code. Specifically, the hash is based on the field names, field types, and table row
counts.
Usage
computeDataHash(connection, databaseSchema, tables = NULL, progressBar = TRUE)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
databaseSchema The name of the database schema. See details for platform-specific details.
tables (Optional) A list of tables to restrict to.
progressBar When true, a progress bar is shown based on the number of tables in the database
schema.
Details
The databaseSchema argument is interpreted differently according to the different platforms: SQL
Server and PDW: The databaseSchema schema should specify both the database and the schema,
e.g. ’my_database.dbo’. Impala: the databaseSchema should specify the database. Oracle: The
databaseSchema should specify the Oracle ’user’. All others: The databaseSchema should specify
the schema.
Value
A string representing the MD5 hash code.
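A hedged sketch of using the hash to detect data changes, assuming a connection to a SQLite file ("main" is SQLite's default schema; the file path and table are hypothetical):

```r
library(DatabaseConnector)

## Not run:
connection <- connect(dbms = "sqlite", server = "c:/temp/my_data.sqlite")

# Hash before and after a change; a different hash signals the data changed:
hashBefore <- computeDataHash(connection, databaseSchema = "main")
executeSql(connection, "INSERT INTO person (person_id) VALUES (123);")
hashAfter <- computeDataHash(connection, databaseSchema = "main")
identical(hashBefore, hashAfter)  # FALSE when the row count changed
disconnect(connection)
## End(Not run)
```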
connect connect
Description
Creates a connection to a database server. There are four ways to call this function:
• connect(dbms, user, password, server, port, extraSettings, oracleDriver, pathToDriver)
• connect(connectionDetails)
• connect(dbms, connectionString, pathToDriver)
• connect(dbms, connectionString, user, password, pathToDriver)
DBMS parameter details::
Depending on the DBMS, the function arguments have slightly different interpretations:
Oracle:
• user. The user name used to access the server
• password. The password for that user
• server. This field contains the SID, the host and SID, the host and service name, or the
TNSName: ’sid’, ’host/sid’, ’host/service name’, or ’tnsname’
• port. Specifies the port on the server (default = 1521)
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"(PROTOCOL=tcps)")
• oracleDriver. The driver to be used. Choose between "thin" or "oci".
• pathToDriver. The path to the folder containing the Oracle JDBC driver JAR files.
Microsoft SQL Server:
• user. The user used to log in to the server. If the user is not specified, Windows Integrated
Security will be used, which requires the SQL Server JDBC drivers to be installed (see details
below).
• password. The password used to log on to the server
• server. This field contains the host name of the server
• port. Not used for SQL Server
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"encrypt=true; trustServerCertificate=false;")
• pathToDriver. The path to the folder containing the SQL Server JDBC driver JAR files.
Microsoft PDW:
• user. The user used to log in to the server. If the user is not specified, Windows Integrated
Security will be used, which requires the SQL Server JDBC drivers to be installed (see details
below).
• password. The password used to log on to the server
• server. This field contains the host name of the server
• port. Not used for SQL Server
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"encrypt=true; trustServerCertificate=false;")
• pathToDriver. The path to the folder containing the SQL Server JDBC driver JAR files.
PostgreSQL:
• user. The user used to log in to the server
• password. The password used to log on to the server
• server. This field contains the host name of the server and the database holding the relevant
schemas: host/database
• port. Specifies the port on the server (default = 5432)
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"ssl=true")
• pathToDriver. The path to the folder containing the PostgreSQL JDBC driver JAR files.
Redshift:
• user. The user used to log in to the server
• password. The password used to log on to the server
• server. This field contains the host name of the server and the database holding the relevant
schemas: host/database
• port. Specifies the port on the server (default = 5439)
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as "ssl=true&sslfactory=com.a...")
• pathToDriver. The path to the folder containing the RedShift JDBC driver JAR files.
Netezza:
• user. The user used to log in to the server
• password. The password used to log on to the server
• server. This field contains the host name of the server and the database holding the relevant
schemas: host/database
• port. Specifies the port on the server (default = 5480)
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"ssl=true")
• pathToDriver. The path to the folder containing the Netezza JDBC driver JAR file (nzjdbc.jar).
Impala:
• user. The user name used to access the server
• password. The password for that user
• server. The host name of the server
• port. Specifies the port on the server (default = 21050)
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"SSLKeyStorePwd=*****")
• pathToDriver. The path to the folder containing the Impala JDBC driver JAR files.
SQLite:
• server. The path to the SQLite file.
Spark / Databricks:
Currently both JDBC and ODBC connections are supported for Spark. Set the connectionString
argument to use JDBC, otherwise ODBC is used:
• connectionString. The JDBC connection string (e.g. something like ’jdbc:databricks://my-
org.cloud.databricks.com:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/abcd
• user. The user name used to access the server. This can be set to ’token’ when using a
personal token (recommended).
• password. The password for that user. This should be your personal token when using a
personal token (recommended).
• server. The host name of the server (when using ODBC), e.g. ’my-org.cloud.databricks.com’)
• port. Specifies the port on the server (when using ODBC)
• extraSettings. Additional settings for the ODBC connection, for example extraSettings
= list(HTTPPath = "/sql/1.0/warehouses/abcde12345", SSL = 1, ThriftTransport =
2, AuthMech = 3)
Snowflake:
• connectionString. The connection string (e.g. starting with ’jdbc:snowflake://host:port/?db=database’).
• user. The user name used to access the server.
• password. The password for that user.
Windows authentication for SQL Server::
To be able to use Windows authentication for SQL Server (and PDW), you have to install the
JDBC driver. Download the version 9.2.0 .zip from Microsoft and extract its contents to a
folder. In the extracted folder you will find the file sqljdbc_9.2/enu/auth/x64/mssql-jdbc_auth-
9.2.0.x64.dll (64-bit) or sqljdbc_9.2/enu/auth/x86/mssql-jdbc_auth-9.2.0.x86.dll (32-bit), which
needs to be moved to a location on the system path, for example to c:/windows/system32. If you do
not have write access to any folder in the system path, you can also specify the path to the folder
containing the dll by setting the environment variable PATH_TO_AUTH_DLL, for example
Sys.setenv("PATH_TO_AUTH_DLL" = "c:/temp"). Note that the environment variable needs to
be set before calling connect() for the first time.
Arguments
connectionDetails
An object of class connectionDetails as created by the createConnectionDetails()
function.
dbms The type of DBMS running on the server. Valid values are
• "oracle" for Oracle
• "postgresql" for PostgreSQL
• "redshift" for Amazon Redshift
• "sql server" for Microsoft SQL Server
• "pdw" for Microsoft Parallel Data Warehouse (PDW)
• "netezza" for IBM Netezza
• "bigquery" for Google BigQuery
• "sqlite" for SQLite
• "sqlite extended" for SQLite with extended types (DATE and DATETIME)
• "spark" for Spark
• "snowflake" for Snowflake
user The user name used to access the server.
password The password for that user.
server The name of the server.
port (optional) The port on the server to connect to.
extraSettings (optional) Additional configuration settings specific to the database provider to
configure things such as SSL security. For connections using JDBC these will
be appended to the end of the connection string. For connections using DBI, these
settings will additionally be used to call dbConnect().
oracleDriver Specify which Oracle driver you want to use. Choose between "thin" or "oci".
connectionString
The JDBC connection string. If specified, the server, port, extraSettings,
and oracleDriver fields are ignored. If user and password are not specified,
they are assumed to already be included in the connection string.
pathToDriver Path to a folder containing the JDBC driver JAR files. See downloadJdbcDrivers()
for instructions on how to download the relevant drivers.
Details
This function creates a connection to a database.
Value
An object that extends DBIConnection in a database-specific manner. This object is used to direct
commands to the database engine.
Examples
## Not run:
connectionDetails <- createConnectionDetails(
dbms = "postgresql",
server = "localhost/postgres",
user = "root",
password = "xxx"
)
conn <- connect(connectionDetails)
dbGetQuery(conn, "SELECT COUNT(*) FROM person")
disconnect(conn)
conn <- connect(dbms = "sql server", server = "RNDUSRDHIT06.jnj.com")
dbGetQuery(conn, "SELECT COUNT(*) FROM concept")
disconnect(conn)
conn <- connect(
dbms = "oracle",
server = "127.0.0.1/xe",
user = "system",
password = "xxx",
pathToDriver = "c:/temp"
)
dbGetQuery(conn, "SELECT COUNT(*) FROM test_table")
disconnect(conn)
conn <- connect(
dbms = "postgresql",
connectionString = "jdbc:postgresql://127.0.0.1:5432/cmd_database"
)
dbGetQuery(conn, "SELECT COUNT(*) FROM person")
disconnect(conn)
## End(Not run)
createConnectionDetails
createConnectionDetails
Description
Creates a list containing all details needed to connect to a database. There are three ways to call this
function:
• createConnectionDetails(dbms, user, password, server, port, extraSettings, oracleDriver,
pathToDriver)
• createConnectionDetails(dbms, connectionString, pathToDriver)
• createConnectionDetails(dbms, connectionString, user, password, pathToDriver)
DBMS parameter details::
Depending on the DBMS, the function arguments have slightly different interpretations:
Oracle:
• user. The user name used to access the server
• password. The password for that user
• server. This field contains the SID, the host and SID, the host and service name, or the
TNSName: ’sid’, ’host/sid’, ’host/service name’, or ’tnsname’
• port. Specifies the port on the server (default = 1521)
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"(PROTOCOL=tcps)")
• oracleDriver. The driver to be used. Choose between "thin" or "oci".
• pathToDriver. The path to the folder containing the Oracle JDBC driver JAR files.
Microsoft SQL Server:
• user. The user used to log in to the server. If the user is not specified, Windows Integrated
Security will be used, which requires the SQL Server JDBC drivers to be installed (see details
below).
• password. The password used to log on to the server
• server. This field contains the host name of the server
• port. Not used for SQL Server
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"encrypt=true; trustServerCertificate=false;")
• pathToDriver. The path to the folder containing the SQL Server JDBC driver JAR files.
Microsoft PDW:
• user. The user used to log in to the server. If the user is not specified, Windows Integrated
Security will be used, which requires the SQL Server JDBC drivers to be installed (see details
below).
• password. The password used to log on to the server
• server. This field contains the host name of the server
• port. Not used for SQL Server
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"encrypt=true; trustServerCertificate=false;")
• pathToDriver. The path to the folder containing the SQL Server JDBC driver JAR files.
PostgreSQL:
• user. The user used to log in to the server
• password. The password used to log on to the server
• server. This field contains the host name of the server and the database holding the relevant
schemas: host/database
• port. Specifies the port on the server (default = 5432)
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"ssl=true")
• pathToDriver. The path to the folder containing the PostgreSQL JDBC driver JAR files.
Redshift:
• user. The user used to log in to the server
• password. The password used to log on to the server
• server. This field contains the host name of the server and the database holding the relevant
schemas: host/database
• port. Specifies the port on the server (default = 5439)
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as "ssl=true&sslfactory=com.a...")
• pathToDriver. The path to the folder containing the RedShift JDBC driver JAR files.
Netezza:
• user. The user used to log in to the server
• password. The password used to log on to the server
• server. This field contains the host name of the server and the database holding the relevant
schemas: host/database
• port. Specifies the port on the server (default = 5480)
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"ssl=true")
• pathToDriver. The path to the folder containing the Netezza JDBC driver JAR file (nzjdbc.jar).
Impala:
• user. The user name used to access the server
• password. The password for that user
• server. The host name of the server
• port. Specifies the port on the server (default = 21050)
• extraSettings. The configuration settings for the connection (i.e. SSL Settings such as
"SSLKeyStorePwd=*****")
• pathToDriver. The path to the folder containing the Impala JDBC driver JAR files.
SQLite:
• server. The path to the SQLite file.
Spark / Databricks:
Currently both JDBC and ODBC connections are supported for Spark. Set the connectionString
argument to use JDBC, otherwise ODBC is used:
• connectionString. The JDBC connection string (e.g. something like ’jdbc:databricks://my-
org.cloud.databricks.com:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/abcd
• user. The user name used to access the server. This can be set to ’token’ when using a
personal token (recommended).
• password. The password for that user. This should be your personal token when using a
personal token (recommended).
• server. The host name of the server (when using ODBC), e.g. ’my-org.cloud.databricks.com’)
• port. Specifies the port on the server (when using ODBC)
• extraSettings. Additional settings for the ODBC connection, for example extraSettings
= list(HTTPPath = "/sql/1.0/warehouses/abcde12345", SSL = 1, ThriftTransport =
2, AuthMech = 3)
Snowflake:
• connectionString. The connection string (e.g. starting with ’jdbc:snowflake://host:port/?db=database’).
• user. The user name used to access the server.
• password. The password for that user.
Windows authentication for SQL Server::
To be able to use Windows authentication for SQL Server (and PDW), you have to install the
JDBC driver. Download the version 9.2.0 .zip from Microsoft and extract its contents to a
folder. In the extracted folder you will find the file sqljdbc_9.2/enu/auth/x64/mssql-jdbc_auth-
9.2.0.x64.dll (64-bit) or sqljdbc_9.2/enu/auth/x86/mssql-jdbc_auth-9.2.0.x86.dll (32-bit), which
needs to be moved to a location on the system path, for example to c:/windows/system32. If you do
not have write access to any folder in the system path, you can also specify the path to the folder
containing the dll by setting the environment variable PATH_TO_AUTH_DLL, for example
Sys.setenv("PATH_TO_AUTH_DLL" = "c:/temp"). Note that the environment variable needs to
be set before calling connect() for the first time.
Arguments
dbms The type of DBMS running on the server. Valid values are
• "oracle" for Oracle
• "postgresql" for PostgreSQL
• "redshift" for Amazon Redshift
• "sql server" for Microsoft SQL Server
• "pdw" for Microsoft Parallel Data Warehouse (PDW)
• "netezza" for IBM Netezza
• "bigquery" for Google BigQuery
• "sqlite" for SQLite
• "sqlite extended" for SQLite with extended types (DATE and DATETIME)
• "spark" for Spark
• "snowflake" for Snowflake
user The user name used to access the server.
password The password for that user.
server The name of the server.
port (optional) The port on the server to connect to.
extraSettings (optional) Additional configuration settings specific to the database provider to
configure things such as SSL security. For connections using JDBC these will
be appended to the end of the connection string. For connections using DBI, these
settings will additionally be used to call dbConnect().
oracleDriver Specify which Oracle driver you want to use. Choose between "thin" or "oci".
connectionString
The JDBC connection string. If specified, the server, port, extraSettings,
and oracleDriver fields are ignored. If user and password are not specified,
they are assumed to already be included in the connection string.
pathToDriver Path to a folder containing the JDBC driver JAR files. See downloadJdbcDrivers()
for instructions on how to download the relevant drivers.
Details
This function creates a list containing all details needed to connect to a database. The list can then
be used in the connect() function.
It is highly recommended to use a secure approach to storing credentials, so as not to have your
credentials in plain text in your R scripts. The examples demonstrate how to use the keyring package.
Value
A list with all the details needed to connect to a database.
Examples
## Not run:
# Needs to be done only once on a machine. Credentials will then be stored in
# the operating system's secure credential manager:
keyring::key_set_with_value("server", password = "localhost/postgres")
keyring::key_set_with_value("user", password = "root")
keyring::key_set_with_value("password", password = "secret")
# Create connection details using keyring. Note: the connection details will
# not store the credentials themselves, but the reference to get the credentials.
connectionDetails <- createConnectionDetails(
dbms = "postgresql",
server = keyring::key_get("server"),
user = keyring::key_get("user"),
password = keyring::key_get("password")
)
conn <- connect(connectionDetails)
dbGetQuery(conn, "SELECT COUNT(*) FROM person")
disconnect(conn)
## End(Not run)
createDbiConnectionDetails
Create DBI connection details
Description
For advanced users only. This function will allow DatabaseConnector to wrap any DBI driver.
Using a driver that DatabaseConnector hasn’t been tested with may give unpredictable performance.
Use at your own risk. No support will be provided.
Usage
createDbiConnectionDetails(dbms, drv, ...)
Arguments
dbms The type of DBMS running on the server. Valid values are
• "oracle" for Oracle
• "postgresql" for PostgreSQL
• "redshift" for Amazon Redshift
• "sql server" for Microsoft SQL Server
• "pdw" for Microsoft Parallel Data Warehouse (PDW)
• "netezza" for IBM Netezza
• "bigquery" for Google BigQuery
• "sqlite" for SQLite
• "sqlite extended" for SQLite with extended types (DATE and DATETIME)
• "spark" for Spark
• "snowflake" for Snowflake
drv An object that inherits from DBIDriver, or an existing DBIConnection object (in
order to clone an existing connection).
... authentication arguments needed by the DBMS instance; these typically in-
clude user, password, host, port, dbname, etc. For details see the appropriate
DBIDriver
Value
A list with all the details needed to connect to a database.
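For illustration only, a sketch wrapping the RSQLite DBI driver (assumes the RSQLite package is installed; as noted above, untested drivers come without support):

```r
library(DatabaseConnector)

## Not run:
# Wrap an arbitrary DBI driver; extra arguments are passed to dbConnect():
connectionDetails <- createDbiConnectionDetails(
  dbms = "sqlite",
  drv = RSQLite::SQLite(),
  dbname = ":memory:"
)
conn <- connect(connectionDetails)
dbGetQuery(conn, "SELECT 1 AS one")
disconnect(conn)
## End(Not run)
```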
createZipFile Compress files and/or folders into a single zip file
Description
Compress files and/or folders into a single zip file
Usage
createZipFile(zipFile, files, rootFolder = getwd(), compressionLevel = 9)
Arguments
zipFile The path to the zip file to be created.
files The files and/or folders to be included in the zip file. Folders will be included
recursively.
rootFolder The root folder. All files will be stored with relative paths relative to this folder.
compressionLevel
A number between 1 and 9. 9 compresses best, but it also takes the longest.
Details
Uses Java’s compression library to create a zip file. It is similar to utils::zip, except that it does
not require an external zip tool to be available on the system path.
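A minimal usage sketch; the file paths are hypothetical and assumed to exist under the working directory:

```r
library(DatabaseConnector)

## Not run:
# Files are stored in the zip with paths relative to rootFolder:
createZipFile(
  zipFile = "results.zip",
  files = c("output/estimates.csv", "output/log.txt"),
  rootFolder = getwd(),
  compressionLevel = 9
)
## End(Not run)
```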
DatabaseConnectorDriver
Create a DatabaseConnectorDriver object
Description
Create a DatabaseConnectorDriver object
Usage
DatabaseConnectorDriver()
dateAdd Add an interval to a date
Description
This function is provided primarily to be used together with dbplyr when querying a database. It
will also work in dplyr against data frames.
Usage
dateAdd(interval, number, date)
Arguments
interval Unit for the interval. Can be "day", "week", "month", "year".
number The number of units to add to the date.
date The date to add to.
Value
A new date.
Examples
dateAdd("day", 10, as.Date("2000-01-01"))
dateDiff Compute difference between dates
Description
This function is provided primarily to be used together with dbplyr when querying a database. It
will also work in dplyr against data frames.
Usage
dateDiff(interval, date1, date2)
Arguments
interval Unit for the interval. Can be "day", "week", "month", "year".
date1 The first date.
date2 The second date.
Value
The numeric value of the difference.
Examples
dateDiff("day", as.Date("2000-01-01"), as.Date("2000-03-01"))
dateFromParts Construct a date from parts
Description
This function is provided primarily to be used together with dbplyr when querying a database. It
will also work in dplyr against data frames.
Usage
dateFromParts(year, month, day)
Arguments
year The calendar year.
month The calendar month (1 = January).
day The day of the month.
Value
The date.
Examples
dateFromParts(2000, 1, 5)
day Extract the day from a date
Description
This function is provided primarily to be used together with dbplyr when querying a database. It
will also work in dplyr against data frames.
Usage
day(date)
Arguments
date The date.
Value
The day
Examples
day(as.Date("2000-02-01"))
dbAppendTable,DatabaseConnectorConnection,character-method
Insert rows into a table
Description
The dbAppendTable() method assumes that the table has been created beforehand, e.g. with
dbCreateTable(). The default implementation calls sqlAppendTableTemplate() and then dbExecute()
with the param argument. Backends compliant to ANSI SQL 99 which use ? as a placeholder for
prepared queries don’t need to override it. Backends with a different SQL syntax which use ? as a
placeholder for prepared queries can override sqlAppendTable(). Other backends (with different
placeholders or with entirely different ways to create tables) need to override the dbAppendTable()
method.
Usage
## S4 method for signature 'DatabaseConnectorConnection,character'
dbAppendTable(
conn,
name,
value,
databaseSchema = NULL,
temporary = FALSE,
oracleTempSchema = NULL,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
...,
row.names = NULL
)
Arguments
conn A DBIConnection object, as returned by dbConnect().
name The table name, passed on to dbQuoteIdentifier(). Options are:
• a character string with the unquoted DBMS table name, e.g. "table_name",
• a call to Id() with components to the fully qualified table name, e.g. Id(schema
= "my_schema", table = "table_name")
• a call to SQL() with the quoted and fully qualified table name given verba-
tim, e.g. SQL('"my_schema"."table_name"')
value A data frame of values. The column names must be consistent with those in the
target table in the database.
databaseSchema The name of the database schema. See details for platform-specific details.
temporary Should the table be created as a temp table?
oracleTempSchema
DEPRECATED: use tempEmulationSchema instead.
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp ta-
bles. To emulate temp tables, provide a schema with write privileges where
temp tables can be created.
... Other parameters passed on to methods.
row.names Must be NULL.
Details
The databaseSchema argument is interpreted differently according to the different platforms: SQL
Server and PDW: The databaseSchema schema should specify both the database and the schema,
e.g. ’my_database.dbo’. Impala: the databaseSchema should specify the database. Oracle: The
databaseSchema should specify the Oracle ’user’. All others: The databaseSchema should specify
the schema.
Value
dbAppendTable() returns a scalar numeric.
See Also
Other DBIConnection generics: DBIConnection-class, dbCreateTable(), dbDataType(), dbDisconnect(),
dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(), dbIsReadOnly(),
dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(),
dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
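The create-then-append flow described above can be sketched as follows, assuming an open DatabaseConnector connection conn and a hypothetical schema and table:

```r
## Not run:
data <- data.frame(person_id = 1:3, year_of_birth = c(1980L, 1990L, 2000L))

# The table must exist before appending; create it from the data frame's
# structure, then insert the rows:
dbCreateTable(conn, "person_copy", data, databaseSchema = "my_schema")
rowsInserted <- dbAppendTable(conn, "person_copy", data,
                              databaseSchema = "my_schema")
## End(Not run)
```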
dbClearResult,DatabaseConnectorDbiResult-method
Clear a result set
Description
Frees all resources (local and remote) associated with a result set. This step is mandatory for all
objects obtained by calling dbSendQuery() or dbSendStatement().
Usage
## S4 method for signature 'DatabaseConnectorDbiResult'
dbClearResult(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbClearResult() returns TRUE, invisibly, for result sets obtained from both dbSendQuery() and
dbSendStatement().
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbColumnInfo(), dbFetch(), dbGetInfo(),
dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
dbClearResult,DatabaseConnectorJdbcResult-method
Clear a result set
Description
Frees all resources (local and remote) associated with a result set. This step is mandatory for all
objects obtained by calling dbSendQuery() or dbSendStatement().
Usage
## S4 method for signature 'DatabaseConnectorJdbcResult'
dbClearResult(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbClearResult() returns TRUE, invisibly, for result sets obtained from both dbSendQuery() and
dbSendStatement().
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbColumnInfo(), dbFetch(), dbGetInfo(),
dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
dbColumnInfo,DatabaseConnectorDbiResult-method
Information about result types
Description
Produces a data.frame that describes the output of a query. The data.frame should have as many
rows as there are output fields in the result set, and each column in the data.frame describes an
aspect of the result set field (field name, type, etc.)
Usage
## S4 method for signature 'DatabaseConnectorDbiResult'
dbColumnInfo(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbColumnInfo() returns a data frame with at least two columns "name" and "type" (in that order)
(and optional columns that start with a dot). The "name" and "type" columns contain the names
and types of the R columns of the data frame that is returned from dbFetch(). The "type" column
is of type character and only for information. Do not compute on the "type" column, instead use
dbFetch(res, n = 0) to create a zero-row data frame initialized with the correct data types.
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbFetch(), dbGetInfo(),
dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
dbColumnInfo,DatabaseConnectorJdbcResult-method
Information about result types
Description
Produces a data.frame that describes the output of a query. The data.frame should have as many
rows as there are output fields in the result set, and each column in the data.frame describes an
aspect of the result set field (field name, type, etc.)
Usage
## S4 method for signature 'DatabaseConnectorJdbcResult'
dbColumnInfo(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbColumnInfo() returns a data frame with at least two columns "name" and "type" (in that order)
(and optional columns that start with a dot). The "name" and "type" columns contain the names
and types of the R columns of the data frame that is returned from dbFetch(). The "type" column
is of type character and only for information. Do not compute on the "type" column, instead use
dbFetch(res, n = 0) to create a zero-row data frame initialized with the correct data types.
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbFetch(), dbGetInfo(),
dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
dbConnect,DatabaseConnectorDriver-method
Create a connection to a DBMS
Description
Connect to a database. This function is synonymous with the connect() function, except that a
dummy driver needs to be specified.
Usage
## S4 method for signature 'DatabaseConnectorDriver'
dbConnect(drv, ...)
Arguments
drv The result of the DatabaseConnectorDriver() function
... Other parameters. These are the same as expected by the connect() function.
Value
Returns a DatabaseConnectorConnection object that can be used with most of the other functions
in this package.
Examples
## Not run:
conn <- dbConnect(DatabaseConnectorDriver(),
dbms = "postgresql",
server = "localhost/ohdsi",
user = "joe",
password = "secret"
)
querySql(conn, "SELECT * FROM cdm_synpuf.person;")
dbDisconnect(conn)
## End(Not run)
dbCreateTable,DatabaseConnectorConnection-method
Create a table in the database
Description
The default dbCreateTable() method calls sqlCreateTable() and dbExecute(). Backends
compliant to ANSI SQL 99 don’t need to override it. Backends with a different SQL syntax can
override sqlCreateTable(), backends with entirely different ways to create tables need to override
this method.
Usage
## S4 method for signature 'DatabaseConnectorConnection'
dbCreateTable(
conn,
name,
fields,
databaseSchema = NULL,
oracleTempSchema = NULL,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
...,
row.names = NULL,
temporary = FALSE
)
Arguments
conn A DBIConnection object, as returned by dbConnect().
name The table name, passed on to dbQuoteIdentifier(). Options are:
• a character string with the unquoted DBMS table name, e.g. "table_name",
• a call to Id() with components to the fully qualified table name, e.g. Id(schema
= "my_schema", table = "table_name")
• a call to SQL() with the quoted and fully qualified table name given verba-
tim, e.g. SQL('"my_schema"."table_name"')
fields Either a character vector or a data frame.
A named character vector: Names are column names, values are types. Names
are escaped with dbQuoteIdentifier(). Field types are unescaped.
A data frame: field types are generated using dbDataType().
databaseSchema The name of the database schema. See details for platform-specific details.
oracleTempSchema
DEPRECATED: use tempEmulationSchema instead.
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp ta-
bles. To emulate temp tables, provide a schema with write privileges where
temp tables can be created.
... Other parameters passed on to methods.
row.names Must be NULL.
temporary Should the table be created as a temp table?
Details
The databaseSchema argument is interpreted differently according to the different platforms: SQL
Server and PDW: The databaseSchema schema should specify both the database and the schema,
e.g. ’my_database.dbo’. Impala: the databaseSchema should specify the database. Oracle: The
databaseSchema should specify the Oracle ’user’. All others: The databaseSchema should specify
the schema.
Value
dbCreateTable() returns TRUE, invisibly.
See Also
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbDataType(), dbDisconnect(),
dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(), dbIsReadOnly(),
dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(),
dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
dbDisconnect,DatabaseConnectorConnection-method
Disconnect (close) a connection
Description
This closes the connection, discards all pending work, and frees resources (e.g., memory, sockets).
Usage
## S4 method for signature 'DatabaseConnectorConnection'
dbDisconnect(conn)
Arguments
conn A DBIConnection object, as returned by dbConnect().
Value
dbDisconnect() returns TRUE, invisibly.
See Also
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(),
dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(),
dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
dbExecute,DatabaseConnectorConnection,character-method
Execute an update statement, query number of rows affected, and then
close result set
Description
Executes a statement and returns the number of rows affected. dbExecute() comes with a default implementation (which should work with most backends) that calls dbSendStatement(), then dbGetRowsAffected(), ensuring that the result is always freed by dbClearResult(). For passing query parameters, see dbBind(), in particular the "The command execution flow" section.
Usage
## S4 method for signature 'DatabaseConnectorConnection,character'
dbExecute(conn, statement, translate = TRUE, ...)
Arguments
conn A DBIConnection object, as returned by dbConnect().
statement a character string containing SQL.
translate Translate the query using SqlRender?
... Other parameters passed on to methods.
Details
You can also use dbExecute() to call a stored procedure that performs data manipulation or other
actions that do not return a result set. To execute a stored procedure that returns a result set, or a
data manipulation query that also returns a result set such as INSERT INTO ... RETURNING ...,
use dbGetQuery() instead.
Value
dbExecute() always returns a scalar numeric that specifies the number of rows affected by the
statement.
See Also
For queries: dbSendQuery() and dbGetQuery().
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(),
dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(),
dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
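Examples

A sketch of the command execution flow on an in-memory SQLite connection (assumes the DatabaseConnector package with SQLite support, as in the dbms() example):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "cars", cars)

# dbExecute() returns the number of rows affected by the statement
nDeleted <- dbExecute(con, "DELETE FROM cars WHERE dist > 50;")

dbDisconnect(con)
```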
dbExistsTable,DatabaseConnectorConnection,character-method
Does a table exist?
Description
Returns whether a table given by name exists in the database.
Usage
## S4 method for signature 'DatabaseConnectorConnection,character'
dbExistsTable(conn, name, databaseSchema = NULL, ...)
Arguments
conn A DBIConnection object, as returned by dbConnect().
name The table name, passed on to dbQuoteIdentifier(). Options are:
• a character string with the unquoted DBMS table name, e.g. "table_name",
• a call to Id() with components to the fully qualified table name, e.g. Id(schema = "my_schema", table = "table_name")
• a call to SQL() with the quoted and fully qualified table name given verbatim, e.g. SQL('"my_schema"."table_name"')
databaseSchema The name of the database schema. See details for platform-specific details.
... Other parameters passed on to methods.
Details
The databaseSchema argument is interpreted differently according to the platform. SQL Server and PDW: databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: databaseSchema should specify the database. Oracle: databaseSchema should specify the Oracle 'user'. All others: databaseSchema should specify the schema.
Value
dbExistsTable() returns a logical scalar, TRUE if the table or view specified by the name argument
exists, FALSE otherwise.
This includes temporary tables if supported by the database.
See Also
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbGetException(), dbGetInfo(), dbGetQuery(),
dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(),
dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
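Examples

A sketch on an in-memory SQLite connection (assumes the DatabaseConnector package with SQLite support, as in the dbms() example):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")

dbExistsTable(con, "cars")   # FALSE: table not created yet
dbWriteTable(con, "cars", cars)
dbExistsTable(con, "cars")   # TRUE

dbDisconnect(con)
```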
dbFetch,DatabaseConnectorDbiResult-method
Fetch records from a previously executed query
Description
Fetch the next n elements (rows) from the result set and return them as a data.frame.
Usage
## S4 method for signature 'DatabaseConnectorDbiResult'
dbFetch(res, n = -1, ...)
Arguments
res An object inheriting from DBIResult, created by dbSendQuery().
n maximum number of records to retrieve per fetch. Use n = -1 or n = Inf to
retrieve all pending records. Some implementations may recognize other special
values.
... Other arguments passed on to methods.
Details
fetch() is provided for compatibility with older DBI clients - for all new code you are strongly
encouraged to use dbFetch(). The default implementation for dbFetch() calls fetch() so that it
is compatible with existing code. Modern backends should implement for dbFetch() only.
Value
dbFetch() always returns a data.frame with as many rows as records were fetched and as many
columns as fields in the result set, even if the result is a single value or has one or zero rows.
See Also
Close the result set with dbClearResult() as soon as you finish retrieving the records you want.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(),
dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(),
dbUnquoteIdentifier()
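Examples

A sketch of the chunked data retrieval flow (assumes the DatabaseConnector package with SQLite support, as in the dbms() example):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "cars", cars)

res <- dbSendQuery(con, "SELECT * FROM cars;")
while (!dbHasCompleted(res)) {
  chunk <- dbFetch(res, n = 10)  # fetch at most 10 rows at a time
  message(nrow(chunk), " rows fetched")
}
dbClearResult(res)  # always clear the result set when done
dbDisconnect(con)
```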
dbFetch,DatabaseConnectorJdbcResult-method
Fetch records from a previously executed query
Description
Fetch the next n elements (rows) from the result set and return them as a data.frame.
Usage
## S4 method for signature 'DatabaseConnectorJdbcResult'
dbFetch(res, n = -1, ...)
Arguments
res An object inheriting from DBIResult, created by dbSendQuery().
n maximum number of records to retrieve per fetch. Use n = -1 or n = Inf to
retrieve all pending records. Some implementations may recognize other special
values.
... Other arguments passed on to methods.
Details
fetch() is provided for compatibility with older DBI clients - for all new code you are strongly
encouraged to use dbFetch(). The default implementation for dbFetch() calls fetch() so that it
is compatible with existing code. Modern backends should implement for dbFetch() only.
Value
dbFetch() always returns a data.frame with as many rows as records were fetched and as many
columns as fields in the result set, even if the result is a single value or has one or zero rows.
See Also
Close the result set with dbClearResult() as soon as you finish retrieving the records you want.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(),
dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(),
dbUnquoteIdentifier()
dbGetInfo,DatabaseConnectorConnection-method
Get DBMS metadata
Description
Retrieves information on objects of class DBIDriver, DBIConnection or DBIResult.
Usage
## S4 method for signature 'DatabaseConnectorConnection'
dbGetInfo(dbObj, ...)
Arguments
dbObj An object inheriting from DBIObject, i.e. DBIDriver, DBIConnection, or a
DBIResult
... Other arguments to methods.
Value
For objects of class DBIDriver, dbGetInfo() returns a named list that contains at least the following
components:
• driver.version: the package version of the DBI backend,
• client.version: the version of the DBMS client library.
For objects of class DBIConnection, dbGetInfo() returns a named list that contains at least the
following components:
• db.version: version of the database server,
• dbname: database name,
• username: username to connect to the database,
• host: hostname of the database server,
• port: port on the database server.
The returned list must not contain a password component. Components that are not applicable should be set to NA.
For objects of class DBIResult, dbGetInfo() returns a named list that contains at least the following
components:
• statement: the statement used with dbSendQuery() or dbExecute(), as returned by dbGetStatement(),
• row.count: the number of rows fetched so far (for queries), as returned by dbGetRowCount(),
• rows.affected: the number of rows affected (for statements), as returned by dbGetRowsAffected()
• has.completed: a logical that indicates if the query or statement has completed, as returned
by dbHasCompleted().
See Also
Other DBIDriver generics: DBIDriver-class, dbCanConnect(), dbConnect(), dbDataType(),
dbDriver(), dbIsReadOnly(), dbIsValid(), dbListConnections()
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetQuery(),
dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(),
dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(),
dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(),
dbUnquoteIdentifier()
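Examples

A sketch inspecting connection metadata (assumes the DatabaseConnector package with SQLite support, as in the dbms() example; the exact components reported depend on the backend):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")

info <- dbGetInfo(con)
names(info)  # which metadata components this backend reports

dbDisconnect(con)
```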
dbGetInfo,DatabaseConnectorDriver-method
Get DBMS metadata
Description
Retrieves information on objects of class DBIDriver, DBIConnection or DBIResult.
Usage
## S4 method for signature 'DatabaseConnectorDriver'
dbGetInfo(dbObj, ...)
Arguments
dbObj An object inheriting from DBIObject, i.e. DBIDriver, DBIConnection, or a
DBIResult
... Other arguments to methods.
Value
For objects of class DBIDriver, dbGetInfo() returns a named list that contains at least the following
components:
• driver.version: the package version of the DBI backend,
• client.version: the version of the DBMS client library.
For objects of class DBIConnection, dbGetInfo() returns a named list that contains at least the
following components:
• db.version: version of the database server,
• dbname: database name,
• username: username to connect to the database,
• host: hostname of the database server,
• port: port on the database server.
The returned list must not contain a password component. Components that are not applicable should be set to NA.
For objects of class DBIResult, dbGetInfo() returns a named list that contains at least the following
components:
• statement: the statement used with dbSendQuery() or dbExecute(), as returned by dbGetStatement(),
• row.count: the number of rows fetched so far (for queries), as returned by dbGetRowCount(),
• rows.affected: the number of rows affected (for statements), as returned by dbGetRowsAffected()
• has.completed: a logical that indicates if the query or statement has completed, as returned
by dbHasCompleted().
See Also
Other DBIDriver generics: DBIDriver-class, dbCanConnect(), dbConnect(), dbDataType(),
dbDriver(), dbIsReadOnly(), dbIsValid(), dbListConnections()
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetQuery(),
dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(),
dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(),
dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(),
dbUnquoteIdentifier()
dbGetQuery,DatabaseConnectorConnection,character-method
Send query, retrieve results and then clear result set
Description
Returns the result of a query as a data frame. dbGetQuery() comes with a default implementation (which should work with most backends) that calls dbSendQuery(), then dbFetch(), ensuring that the result is always freed by dbClearResult(). For retrieving chunked/paged results or for passing query parameters, see dbSendQuery(), in particular the "The data retrieval flow" section.
Usage
## S4 method for signature 'DatabaseConnectorConnection,character'
dbGetQuery(conn, statement, translate = TRUE, ...)
Arguments
conn A DBIConnection object, as returned by dbConnect().
statement a character string containing SQL.
translate Translate the query using SqlRender?
... Other parameters passed on to methods.
Details
This method is for SELECT queries only (incl. other SQL statements that return a SELECT-alike result, e.g. execution of a stored procedure or data manipulation queries like INSERT INTO ... RETURNING ...). To execute a stored procedure that does not return a result set, use dbExecute().
Some backends may support data manipulation statements through this method for compatibility reasons. However, callers are strongly advised to use dbExecute() for data manipulation statements.
Value
dbGetQuery() always returns a data.frame with as many rows as records were fetched and as many
columns as fields in the result set, even if the result is a single value or has one or zero rows.
See Also
For updates: dbSendStatement() and dbExecute().
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(),
dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
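Examples

A sketch of the one-call retrieval flow (assumes the DatabaseConnector package with SQLite support, as in the dbms() example):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "cars", cars)

# Send, fetch, and clear the result in a single call; returns a data frame
df <- dbGetQuery(con, "SELECT speed, COUNT(*) AS n FROM cars GROUP BY speed;")
head(df)

dbDisconnect(con)
```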
dbGetRowCount,DatabaseConnectorDbiResult-method
The number of rows fetched so far
Description
Returns the total number of rows actually fetched with calls to dbFetch() for this result set.
Usage
## S4 method for signature 'DatabaseConnectorDbiResult'
dbGetRowCount(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbGetRowCount() returns a scalar number (integer or numeric), the number of rows fetched so far.
After calling dbSendQuery(), the row count is initially zero. After a call to dbFetch() without
limit, the row count matches the total number of rows returned. Fetching a limited number of rows
increases the number of rows by the number of rows returned, even if fetching past the end of
the result set. For queries with an empty result set, zero is returned even after fetching. For data
manipulation statements issued with dbSendStatement(), zero is returned before and after calling
dbFetch().
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetInfo(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
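Examples

A sketch showing how the row count grows with successive fetches (assumes the DatabaseConnector package with SQLite support, as in the dbms() example):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "cars", cars)

res <- dbSendQuery(con, "SELECT * FROM cars;")
dbGetRowCount(res)               # 0 before any fetch
invisible(dbFetch(res, n = 10))
dbGetRowCount(res)               # 10 after a limited fetch
invisible(dbFetch(res))
dbGetRowCount(res)               # total row count after fetching the rest
dbClearResult(res)
dbDisconnect(con)
```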
dbGetRowCount,DatabaseConnectorJdbcResult-method
The number of rows fetched so far
Description
Returns the total number of rows actually fetched with calls to dbFetch() for this result set.
Usage
## S4 method for signature 'DatabaseConnectorJdbcResult'
dbGetRowCount(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbGetRowCount() returns a scalar number (integer or numeric), the number of rows fetched so far.
After calling dbSendQuery(), the row count is initially zero. After a call to dbFetch() without
limit, the row count matches the total number of rows returned. Fetching a limited number of rows
increases the number of rows by the number of rows returned, even if fetching past the end of
the result set. For queries with an empty result set, zero is returned even after fetching. For data
manipulation statements issued with dbSendStatement(), zero is returned before and after calling
dbFetch().
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetInfo(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
dbGetRowsAffected,DatabaseConnectorDbiResult-method
The number of rows affected
Description
This method returns the number of rows that were added, deleted, or updated by a data manipulation
statement.
Usage
## S4 method for signature 'DatabaseConnectorDbiResult'
dbGetRowsAffected(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbGetRowsAffected() returns a scalar number (integer or numeric), the number of rows affected
by a data manipulation statement issued with dbSendStatement(). The value is available directly
after the call and does not change after calling dbFetch(). For queries issued with dbSendQuery(),
zero is returned before and after the call to dbFetch().
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
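Examples

A sketch of the statement execution flow (assumes the DatabaseConnector package with SQLite support, as in the dbms() example):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "cars", cars)

res <- dbSendStatement(con, "DELETE FROM cars WHERE dist > 50;")
dbGetRowsAffected(res)  # number of rows deleted
dbClearResult(res)
dbDisconnect(con)
```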
dbGetRowsAffected,DatabaseConnectorJdbcResult-method
The number of rows affected
Description
This method returns the number of rows that were added, deleted, or updated by a data manipulation
statement.
Usage
## S4 method for signature 'DatabaseConnectorJdbcResult'
dbGetRowsAffected(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbGetRowsAffected() returns a scalar number (integer or numeric), the number of rows affected
by a data manipulation statement issued with dbSendStatement(). The value is available directly
after the call and does not change after calling dbFetch(). For queries issued with dbSendQuery(),
zero is returned before and after the call to dbFetch().
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
dbGetStatement,DatabaseConnectorDbiResult-method
Get the statement associated with a result set
Description
Returns the statement that was passed to dbSendQuery() or dbSendStatement().
Usage
## S4 method for signature 'DatabaseConnectorDbiResult'
dbGetStatement(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbGetStatement() returns a string, the query used in either dbSendQuery() or dbSendStatement().
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbHasCompleted(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
dbGetStatement,DatabaseConnectorJdbcResult-method
Get the statement associated with a result set
Description
Returns the statement that was passed to dbSendQuery() or dbSendStatement().
Usage
## S4 method for signature 'DatabaseConnectorJdbcResult'
dbGetStatement(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbGetStatement() returns a string, the query used in either dbSendQuery() or dbSendStatement().
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbHasCompleted(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
dbHasCompleted,DatabaseConnectorDbiResult-method
Completion status
Description
This method returns whether the operation has completed. A SELECT query is completed once all rows have been fetched. A data manipulation statement is always completed.
Usage
## S4 method for signature 'DatabaseConnectorDbiResult'
dbHasCompleted(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbHasCompleted() returns a logical scalar. For a query initiated by dbSendQuery() with non-empty result set, dbHasCompleted() returns FALSE initially and TRUE after calling dbFetch() without limit. For a query initiated by dbSendStatement(), dbHasCompleted() always returns TRUE.
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
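Examples

A sketch of the completion status before and after fetching (assumes the DatabaseConnector package with SQLite support, as in the dbms() example):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "cars", cars)

res <- dbSendQuery(con, "SELECT * FROM cars;")
dbHasCompleted(res)   # FALSE: rows remain to be fetched
invisible(dbFetch(res))
dbHasCompleted(res)   # TRUE after fetching without limit
dbClearResult(res)
dbDisconnect(con)
```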
dbHasCompleted,DatabaseConnectorJdbcResult-method
Completion status
Description
This method returns whether the operation has completed. A SELECT query is completed once all rows have been fetched. A data manipulation statement is always completed.
Usage
## S4 method for signature 'DatabaseConnectorJdbcResult'
dbHasCompleted(res, ...)
Arguments
res An object inheriting from DBIResult.
... Other arguments passed on to methods.
Value
dbHasCompleted() returns a logical scalar. For a query initiated by dbSendQuery() with non-empty result set, dbHasCompleted() returns FALSE initially and TRUE after calling dbFetch() without limit. For a query initiated by dbSendStatement(), dbHasCompleted() always returns TRUE.
See Also
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbIsReadOnly(),
dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
dbIsValid,DatabaseConnectorDbiConnection-method
Is this DBMS object still valid?
Description
This generic tests whether a database object is still valid (i.e. it hasn’t been disconnected or cleared).
Usage
## S4 method for signature 'DatabaseConnectorDbiConnection'
dbIsValid(dbObj, ...)
Arguments
dbObj An object inheriting from DBIObject, i.e. DBIDriver, DBIConnection, or a
DBIResult
... Other arguments to methods.
Value
dbIsValid() returns a logical scalar, TRUE if the object specified by dbObj is valid, FALSE otherwise. A DBIConnection object is initially valid, and becomes invalid after disconnecting with dbDisconnect(). For an invalid connection object (e.g., for some drivers if the object is saved to a file and then restored), the method also returns FALSE. A DBIResult object is valid after a call to dbSendQuery(), and stays valid even after all rows have been fetched; only clearing it with dbClearResult() invalidates it. A DBIResult object is also valid after a call to dbSendStatement(), and stays valid after querying the number of rows affected; only clearing it with dbClearResult() invalidates it. If the connection to the database system is dropped (e.g., due to connectivity problems, server failure, etc.), dbIsValid() should return FALSE. This is not tested automatically.
See Also
Other DBIDriver generics: DBIDriver-class, dbCanConnect(), dbConnect(), dbDataType(),
dbDriver(), dbGetInfo(), dbIsReadOnly(), dbListConnections()
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbGetQuery(), dbIsReadOnly(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(),
dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(),
dbIsReadOnly(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
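Examples

A sketch of connection validity before and after disconnecting (assumes the DatabaseConnector package with SQLite support, as in the dbms() example):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbIsValid(con)   # TRUE while connected
dbDisconnect(con)
dbIsValid(con)   # FALSE once disconnected
```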
dbIsValid,DatabaseConnectorJdbcConnection-method
Is this DBMS object still valid?
Description
This generic tests whether a database object is still valid (i.e. it hasn’t been disconnected or cleared).
Usage
## S4 method for signature 'DatabaseConnectorJdbcConnection'
dbIsValid(dbObj, ...)
Arguments
dbObj An object inheriting from DBIObject, i.e. DBIDriver, DBIConnection, or a
DBIResult
... Other arguments to methods.
Value
dbIsValid() returns a logical scalar, TRUE if the object specified by dbObj is valid, FALSE otherwise. A DBIConnection object is initially valid, and becomes invalid after disconnecting with dbDisconnect(). For an invalid connection object (e.g., for some drivers if the object is saved to a file and then restored), the method also returns FALSE. A DBIResult object is valid after a call to dbSendQuery(), and stays valid even after all rows have been fetched; only clearing it with dbClearResult() invalidates it. A DBIResult object is also valid after a call to dbSendStatement(), and stays valid after querying the number of rows affected; only clearing it with dbClearResult() invalidates it. If the connection to the database system is dropped (e.g., due to connectivity problems, server failure, etc.), dbIsValid() should return FALSE. This is not tested automatically.
See Also
Other DBIDriver generics: DBIDriver-class, dbCanConnect(), dbConnect(), dbDataType(),
dbDriver(), dbGetInfo(), dbIsReadOnly(), dbListConnections()
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbGetQuery(), dbIsReadOnly(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(),
dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(),
dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(),
dbIsReadOnly(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
dbListFields,DatabaseConnectorConnection,character-method
List field names of a remote table
Description
Returns the field names of a remote table as a character vector.
Usage
## S4 method for signature 'DatabaseConnectorConnection,character'
dbListFields(conn, name, databaseSchema = NULL, ...)
Arguments
conn A DBIConnection object, as returned by dbConnect().
name The table name, passed on to dbQuoteIdentifier(). Options are:
• a character string with the unquoted DBMS table name, e.g. "table_name",
• a call to Id() with components to the fully qualified table name, e.g. Id(schema = "my_schema", table = "table_name")
• a call to SQL() with the quoted and fully qualified table name given verbatim, e.g. SQL('"my_schema"."table_name"')
databaseSchema The name of the database schema. See details for platform-specific details.
... Other parameters passed on to methods.
Details
The databaseSchema argument is interpreted differently according to the platform. SQL Server and PDW: databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: databaseSchema should specify the database. Oracle: databaseSchema should specify the Oracle 'user'. All others: databaseSchema should specify the schema.
Value
dbListFields() returns a character vector that enumerates all fields in the table in the correct
order. This also works for temporary tables if supported by the database. The returned names are
suitable for quoting with dbQuoteIdentifier().
See Also
dbColumnInfo() to get the type of the fields.
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListObjects(), dbListResults(), dbListTables(),
dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
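Examples

A sketch listing the fields of a freshly written table (assumes the DatabaseConnector package with SQLite support, as in the dbms() example):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "cars", cars)

dbListFields(con, "cars")  # column names in table order, e.g. "speed", "dist"

dbDisconnect(con)
```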
dbListTables,DatabaseConnectorConnection-method
List remote tables
Description
Returns the unquoted names of remote tables accessible through this connection. This should include views and temporary objects, but not all database backends (in particular RMariaDB and RMySQL) support this.
Usage
## S4 method for signature 'DatabaseConnectorConnection'
dbListTables(conn, databaseSchema = NULL, ...)
Arguments
conn A DBIConnection object, as returned by dbConnect().
databaseSchema The name of the database schema. See details for platform-specific details.
... Not used
Details
The databaseSchema argument is interpreted differently according to the platform. SQL Server and PDW: databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: databaseSchema should specify the database. Oracle: databaseSchema should specify the Oracle 'user'. All others: databaseSchema should specify the schema.
Value
dbListTables() returns a character vector that enumerates all tables and views in the database. Tables added with dbWriteTable() are part of the list. As soon as a table is removed from the database, it is also removed from the list of database tables.
The same applies to temporary tables if supported by the database.
The returned names are suitable for quoting with dbQuoteIdentifier().
See Also
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(),
dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
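Examples

A sketch listing tables after a write (assumes the DatabaseConnector package with SQLite support, as in the dbms() example):

```r
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "cars", cars)

dbListTables(con)  # includes "cars"

dbDisconnect(con)
```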
dbms Get the database platform from a connection
Description
The SqlRender package provides functions that translate SQL from OHDSI-SQL to a target SQL dialect. These functions need the name of the database platform to translate to. The dbms function returns the dbms for any DBI connection, which can be passed along to SqlRender translation functions (see example).
Usage
dbms(connection)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
Value
The name of the database (dbms) used by SqlRender
Examples
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbms(con)
#> [1] "sqlite"
SqlRender::translate("DATEADD(d, 365, dateColumn)", targetDialect = dbms(con))
#> "CAST(STRFTIME('%s', DATETIME(dateColumn, 'unixepoch', (365)||' days')) AS REAL)"
disconnect(con)
dbReadTable,DatabaseConnectorConnection,character-method
Copy data frames from database tables
Description
Reads a database table to a data frame, optionally converting a column to row names and converting
the column names to valid R identifiers.
Usage
## S4 method for signature 'DatabaseConnectorConnection,character'
dbReadTable(
conn,
name,
databaseSchema = NULL,
oracleTempSchema = NULL,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
...
)
Arguments
conn A DBIConnection object, as returned by dbConnect().
name The table name, passed on to dbQuoteIdentifier(). Options are:
• a character string with the unquoted DBMS table name, e.g. "table_name",
• a call to Id() with components to the fully qualified table name, e.g. Id(schema = "my_schema", table = "table_name")
• a call to SQL() with the quoted and fully qualified table name given verbatim, e.g. SQL('"my_schema"."table_name"')
databaseSchema The name of the database schema. See details for platform-specific details.
oracleTempSchema
DEPRECATED: use tempEmulationSchema instead.
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created.
... Other parameters passed on to methods.
Details
The databaseSchema argument is interpreted differently according to the different platforms: SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should specify the schema.
Value
dbReadTable() returns a data frame that contains the complete data from the remote table, effectively the result of calling dbGetQuery() with SELECT * FROM <name>.
An empty table is returned as a data frame with zero rows.
The presence of rownames depends on the row.names argument, see sqlColumnToRownames() for
details:
• If FALSE or NULL, the returned data frame doesn’t have row names.
• If TRUE, a column named "row_names" is converted to row names.
• If NA, a column named "row_names" is converted to row names if it exists, otherwise no
translation occurs.
• If a string, this specifies the name of the column in the remote table that contains the row
names.
The default is row.names = FALSE.
If the database supports identifiers with special characters, the columns in the returned data frame are converted to valid R identifiers if the check.names argument is TRUE. If check.names = FALSE, the returned table has non-syntactic column names without quotes.
See Also
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(),
dbListTables(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
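Examples
A minimal round-trip sketch using the in-memory SQLite backend (assumes the DatabaseConnector and RSQLite packages are installed):

```r
library(DatabaseConnector)

con <- connect(dbms = "sqlite", server = ":memory:")

# Write three rows of the built-in 'cars' data set, then read the table back:
dbWriteTable(con, "cars", head(cars, 3))
result <- dbReadTable(con, "cars")
print(nrow(result))  # 3

disconnect(con)
```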
dbRemoveTable,DatabaseConnectorConnection,ANY-method
Remove a table from the database
Description
Remove a remote table (e.g., created by dbWriteTable()) from the database.
Usage
## S4 method for signature 'DatabaseConnectorConnection,ANY'
dbRemoveTable(
conn,
name,
databaseSchema = NULL,
oracleTempSchema = NULL,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
...
)
Arguments
conn A DBIConnection object, as returned by dbConnect().
name The table name, passed on to dbQuoteIdentifier(). Options are:
• a character string with the unquoted DBMS table name, e.g. "table_name",
• a call to Id() with components to the fully qualified table name, e.g. Id(schema
= "my_schema", table = "table_name")
• a call to SQL() with the quoted and fully qualified table name given verbatim, e.g. SQL('"my_schema"."table_name"')
databaseSchema The name of the database schema. See details for platform-specific details.
oracleTempSchema
DEPRECATED: use tempEmulationSchema instead.
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created.
... Other parameters passed on to methods.
Details
The databaseSchema argument is interpreted differently according to the different platforms: SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should specify the schema.
Value
dbRemoveTable() returns TRUE, invisibly.
See Also
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(),
dbListTables(), dbReadTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
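Examples
A minimal sketch using the in-memory SQLite backend (assumes the DatabaseConnector and RSQLite packages are installed):

```r
library(DatabaseConnector)

con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "scratch", data.frame(x = 1:3))

before <- dbExistsTable(con, "scratch")  # TRUE
dbRemoveTable(con, "scratch")
after <- dbExistsTable(con, "scratch")   # FALSE

disconnect(con)
```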
dbSendQuery,DatabaseConnectorDbiConnection,character-method
Execute a query on a given database connection
Description
The dbSendQuery() method only submits and synchronously executes the SQL query to the database
engine. It does not extract any records — for that you need to use the dbFetch() method, and then
you must call dbClearResult() when you finish fetching the records you need. For interactive
use, you should almost always prefer dbGetQuery().
Usage
## S4 method for signature 'DatabaseConnectorDbiConnection,character'
dbSendQuery(conn, statement, translate = TRUE, ...)
Arguments
conn A DBIConnection object, as returned by dbConnect().
statement a character string containing SQL.
translate Translate the query using SqlRender?
... Other parameters passed on to methods.
Details
This method is for SELECT queries only. Some backends may support data manipulation queries
through this method for compatibility reasons. However, callers are strongly encouraged to use
dbSendStatement() for data manipulation statements.
The query is submitted to the database server and the DBMS executes it, possibly generating vast
amounts of data. Where these data live is driver-specific: some drivers may choose to leave the
output on the server and transfer them piecemeal to R, others may transfer all the data to the client
– but not necessarily to the memory that R manages. See individual drivers' dbSendQuery() documentation for details.
Value
dbSendQuery() returns an S4 object that inherits from DBIResult. The result set can be used with
dbFetch() to extract records. Once you have finished using a result, make sure to clear it with
dbClearResult().
See Also
For updates: dbSendStatement() and dbExecute().
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(),
dbListTables(), dbReadTable(), dbRemoveTable(), dbSendStatement(), dbWriteTable()
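Examples
A sketch of the submit/fetch/clear cycle, using the in-memory SQLite backend (assumes the DatabaseConnector and RSQLite packages are installed):

```r
library(DatabaseConnector)

con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "person", data.frame(person_id = 1:5))

# Submit the query, fetch the records, then release the result:
res <- dbSendQuery(con, "SELECT person_id FROM person")
rows <- dbFetch(res)
dbClearResult(res)
print(nrow(rows))  # 5

disconnect(con)
```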
dbSendQuery,DatabaseConnectorJdbcConnection,character-method
Execute a query on a given database connection
Description
The dbSendQuery() method only submits and synchronously executes the SQL query to the database
engine. It does not extract any records — for that you need to use the dbFetch() method, and then
you must call dbClearResult() when you finish fetching the records you need. For interactive
use, you should almost always prefer dbGetQuery().
Usage
## S4 method for signature 'DatabaseConnectorJdbcConnection,character'
dbSendQuery(conn, statement, translate = TRUE, ...)
Arguments
conn A DBIConnection object, as returned by dbConnect().
statement a character string containing SQL.
translate Translate the query using SqlRender?
... Other parameters passed on to methods.
Details
This method is for SELECT queries only. Some backends may support data manipulation queries
through this method for compatibility reasons. However, callers are strongly encouraged to use
dbSendStatement() for data manipulation statements.
The query is submitted to the database server and the DBMS executes it, possibly generating vast
amounts of data. Where these data live is driver-specific: some drivers may choose to leave the
output on the server and transfer them piecemeal to R, others may transfer all the data to the client
– but not necessarily to the memory that R manages. See individual drivers' dbSendQuery() documentation for details.
Value
dbSendQuery() returns an S4 object that inherits from DBIResult. The result set can be used with
dbFetch() to extract records. Once you have finished using a result, make sure to clear it with
dbClearResult().
See Also
For updates: dbSendStatement() and dbExecute().
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(),
dbListTables(), dbReadTable(), dbRemoveTable(), dbSendStatement(), dbWriteTable()
dbSendStatement,DatabaseConnectorConnection,character-method
Execute a data manipulation statement on a given database connection
Description
The dbSendStatement() method only submits and synchronously executes the SQL data manipulation statement (e.g., UPDATE, DELETE, INSERT INTO, DROP TABLE, ...) to the database engine. To
query the number of affected rows, call dbGetRowsAffected() on the returned result object. You
must also call dbClearResult() after that. For interactive use, you should almost always prefer
dbExecute().
Usage
## S4 method for signature 'DatabaseConnectorConnection,character'
dbSendStatement(conn, statement, translate = TRUE, ...)
Arguments
conn A DBIConnection object, as returned by dbConnect().
statement a character string containing SQL.
translate Translate the query using SqlRender?
... Other parameters passed on to methods.
Details
dbSendStatement() comes with a default implementation that simply forwards to dbSendQuery(),
to support backends that only implement the latter.
Value
dbSendStatement() returns an S4 object that inherits from DBIResult. The result set can be used
with dbGetRowsAffected() to determine the number of rows affected by the query. Once you have
finished using a result, make sure to clear it with dbClearResult().
See Also
For queries: dbSendQuery() and dbGetQuery().
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(),
dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbWriteTable()
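Examples
A sketch showing dbGetRowsAffected() and dbClearResult() after a data manipulation statement, using the in-memory SQLite backend (assumes the DatabaseConnector and RSQLite packages are installed):

```r
library(DatabaseConnector)

con <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(con, "person", data.frame(person_id = 1:5))

# Delete two of the five rows, then ask how many rows were affected:
res <- dbSendStatement(con, "DELETE FROM person WHERE person_id > 3")
affected <- dbGetRowsAffected(res)
dbClearResult(res)
print(affected)  # 2

disconnect(con)
```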
dbUnloadDriver,DatabaseConnectorDriver-method
Load and unload database drivers
Description
These methods are deprecated, please consult the documentation of the individual backends for the
construction of driver instances.
dbDriver() is a helper method used to create a new driver object given the name of a database
or the corresponding R package. It works through convention: all DBI-extending packages should
provide an exported object with the same name as the package. dbDriver() just looks for this
object in the right places: if you know what database you are connecting to, you should call the
function directly.
dbUnloadDriver() is not implemented for modern backends.
Usage
## S4 method for signature 'DatabaseConnectorDriver'
dbUnloadDriver(drv, ...)
Arguments
drv an object that inherits from DBIDriver as created by dbDriver.
... any other arguments are passed to the driver drvName.
Details
The client part of the database communication is initialized (typically dynamically loading C code,
etc.) but note that connecting to the database engine itself needs to be done through calls to
dbConnect.
Value
In the case of dbDriver, a driver object whose class extends DBIDriver. This object may be used
to create connections to the actual DBMS engine.
In the case of dbUnloadDriver, a logical indicating whether the operation succeeded or not.
See Also
Other DBIDriver generics: DBIDriver-class, dbCanConnect(), dbConnect(), dbDataType(),
dbGetInfo(), dbIsReadOnly(), dbIsValid(), dbListConnections()
dbWriteTable,DatabaseConnectorConnection,ANY-method
Copy data frames to database tables
Description
Writes, overwrites or appends a data frame to a database table, optionally converting row names to
a column and specifying SQL data types for fields.
Usage
## S4 method for signature 'DatabaseConnectorConnection,ANY'
dbWriteTable(
conn,
name,
value,
databaseSchema = NULL,
overwrite = FALSE,
append = FALSE,
temporary = FALSE,
oracleTempSchema = NULL,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
...
)
Arguments
conn A DBIConnection object, as returned by dbConnect().
name The table name, passed on to dbQuoteIdentifier(). Options are:
• a character string with the unquoted DBMS table name, e.g. "table_name",
• a call to Id() with components to the fully qualified table name, e.g. Id(schema
= "my_schema", table = "table_name")
• a call to SQL() with the quoted and fully qualified table name given verbatim, e.g. SQL('"my_schema"."table_name"')
value a data.frame (or coercible to data.frame).
databaseSchema The name of the database schema. See details for platform-specific details.
overwrite Overwrite an existing table (if exists)?
append Append to existing table?
temporary Should the table be created as a temp table?
oracleTempSchema
DEPRECATED: use tempEmulationSchema instead.
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created.
... Other parameters passed on to methods.
Details
The databaseSchema argument is interpreted differently according to the different platforms: SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should specify the schema.
Value
dbWriteTable() returns TRUE, invisibly.
See Also
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(),
dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(),
dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(),
dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement()
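Examples
A sketch of the append = TRUE behavior, using the in-memory SQLite backend (assumes the DatabaseConnector and RSQLite packages are installed):

```r
library(DatabaseConnector)

con <- connect(dbms = "sqlite", server = ":memory:")
df <- data.frame(x = 1:3)

dbWriteTable(con, "demo", df)                 # create the table
dbWriteTable(con, "demo", df, append = TRUE)  # append the same rows again
n <- nrow(dbReadTable(con, "demo"))
print(n)  # 6

disconnect(con)
```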
disconnect Disconnect from the server
Description
Close the connection to the server.
Usage
disconnect(connection)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
Examples
## Not run:
connectionDetails <- createConnectionDetails(
dbms = "postgresql",
server = "localhost",
user = "root",
password = "blah"
)
conn <- connect(connectionDetails)
count <- querySql(conn, "SELECT COUNT(*) FROM person")
disconnect(conn)
## End(Not run)
downloadJdbcDrivers Download DatabaseConnector JDBC Jar files
Description
Download the DatabaseConnector JDBC drivers from https://ohdsi.github.io/DatabaseConnectorJars/
Usage
downloadJdbcDrivers(
dbms,
pathToDriver = Sys.getenv("DATABASECONNECTOR_JAR_FOLDER"),
method = "auto",
...
)
Arguments
dbms The type of DBMS to download Jar files for.
• "postgresql" for PostgreSQL
• "redshift" for Amazon Redshift
• "sql server", "pdw" or "synapse" for Microsoft SQL Server
• "oracle" for Oracle
• "spark" for Spark
• "snowflake" for Snowflake
• "bigquery" for Google BigQuery
• "all" for all aforementioned platforms
pathToDriver The full path to the folder where the JDBC driver .jar files should be downloaded to. By default the value of the environment variable "DATABASECONNECTOR_JAR_FOLDER" is used.
method The method used for downloading files. See ?download.file for details and
options.
... Further arguments passed on to download.file.
Details
The following versions of the JDBC drivers are currently used:
• PostgreSQL: V42.2.18
• RedShift: V2.1.0.9
• SQL Server: V9.2.0
• Oracle: V19.8
• Spark: V2.6.21
• Snowflake: V3.13.22
• BigQuery: v1.3.2.1003
Value
Invisibly returns the destination if the download was successful.
Examples
## Not run:
downloadJdbcDrivers("redshift")
## End(Not run)
dropEmulatedTempTables
Drop all emulated temp tables.
Description
On some DBMSs, like Oracle and BigQuery, DatabaseConnector through SqlRender emulates
temp tables in a schema provided by the user. Ideally, these tables are deleted by the application /
R script creating them, but for various reasons orphan temp tables may remain. This function drops
all emulated temp tables created in this session only.
Usage
dropEmulatedTempTables(
connection,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema")
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created.
Value
Invisibly returns the list of deleted emulated temp tables.
eoMonth Return the end of the month
Description
This function is provided primarily to be used together with dbplyr when querying a database. It
will also work in dplyr against data frames.
Usage
eoMonth(date)
Arguments
date A date in the month for which we need the end.
Value
The date of the last day of the month.
Examples
eoMonth(as.Date("2000-02-01"))
executeSql Execute SQL code
Description
This function executes SQL consisting of one or more statements.
Usage
executeSql(
connection,
sql,
profile = FALSE,
progressBar = !as.logical(Sys.getenv("TESTTHAT", unset = FALSE)),
reportOverallTime = TRUE,
errorReportFile = file.path(getwd(), "errorReportSql.txt"),
runAsBatch = FALSE
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
sql The SQL to be executed
profile When true, each separate statement is written to file prior to sending to the
server, and the time taken to execute a statement is displayed.
progressBar When true, a progress bar is shown based on the statements in the SQL code.
reportOverallTime
When true, the function will display the overall time taken to execute all statements.
errorReportFile
The file where an error report will be written if an error occurs. Defaults to
’errorReportSql.txt’ in the current working directory.
runAsBatch When true the SQL statements are sent to the server as a single batch, and executed there. This will be faster if you have many small SQL statements, but there will be no progress bar, and no per-statement error messages. If the database platform does not support batched updates the query is executed without batching.
Details
This function splits the SQL into separate statements and sends them to the server for execution. If an
error occurs during SQL execution, this error is written to a file to facilitate debugging. Optionally,
a progress bar is shown and the total time taken to execute the SQL is displayed. Optionally, each
separate SQL statement is written to file, and the execution time per statement is shown to aid in
detecting performance issues.
Examples
## Not run:
connectionDetails <- createConnectionDetails(
dbms = "postgresql",
server = "localhost",
user = "root",
password = "blah",
schema = "cdm_v4"
)
conn <- connect(connectionDetails)
executeSql(conn, "CREATE TABLE x (k INT); CREATE TABLE y (k INT);")
disconnect(conn)
## End(Not run)
existsTable Does the table exist?
Description
Checks whether a table exists. Accounts for surrounding escape characters. Case insensitive.
Usage
existsTable(connection, databaseSchema, tableName)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
databaseSchema The name of the database schema. See details for platform-specific details.
tableName The name of the table to check.
Details
The databaseSchema argument is interpreted differently according to the different platforms: SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should specify the schema.
Value
A logical value indicating whether the table exists.
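Examples
A sketch using the in-memory SQLite backend, where "main" is SQLite's default schema (assumes the DatabaseConnector and RSQLite packages are installed):

```r
library(DatabaseConnector)

con <- connect(dbms = "sqlite", server = ":memory:")
insertTable(con, tableName = "person", data = data.frame(person_id = 1:3))

# Case-insensitive check for the table we just created:
found <- existsTable(con, "main", "person")
print(found)

disconnect(con)
```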
extractQueryTimes Extract query times from a ParallelLogger log file
Description
When using the ParallelLogger default file logger, and using options(LOG_DATABASECONNECTOR_SQL
= TRUE), DatabaseConnector will log all SQL sent to the server, and the time to get a response.
This function parses the log file, producing a data frame with time per query.
Usage
extractQueryTimes(logFileName)
Arguments
logFileName Name of the ParallelLogger log file. Assumes the file was created using the
default file logger.
Value
A data frame with queries and their run times in milliseconds.
Examples
connection <- connect(dbms = "sqlite", server = ":memory:")
logFile <- tempfile(fileext = ".log")
ParallelLogger::addDefaultFileLogger(fileName = logFile, name = "MY_LOGGER")
options(LOG_DATABASECONNECTOR_SQL = TRUE)
executeSql(connection, "CREATE TABLE test (x INT);")
querySql(connection, "SELECT * FROM test;")
extractQueryTimes(logFile)
ParallelLogger::unregisterLogger("MY_LOGGER")
unlink(logFile)
disconnect(connection)
getAvailableJavaHeapSpace
Get available Java heap space
Description
For debugging purposes: get the available Java heap space.
Usage
getAvailableJavaHeapSpace()
Value
The Java heap space (in bytes).
getTableNames List all tables in a database schema.
Description
This function returns a list of all tables in a database schema.
Usage
getTableNames(connection, databaseSchema = NULL, cast = "lower")
Arguments
connection The connection to the database server created using either connect() or dbConnect().
databaseSchema The name of the database schema. See details for platform-specific details.
cast Should the table names be cast to uppercase or lowercase before being returned?
Valid options are "upper", "lower" (default), or "none" (no casting is done).
Details
The databaseSchema argument is interpreted differently according to the different platforms: SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should specify the schema.
Value
A character vector of table names.
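Examples
A sketch using the in-memory SQLite backend (assumes the DatabaseConnector and RSQLite packages are installed):

```r
library(DatabaseConnector)

con <- connect(dbms = "sqlite", server = ":memory:")
insertTable(con, tableName = "person", data = data.frame(person_id = 1))
insertTable(con, tableName = "visit", data = data.frame(visit_id = 1))

# With the default cast = "lower", names come back in lowercase:
tables <- getTableNames(con)
print(tables)  # e.g. "person" "visit"

disconnect(con)
```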
inDatabaseSchema Refer to a table in a database schema
Description
Can be used with dplyr::tbl() to indicate a table in a specific database schema.
Usage
inDatabaseSchema(databaseSchema, table)
Arguments
databaseSchema The name of the database schema. See details for platform-specific details.
table The name of the table in the database schema.
Details
The databaseSchema argument is interpreted differently according to the different platforms: SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should specify the schema.
Value
An object representing the table and database schema.
insertTable Insert a table on the server
Description
This function sends the data in a data frame to a table on the server. Either a new table is created,
or the data is appended to an existing table.
Usage
insertTable(
connection,
databaseSchema = NULL,
tableName,
data,
dropTableIfExists = TRUE,
createTable = TRUE,
tempTable = FALSE,
oracleTempSchema = NULL,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
bulkLoad = Sys.getenv("DATABASE_CONNECTOR_BULK_UPLOAD"),
useMppBulkLoad = Sys.getenv("USE_MPP_BULK_LOAD"),
progressBar = FALSE,
camelCaseToSnakeCase = FALSE
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
databaseSchema The name of the database schema. See details for platform-specific details.
tableName The name of the table where the data should be inserted.
data The data frame containing the data to be inserted.
dropTableIfExists
Drop the table if the table already exists before writing?
createTable Create a new table? If false, will append to existing table.
tempTable Should the table be created as a temp table?
oracleTempSchema
DEPRECATED: use tempEmulationSchema instead.
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created.
bulkLoad If using Redshift, PDW, Hive or Postgres, use more performant bulk loading
techniques. Does not work for temp tables (except for HIVE). See Details for
requirements for the various platforms.
useMppBulkLoad DEPRECATED. Use bulkLoad instead.
progressBar Show a progress bar when uploading?
camelCaseToSnakeCase
If TRUE, the data frame column names are assumed to use camelCase and are
converted to snake_case before uploading.
Details
The databaseSchema argument is interpreted differently according to the different platforms: SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should specify the schema.
This function sends the data in a data frame to a table on the server. Either a new table is created,
or the data is appended to an existing table. NA values are inserted as null values in the database.
Bulk uploading:
Redshift: The MPP bulk loading relies upon the CloudyR S3 library to test a connection to an S3 bucket using AWS S3 credentials. Credentials are configured directly into the System Environment using the following keys: Sys.setenv("AWS_ACCESS_KEY_ID" = "some_access_key_id", "AWS_SECRET_ACCESS_KEY" = "some_secret_access_key", "AWS_DEFAULT_REGION" = "some_aws_region", "AWS_BUCKET_NAME" = "some_bucket_name", "AWS_OBJECT_KEY" = "some_object_key", "AWS_SSE_TYPE" = "server_side_encryption_type").
PDW: The MPP bulk loading relies upon the client having a Windows OS and the DWLoader exe installed, and the following permissions granted: --Grant BULK Load permissions (needed at a server level): USE master; GRANT ADMINISTER BULK OPERATIONS TO user; --Grant Staging database permissions (we will use the user db): USE scratch; EXEC sp_addrolemember 'db_ddladmin', user; Set the R environment variable DWLOADER_PATH to the location of the binary.
PostgreSQL: Uses the ’psql’ executable to upload. Set the POSTGRES_PATH environment variable
to the Postgres binary path, e.g. ’C:/Program Files/PostgreSQL/11/bin’.
Examples
## Not run:
connectionDetails <- createConnectionDetails(
dbms = "mysql",
server = "localhost",
user = "root",
password = "blah"
)
conn <- connect(connectionDetails)
data <- data.frame(x = c(1, 2, 3), y = c("a", "b", "c"))
insertTable(conn, "my_schema", "my_table", data)
disconnect(conn)
## bulk data insert with Redshift or PDW
connectionDetails <- createConnectionDetails(
dbms = "redshift",
server = "localhost",
user = "root",
password = "blah",
schema = "cdm_v5"
)
conn <- connect(connectionDetails)
data <- data.frame(x = c(1, 2, 3), y = c("a", "b", "c"))
insertTable(
connection = conn,
databaseSchema = "scratch",
tableName = "somedata",
data = data,
dropTableIfExists = TRUE,
createTable = TRUE,
tempTable = FALSE,
bulkLoad = TRUE
) # or, Sys.setenv("DATABASE_CONNECTOR_BULK_UPLOAD" = TRUE)
## End(Not run)
isSqlReservedWord Test a character vector of SQL names for SQL reserved words
Description
This function checks a character vector against a predefined list of reserved SQL words.
Usage
isSqlReservedWord(sqlNames, warn = FALSE)
Arguments
sqlNames A character vector containing table or field names to check.
warn (logical) Should a warning be thrown if reserved SQL names are found?
Value
A logical vector with length equal to sqlNames that is TRUE for each name that is reserved and
FALSE otherwise
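Examples
A minimal sketch; "SELECT" is a reserved SQL keyword, whereas "person_id" is a safe identifier (assumes the DatabaseConnector package is installed):

```r
library(DatabaseConnector)

# Check two candidate names against the reserved-word list:
flags <- isSqlReservedWord(c("SELECT", "person_id"))
print(flags)
```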
jdbcDrivers How to download and use JDBC drivers for the various data platforms.
Description
Below are instructions for downloading JDBC drivers for the various data platforms. Once downloaded, use the pathToDriver argument in the connect() or createConnectionDetails() functions to point to the driver. Alternatively, you can set the 'DATABASECONNECTOR_JAR_FOLDER' environmental variable, for example in your .Renviron file (recommended).
SQL Server, Oracle, PostgreSQL, PDW, Snowflake, Spark, RedShift, Azure Synapse, BigQuery
Use the downloadJdbcDrivers() function to download these drivers from the OHDSI GitHub
pages.
Netezza
Read the instructions here on how to obtain the Netezza JDBC driver.
Impala
Go to Cloudera's site, pick your OS version, and click "GET IT NOW!". Register, and you should be able to download the driver.
SQLite
For SQLite we actually don’t use a JDBC driver. Instead, we use the RSQLite package, which can
be installed using install.packages("RSQLite").
lowLevelExecuteSql Execute SQL code
Description
This function executes a single SQL statement.
Usage
lowLevelExecuteSql(connection, sql)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
sql The SQL to be executed
lowLevelQuerySql Low level function for retrieving data to a data frame
Description
This is the equivalent of the querySql() function, except no error report is written when an error
occurs.
Usage
lowLevelQuerySql(
connection,
query,
datesAsString = FALSE,
integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE),
integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE)
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
query The SQL statement to retrieve the data
datesAsString Logical: Should dates be imported as character vectors, or should they be converted to R's date format?
integerAsNumeric
Logical: should 32-bit integers be converted to numeric (double) values? If
FALSE 32-bit integers will be represented using R’s native Integer class.
integer64AsNumeric
Logical: should 64-bit integers be converted to numeric (double) values? If
FALSE 64-bit integers will be represented using bit64::integer64.
Details
Retrieves data from the database server and stores it in a data frame. Null values in the database are
converted to NA values in R.
Value
A data frame containing the data retrieved from the server
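Examples
A sketch using the in-memory SQLite backend (assumes the DatabaseConnector and RSQLite packages are installed):

```r
library(DatabaseConnector)

con <- connect(dbms = "sqlite", server = ":memory:")
executeSql(con,
           "CREATE TABLE test (x INT); INSERT INTO test (x) VALUES (1), (2);",
           progressBar = FALSE,
           reportOverallTime = FALSE)

# Retrieve the rows without an error report file being written on failure:
df <- lowLevelQuerySql(con, "SELECT x FROM test;")
print(nrow(df))  # 2

disconnect(con)
```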
lowLevelQuerySqlToAndromeda
Low level function for retrieving data to a local Andromeda object
Description
This is the equivalent of the querySqlToAndromeda() function, except no error report is written
when an error occurs.
Usage
lowLevelQuerySqlToAndromeda(
connection,
query,
andromeda,
andromedaTableName,
datesAsString = FALSE,
appendToTable = FALSE,
snakeCaseToCamelCase = FALSE,
integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE),
integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE)
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
query The SQL statement to retrieve the data
andromeda An open Andromeda object, for example as created using Andromeda::andromeda().
andromedaTableName
The name of the table in the local Andromeda object where the results of the
query will be stored.
datesAsString Should dates be imported as character vectors, or should they be converted to R's date format?
appendToTable If FALSE, any existing table in the Andromeda with the same name will be
replaced with the new data. If TRUE, data will be appended to an existing table,
assuming it has the exact same structure.
snakeCaseToCamelCase
If true, field names are assumed to use snake_case, and are converted to camelCase.
integerAsNumeric
Logical: should 32-bit integers be converted to numeric (double) values? If
FALSE 32-bit integers will be represented using R’s native Integer class.
integer64AsNumeric
Logical: should 64-bit integers be converted to numeric (double) values? If
FALSE 64-bit integers will be represented using bit64::integer64.
Details
Retrieves data from the database server and stores it in a local Andromeda object. This allows very
large data sets to be retrieved without running out of memory. Null values in the database are
converted to NA values in R. If a table with the same name already exists in the local Andromeda
object it is replaced.
Value
Invisibly returns the andromeda. The Andromeda object will have a table added with the query
results.
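No Examples section accompanies this function in the manual; a minimal sketch in the style of the other examples (the connection details shown are placeholders, and the query assumes a CDM-style person table) could be:

```r
## Not run:
connectionDetails <- createConnectionDetails(
  dbms = "postgresql",
  server = "localhost",
  user = "root",
  password = "blah"
)
conn <- connect(connectionDetails)
andromeda <- Andromeda::andromeda()
lowLevelQuerySqlToAndromeda(
  connection = conn,
  query = "SELECT COUNT(*) AS n FROM person",
  andromeda = andromeda,
  andromedaTableName = "counts"
)
disconnect(conn)
## End(Not run)
```

Note that, unlike querySqlToAndromeda(), no error report file is written if the query fails.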
month Extract the month from a date
Description
This function is provided primarily to be used together with dbplyr when querying a database. It
will also work in dplyr against data frames.
Usage
month(date)
Arguments
date The date.
Value
The month
Examples
month(as.Date("2000-02-01"))
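Because month() mirrors the SQL function for dbplyr translation, it can also be used directly inside a dplyr pipeline on a local data frame; a small sketch (assuming dplyr is attached):

```r
library(dplyr)
df <- data.frame(d = as.Date(c("2000-02-01", "2000-11-15")))
# month() is applied element-wise, yielding the month number for each date
df %>% mutate(m = month(d))
```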
querySql Retrieve data to a data.frame
Description
This function sends SQL to the server, and returns the results.
Usage
querySql(
connection,
sql,
errorReportFile = file.path(getwd(), "errorReportSql.txt"),
snakeCaseToCamelCase = FALSE,
integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE),
integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE)
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
sql The SQL to be sent.
errorReportFile
The file where an error report will be written if an error occurs. Defaults to
’errorReportSql.txt’ in the current working directory.
snakeCaseToCamelCase
If true, field names are assumed to use snake_case, and are converted to camelCase.
integerAsNumeric
Logical: should 32-bit integers be converted to numeric (double) values? If
FALSE 32-bit integers will be represented using R’s native Integer class.
integer64AsNumeric
Logical: should 64-bit integers be converted to numeric (double) values? If
FALSE 64-bit integers will be represented using bit64::integer64.
Details
This function sends the SQL to the server and retrieves the results. If an error occurs during SQL
execution, this error is written to a file to facilitate debugging. Null values in the database are
converted to NA values in R.
Value
A data frame.
Examples
## Not run:
connectionDetails <- createConnectionDetails(
dbms = "postgresql",
server = "localhost",
user = "root",
password = "blah",
schema = "cdm_v4"
)
conn <- connect(connectionDetails)
count <- querySql(conn, "SELECT COUNT(*) FROM person")
disconnect(conn)
## End(Not run)
querySqlToAndromeda Retrieves data to a local Andromeda object
Description
This function sends SQL to the server, and returns the results in a local Andromeda object
Usage
querySqlToAndromeda(
connection,
sql,
andromeda,
andromedaTableName,
errorReportFile = file.path(getwd(), "errorReportSql.txt"),
snakeCaseToCamelCase = FALSE,
appendToTable = FALSE,
integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE),
integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE)
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
sql The SQL to be sent.
andromeda An open Andromeda object, for example as created using Andromeda::andromeda().
andromedaTableName
The name of the table in the local Andromeda object where the results of the
query will be stored.
errorReportFile
The file where an error report will be written if an error occurs. Defaults to
’errorReportSql.txt’ in the current working directory.
snakeCaseToCamelCase
If true, field names are assumed to use snake_case, and are converted to camelCase.
appendToTable If FALSE, any existing table in the Andromeda with the same name will be
replaced with the new data. If TRUE, data will be appended to an existing table,
assuming it has the exact same structure.
integerAsNumeric
Logical: should 32-bit integers be converted to numeric (double) values? If
FALSE 32-bit integers will be represented using R’s native Integer class.
integer64AsNumeric
Logical: should 64-bit integers be converted to numeric (double) values? If
FALSE 64-bit integers will be represented using bit64::integer64.
Details
Retrieves data from the database server and stores it in a local Andromeda object. This allows
very large data sets to be retrieved without running out of memory. If an error occurs during SQL
execution, this error is written to a file to facilitate debugging. Null values in the database are
converted to NA values in R. If a table with the same name already exists in the local Andromeda
object it is replaced.
Value
Invisibly returns the andromeda. The Andromeda object will have a table added with the query
results.
Examples
## Not run:
andromeda <- Andromeda::andromeda()
connectionDetails <- createConnectionDetails(
dbms = "postgresql",
server = "localhost",
user = "root",
password = "blah",
schema = "cdm_v4"
)
conn <- connect(connectionDetails)
querySqlToAndromeda(
connection = conn,
sql = "SELECT * FROM person;",
andromeda = andromeda,
andromedaTableName = "foo"
)
disconnect(conn)
andromeda$foo
## End(Not run)
renderTranslateExecuteSql
Render, translate, execute SQL code
Description
This function renders, translates, and executes SQL consisting of one or more statements.
Usage
renderTranslateExecuteSql(
connection,
sql,
profile = FALSE,
progressBar = TRUE,
reportOverallTime = TRUE,
errorReportFile = file.path(getwd(), "errorReportSql.txt"),
runAsBatch = FALSE,
oracleTempSchema = NULL,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
...
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
sql The SQL to be executed
profile When true, each separate statement is written to file prior to sending to the
server, and the time taken to execute a statement is displayed.
progressBar When true, a progress bar is shown based on the statements in the SQL code.
reportOverallTime
When true, the function will display the overall time taken to execute all statements.
errorReportFile
The file where an error report will be written if an error occurs. Defaults to
’errorReportSql.txt’ in the current working directory.
runAsBatch When true the SQL statements are sent to the server as a single batch, and executed
there. This will be faster if you have many small SQL statements, but there
will be no progress bar, and no per-statement error messages. If the database
platform does not support batched updates, the query is executed ordinarily.
oracleTempSchema
DEPRECATED: use tempEmulationSchema instead.
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp tables.
To emulate temp tables, provide a schema with write privileges where temp tables
can be created.
... Parameters that will be used to render the SQL.
Details
This function calls the render and translate functions in the SqlRender package before calling
executeSql().
Examples
## Not run:
connectionDetails <- createConnectionDetails(
dbms = "postgresql",
server = "localhost",
user = "root",
password = "blah",
schema = "cdm_v4"
)
conn <- connect(connectionDetails)
renderTranslateExecuteSql(connection,
sql = "SELECT * INTO #temp FROM @schema.person;",
schema = "cdm_synpuf"
)
disconnect(conn)
## End(Not run)
renderTranslateQueryApplyBatched
Render, translate, and perform process to batches of data.
Description
This function renders and translates SQL, sends it to the server, and processes the data in batches
with a callback function. Note that this function should perform a row-wise operation. This is
designed to work with massive data that won’t fit into memory.
The batch sizes are determined by the Java virtual machine and will depend on the data.
Usage
renderTranslateQueryApplyBatched(
connection,
sql,
fun,
args = list(),
errorReportFile = file.path(getwd(), "errorReportSql.txt"),
snakeCaseToCamelCase = FALSE,
oracleTempSchema = NULL,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE),
integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE),
...
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
sql The SQL to be sent.
fun Function to apply to batch. Must take data.frame and integer position as param-
eters.
args List of arguments to be passed to function call.
errorReportFile
The file where an error report will be written if an error occurs. Defaults to
’errorReportSql.txt’ in the current working directory.
snakeCaseToCamelCase
If true, field names are assumed to use snake_case, and are converted to camelCase.
oracleTempSchema
DEPRECATED: use tempEmulationSchema instead.
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp tables.
To emulate temp tables, provide a schema with write privileges where temp tables
can be created.
integerAsNumeric
Logical: should 32-bit integers be converted to numeric (double) values? If
FALSE 32-bit integers will be represented using R’s native Integer class.
integer64AsNumeric
Logical: should 64-bit integers be converted to numeric (double) values? If
FALSE 64-bit integers will be represented using bit64::integer64.
... Parameters that will be used to render the SQL.
Details
This function calls the render and translate functions in the SqlRender package before calling
querySql().
Value
Invisibly returns a list of outputs from each call to the provided function.
Examples
## Not run:
connectionDetails <- createConnectionDetails(
dbms = "postgresql",
server = "localhost",
user = "root",
password = "blah",
schema = "cdm_v4"
)
connection <- connect(connectionDetails)
# First example: write data to a large CSV file:
filepath <- "myBigFile.csv"
writeBatchesToCsv <- function(data, position, ...) {
  # write.csv() ignores the 'append' argument; use write.table() so that
  # batches after the first are appended without repeating the header
  write.table(data, filepath, sep = ",", row.names = FALSE,
              col.names = position == 1, append = position != 1)
  return(NULL)
}
renderTranslateQueryApplyBatched(connection,
"SELECT * FROM @schema.person;",
schema = "cdm_synpuf",
fun = writeBatchesToCsv
)
# Second example: write data to Andromeda
# (Alternative to querySqlToAndromeda if some local computation needs to be applied)
bigResults <- Andromeda::andromeda()
writeBatchesToAndromeda <- function(data, position, ...) {
data$p <- EmpiricalCalibration::computeTraditionalP(data$logRr, data$logSeRr)
if (position == 1) {
bigResults$rrs <- data
} else {
Andromeda::appendToTable(bigResults$rrs, data)
}
return(NULL)
}
sql <- "SELECT target_id, comparator_id, log_rr, log_se_rr FROM @schema.my_results;"
renderTranslateQueryApplyBatched(connection,
sql,
fun = writeBatchesToAndromeda,
schema = "my_results",
snakeCaseToCamelCase = TRUE
)
disconnect(connection)
## End(Not run)
renderTranslateQuerySql
Render, translate, and query to data.frame
Description
This function renders and translates SQL, sends it to the server, and returns the results as a
data.frame.
Usage
renderTranslateQuerySql(
connection,
sql,
errorReportFile = file.path(getwd(), "errorReportSql.txt"),
snakeCaseToCamelCase = FALSE,
oracleTempSchema = NULL,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE),
integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE),
...
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
sql The SQL to be sent.
errorReportFile
The file where an error report will be written if an error occurs. Defaults to
’errorReportSql.txt’ in the current working directory.
snakeCaseToCamelCase
If true, field names are assumed to use snake_case, and are converted to camelCase.
oracleTempSchema
DEPRECATED: use tempEmulationSchema instead.
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp tables.
To emulate temp tables, provide a schema with write privileges where temp tables
can be created.
integerAsNumeric
Logical: should 32-bit integers be converted to numeric (double) values? If
FALSE 32-bit integers will be represented using R’s native Integer class.
integer64AsNumeric
Logical: should 64-bit integers be converted to numeric (double) values? If
FALSE 64-bit integers will be represented using bit64::integer64.
... Parameters that will be used to render the SQL.
Details
This function calls the render and translate functions in the SqlRender package before calling
querySql().
Value
A data frame.
Examples
## Not run:
connectionDetails <- createConnectionDetails(
dbms = "postgresql",
server = "localhost",
user = "root",
password = "blah",
schema = "cdm_v4"
)
conn <- connect(connectionDetails)
persons <- renderTranslateQuerySql(conn,
sql = "SELECT TOP 10 * FROM @schema.person",
schema = "cdm_synpuf"
)
disconnect(conn)
## End(Not run)
renderTranslateQuerySqlToAndromeda
Render, translate, and query to local Andromeda
Description
This function renders and translates SQL, sends it to the server, and returns the results in a local
Andromeda object.
Usage
renderTranslateQuerySqlToAndromeda(
connection,
sql,
andromeda,
andromedaTableName,
errorReportFile = file.path(getwd(), "errorReportSql.txt"),
snakeCaseToCamelCase = FALSE,
appendToTable = FALSE,
oracleTempSchema = NULL,
tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE),
integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE),
...
)
Arguments
connection The connection to the database server created using either connect() or dbConnect().
sql The SQL to be sent.
andromeda An open Andromeda object, for example as created using Andromeda::andromeda().
andromedaTableName
The name of the table in the local Andromeda object where the results of the
query will be stored.
errorReportFile
The file where an error report will be written if an error occurs. Defaults to
’errorReportSql.txt’ in the current working directory.
snakeCaseToCamelCase
If true, field names are assumed to use snake_case, and are converted to camelCase.
appendToTable If FALSE, any existing table in the Andromeda with the same name will be
replaced with the new data. If TRUE, data will be appended to an existing table,
assuming it has the exact same structure.
oracleTempSchema
DEPRECATED: use tempEmulationSchema instead.
tempEmulationSchema
Some database platforms like Oracle and Impala do not truly support temp tables.
To emulate temp tables, provide a schema with write privileges where temp tables
can be created.
integerAsNumeric
Logical: should 32-bit integers be converted to numeric (double) values? If
FALSE 32-bit integers will be represented using R’s native Integer class.
integer64AsNumeric
Logical: should 64-bit integers be converted to numeric (double) values? If
FALSE 64-bit integers will be represented using bit64::integer64.
... Parameters that will be used to render the SQL.
Details
This function calls the render and translate functions in the SqlRender package before calling
querySqlToAndromeda().
Value
Invisibly returns the andromeda. The Andromeda object will have a table added with the query
results.
Examples
## Not run:
connectionDetails <- createConnectionDetails(
dbms = "postgresql",
server = "localhost",
user = "root",
password = "blah",
schema = "cdm_v4"
)
conn <- connect(connectionDetails)
andromeda <- Andromeda::andromeda()
renderTranslateQuerySqlToAndromeda(conn,
sql = "SELECT * FROM @schema.person",
schema = "cdm_synpuf",
andromeda = andromeda,
andromedaTableName = "foo"
)
disconnect(conn)
andromeda$foo
## End(Not run)
requiresTempEmulation Does the DBMS require temp table emulation?
Description
Does the DBMS require temp table emulation?
Usage
requiresTempEmulation(dbms)
Arguments
dbms The type of DBMS running on the server. See connect() or createConnectionDetails()
for valid values.
Value
TRUE if the DBMS requires temp table emulation, FALSE otherwise.
Examples
requiresTempEmulation("postgresql")
requiresTempEmulation("oracle")
year Extract the year from a date
Description
This function is provided primarily to be used together with dbplyr when querying a database. It
will also work in dplyr against data frames.
Usage
year(date)
Arguments
date The date.
Value
The year
Examples
year(as.Date("2000-02-01")) |
BrainCon | cran | R | Package ‘BrainCon’
May 22, 2023
Type Package
Title Inference the Partial Correlations Based on Time Series Data
Version 0.3.0
Author <NAME> [aut, cre],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Description A statistical tool to infer the multi-level partial correlations based on multi-subject
time series data, especially for brain functional connectivity. It combines both individual and
population level inference by using the methods of Qiu and Zhou (2021) <DOI:10.1080/01621459.2021.1917417>
and Genovese and Wasserman (2006) <DOI:10.1198/016214506000000339>. It realizes two reliable
estimation methods of partial correlation coefficients, using scaled lasso and lasso. It can be used
to estimate individual- or population-level partial correlations, identify nonzero ones, and find out
unequal partial correlation coefficients between two populations.
License GPL (>= 2)
Encoding UTF-8
LazyData true
RoxygenNote 7.1.1
Imports glmnet, MASS
Depends R (>= 2.10)
NeedsCompilation no
Repository CRAN
Date/Publication 2023-05-22 06:50:02 UTC
R topics documented:
individual.es... 2
individual.tes... 3
indsi... 5
popsim... 6
popsim... 7
population.es... 8
population.tes... 9
population.test.MinP... 10
population2sample.tes... 11
population2sample.test.MinP... 13
individual.est Estimate individual-level partial correlation coefficients
Description
Estimate individual-level partial correlation coefficients in time series data with 1 − α confidence
intervals. Note that these are confidence intervals for single parameters, not simultaneous confi-
dence intervals.
Usage
individual.est(
X,
lambda = NULL,
type = c("slasso", "lasso"),
alpha = 0.05,
ci = TRUE
)
Arguments
X time series data of an individual which is a n ∗ p numeric matrix, where n is the
number of periods of time and p is the number of variables.
lambda a penalty parameter of order √(log(p)/n). If NULL, √(2 ∗ 2.01/n ∗ log(p ∗ (log(p))^1.5 /n^0.5))
is used in scaled lasso, and √(2 ∗ log(p)/n) is used in lasso. Increasing the
penalty parameter may lead to larger residuals in the node-wise regression, causing
larger absolute values of estimates of partial correlation coefficients, which
may cause more false positives in subsequent tests.
type a character string representing the method of estimation. "slasso" means scaled
lasso, and "lasso" means lasso. Default value is "slasso".
alpha significance level, default value is 0.05.
ci a logical indicating whether to compute 1 − α confidence interval, default value
is TRUE.
Value
An indEst class object containing three or five components.
coef a p ∗ p partial correlation coefficients matrix.
ci.lower a p ∗ p numeric matrix containing the lower bound of 1 − α confidence interval, returned
if ci is TRUE.
ci.upper a p ∗ p numeric matrix containing the upper bound of 1 − α confidence interval, returned
if ci is TRUE.
asym.ex a matrix measuring the asymptotic expansion of estimates, which will be used for multiple
tests.
type regression type in estimation.
References
<NAME>. and <NAME>. (2021). Inference on multi-level partial correlations based on multi-subject
time series data, Journal of the American Statistical Association, 00, 1-15.
<NAME>. and <NAME>. (2012). Scaled Sparse Linear Regression, Biometrika, 99, 879–898.
<NAME>. (2013). Gaussian Graphical Model Estimation With False Discovery Rate Control, The
Annals of Statistics, 41, 2948–2978.
<NAME>., <NAME>., <NAME>. and <NAME>. (2015). Asymptotic Normality and Optimalities in Estimation
of Large Gaussian Graphical Models, The Annals of Statistics, 43, 991–1026.
See Also
population.est.
Examples
## Quick example for the individual-level estimates
data(indsim)
# estimating partial correlation coefficients by scaled lasso
pc = individual.est(indsim)
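Continuing that example, the components documented under Value can be inspected directly; a short sketch (assuming the default ci = TRUE):

```r
data(indsim)
pc <- individual.est(indsim)
# point estimate and 1 - alpha confidence bounds for the partial
# correlation between the first two variables
pc$coef[1, 2]
pc$ci.lower[1, 2]
pc$ci.upper[1, 2]
```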
individual.test Identify nonzero individual-level partial correlations
Description
Identify nonzero individual-level partial correlations in time series data by controlling the rate of
the false discovery proportion (FDP) exceeding c0 at α, considering time dependence. Input an
indEst class object returned by individual.est or population.est.
Usage
individual.test(
indEst,
alpha = 0.05,
c0 = 0.1,
targetSet = NULL,
MBT = 3000,
simplify = !is.null(targetSet)
)
Arguments
indEst An indEst class object.
alpha significance level, default value is 0.05.
c0 threshold of the exceedance rate of FDP, default value is 0.1. The choice of
c0 depends on the empirical problem. A smaller value of c0 will reduce false
positives, but it may also cost more false negatives.
targetSet a two-column matrix. Each row contains two indices corresponding to a pair of
variables of interest. If NULL, any pair of two variables is considered to be of
interest.
MBT times of multiplier bootstrap, default value is 3000.
simplify a logical indicating whether results should be simplified if possible.
Value
If simplify is FALSE, a p∗p matrix with values 0 or 1 is returned. If the j-th row and k-th column of
the matrix is 1, then the partial correlation coefficient between the j-th variable and the k-th variable
is identified to be nonzero.
And if simplify is TRUE, a two-column matrix is returned, indicating the row index and the column
index of recovered nonzero partial correlations. We only retain results for which the row index is
less than the column index. Those with larger test statistics are sorted first.
References
<NAME>. and <NAME>. (2021). Inference on multi-level partial correlations based on multi-subject
time series data, Journal of the American Statistical Association, 00, 1-15.
See Also
population.est for making inferences on one individual in the population.
Examples
## Quick example for the individual-level inference
data(indsim)
# estimating partial correlation coefficients by scaled lasso
pc = individual.est(indsim)
# conducting hypothesis test
Res = individual.test(pc)
indsim Simulation time series data for individual
Description
A dataset containing values of 10 interested variables over 50 periods.
Usage
indsim
Format
An object of class matrix (inherits from array) with 50 rows and 10 columns.
Examples
## Generated by the following R codes
set.seed(1000)
n = 50; p = 10
Precision = diag(rep(2, p)) # generate precision matrix
for (i in 1 : (p - 1)){
temp = ifelse(i > 2 * p / 3, 0.4, 1)
Precision[i, i + 1] = temp
Precision[i + 1, i] = temp
}
# R=-cov2cor(Precision) + diag(rep(2, p)) # real partial correlation matrix
Sigma = solve(Precision) # generate covariance matrix
rho = 0.5
y = matrix(0, n, p) # generate observed time series data
Epsilon = MASS::mvrnorm(n, rep(0, p), Sigma)
y[1, ] = Epsilon[1, ]
for (i in 2 : n){
y[i, ] = rho * y[i - 1, ] + sqrt(1 - rho^2) * Epsilon[i, ]
}
indsim = y
popsimA Simulation time series data for population A
Description
A dataset containing values of 10 interested variables of 20 subjects over 50 periods.
Usage
popsimA
Format
An object of class array of dimension 50 x 10 x 20.
See Also
popsimB.
Examples
## Generated by the following R codes
set.seed(1234)
n = 50; p = 10; m1 = 20; m2 = 10
Precision1 = Precision2 = diag(rep(1, p)) # generate Precision matrix for population
for (i in 1 : (p - 1)){
temp1 = ifelse(i > 2 * p / 3, -0.2, 0.4)
temp2 = ifelse(i < p / 3, 0.4, -0.2)
Precision1[i, i + 1] = Precision1[i + 1, i] = temp1
Precision2[i, i + 1] = Precision2[i + 1, i] = temp2
}
# R1=-cov2cor(Precision1) + diag(rep(2, p)) # real partial correlation matrix
# R2=-cov2cor(Precision2) + diag(rep(2, p))
Index = matrix(0, p, p) # generate covariance matrix for each subject
for (i in 1 : p){
for (j in 1 : p){
if (i != j & abs(i - j) <= 3) Index[i, j] = 1
}
}
SigmaAll1 = array(dim = c(p, p, m1))
SigmaAll2 = array(dim = c(p, p, m2))
for (sub in 1 : m1){
RE = matrix(rnorm(p^2, 0, sqrt(2) * 0.05), p, p) * Index
RE1 = (RE + t(RE)) / 2
PrecisionInd = Precision1 + RE1
SigmaAll1[, , sub] = solve(PrecisionInd)
}
for (sub in 1 : m2){
RE = matrix(rnorm(p^2, 0, sqrt(2) * 0.15), p, p) * Index
RE1 = (RE + t(RE)) / 2
PrecisionInd = Precision2 + RE1
SigmaAll2[, , sub] = solve(PrecisionInd)
}
rho = 0.3 # generate observed time series data
y1 = array(dim = c(n, p, m1))
y2 = array(dim = c(n, p, m2))
for (sub in 1 : m1){
SigmaInd1 = SigmaAll1[, , sub]
ytemp = matrix(0, n, p)
Epsilon = MASS::mvrnorm(n, rep(0, p), SigmaInd1)
ytemp[1, ] = Epsilon[1, ]
for (i in 2 : n){
ytemp[i, ] = rho * ytemp[i - 1, ] + sqrt(1 - rho^2) * Epsilon[i, ]
}
y1[, , sub] = ytemp
}
for (sub in 1 : m2){
SigmaInd2 = SigmaAll2[, , sub]
ytemp = matrix(0, n, p)
Epsilon = MASS::mvrnorm(n, rep(0, p), SigmaInd2)
ytemp[1, ] = Epsilon[1, ]
for (i in 2 : n){
ytemp[i, ] = rho * ytemp[i - 1, ] + sqrt(1 - rho^2) * Epsilon[i, ]
}
y2[, , sub] = ytemp
}
popsimA = y1
popsimB = y2
popsimB Simulation time series data for population B
Description
A dataset containing values of 10 interested variables of 10 subjects over 50 periods.
Usage
popsimB
Format
An object of class array of dimension 50 x 10 x 10.
See Also
popsimA.
population.est Estimate population-level partial correlation coefficients
Description
Estimate population-level partial correlation coefficients in time series data, and also return coefficients
for each individual. Input time series data for population as a 3-dimensional array or a list.
Usage
population.est(
Z,
lambda = NULL,
type = c("slasso", "lasso"),
alpha = 0.05,
ind.ci = FALSE
)
Arguments
Z If each individual shares the same number of periods of time, Z can be a n∗p∗m
dimensional array, where m is the number of individuals. In general, Z should be a
m-length list, and each element in the list is a ni ∗ p matrix, where ni stands for
the number of periods of time of the i-th individual.
lambda a scalar or a m-length vector, representing the penalty parameters of order √(log(p)/ni)
for each individual. If a scalar, the penalty parameters used in each individual
are the same. If a m-length vector, the penalty parameters for each individual
are specified in order. And if NULL, penalty parameters are specified by type.
More details about the penalty parameters are in individual.est.
type a character string representing the method of estimation. "slasso" means scaled
lasso, and "lasso" means lasso. Default value is "slasso".
alpha a numeric scalar, default value is 0.05. It is used when ind.ci is TRUE.
ind.ci a logical indicating whether to compute 1 − α confidence intervals of each sub-
ject, default value is FALSE.
Value
A popEst class object containing three components.
coef a p ∗ p partial correlation coefficients matrix.
ind.est a m-length list, containing estimates for each individuals.
type regression type in estimation.
References
<NAME>. and <NAME>. (2021). Inference on multi-level partial correlations based on multi-subject
time series data, Journal of the American Statistical Association, 00, 1-15.
Examples
## Quick example for the population-level estimates
data(popsimA)
# estimating partial correlation coefficients by scaled lasso
pc = population.est(popsimA)
## Inference on the first subject in population
Res_1 = individual.test(pc$ind.est[[1]])
population.test The one-sample population inference
Description
Identify the nonzero partial correlations in one-sample population, based on controlling the rate
of the false discovery proportion (FDP) exceeding c0 at α, considering time dependence. Input a
popEst class object returned by population.est.
Usage
population.test(
popEst,
alpha = 0.05,
c0 = 0.1,
targetSet = NULL,
MBT = 5000,
simplify = !is.null(targetSet)
)
Arguments
popEst A popEst class object.
alpha significance level, default value is 0.05.
c0 threshold of the exceedance rate of FDP, default value is 0.1. A smaller value
of c0 will reduce false positives, but it may also cost more false negatives.
targetSet a two-column matrix. Each row contains two indices corresponding to a pair of
variables of interest. If NULL, any pair of two variables is considered to be of
interest.
MBT times of multiplier bootstrap, default value is 5000.
simplify a logical indicating whether results should be simplified if possible.
Value
If simplify is FALSE, a p ∗ p matrix with values 0 or 1 is returned, and 1 means nonzero.
And if simplify is TRUE, a two-column matrix is returned, indicating the row index and the column
index of recovered nonzero partial correlations. We only retain results for which the row index is
less than the column index. Those with larger test statistics are sorted first.
References
<NAME>. and <NAME>. (2021). Inference on multi-level partial correlations based on multi-subject
time series data, Journal of the American Statistical Association, 00, 1-15.
See Also
individual.test.
Examples
## Quick example for the one-sample population inference
data(popsimA)
# estimating partial correlation coefficients by scaled lasso
pc = population.est(popsimA)
# conducting hypothesis test
Res = population.test(pc)
# conducting hypothesis test in variables of interest
set = cbind(rep(7:9, each = 10), 1:10)
Res_like = population.test(pc, targetSet = set)
population.test.MinPv The one-sample population inference using Genovese and Wasserman’s
method
Description
Identify the nonzero partial correlations in one-sample population, based on controlling the rate of
the false discovery proportion (FDP) exceeding c0 at α. The method is based on the minimum of
the p-values. Input a popEst class object returned by population.est.
Usage
population.test.MinPv(
popEst,
alpha = 0.05,
c0 = 0.1,
targetSet = NULL,
simplify = !is.null(targetSet)
)
Arguments
popEst A popEst class object.
alpha significance level, default value is 0.05.
c0 threshold of the exceedance rate of FDP, default value is 0.1.
targetSet a two-column matrix. Each row contains two indices corresponding to a pair of
variables of interest. If NULL, any pair of two variables is considered to be of
interest.
simplify a logical indicating whether results should be simplified if possible.
Value
If simplify is FALSE, a p ∗ p matrix with values 0 or 1 is returned, and 1 means nonzero.
And if simplify is TRUE, a two-column matrix is returned, indicating the row index and the column
index of recovered nonzero partial correlations. Those with lower p values are sorted first.
References
<NAME>. and <NAME>. (2006). Exceedance Control of the False Discovery Proportion,
Journal of the American Statistical Association, 101, 1408-1417.
<NAME>. and <NAME>. (2021). Inference on multi-level partial correlations based on multi-subject
time series data, Journal of the American Statistical Association, 00, 1-15.
See Also
population.test.
Examples
## Quick example for the one-sample population inference
data(popsimA)
# estimating partial correlation coefficients
pc = population.est(popsimA)
# conducting hypothesis test
Res = population.test.MinPv(pc)
population2sample.test
Identify differences of partial correlations between two populations
Description
Identify differences of partial correlations between two populations in two groups of time series
data by controlling the rate of the false discovery proportion (FDP) exceeding c0 at α, considering
time dependence. Input two popEst class objects returned by population.est (the number of in-
dividuals in two groups can be different).
Usage
population2sample.test(
popEst1,
popEst2,
alpha = 0.05,
c0 = 0.1,
targetSet = NULL,
MBT = 5000,
simplify = !is.null(targetSet)
)
Arguments
popEst1 A popEst class object.
popEst2 A popEst class object.
alpha significance level, default value is 0.05.
c0 threshold of the exceedance rate of FDP, default value is 0.1. A smaller value
of c0 will reduce false positives, but it may also cost more false negatives.
targetSet a two-column matrix. Each row contains two indices corresponding to a pair of
variables of interest. If NULL, any pair of two variables is considered to be of
interest.
MBT times of multiplier bootstrap, default value is 5000.
simplify a logical indicating whether results should be simplified if possible.
Value
If simplify is FALSE, a p ∗ p matrix with values 0 or 1 is returned. If the entry in the j-th row
and k-th column of the matrix is 1, then the partial correlation coefficients between the j-th
variable and the k-th variable in the two populations are identified to be unequal.
And if simplify is TRUE, a two-column matrix is returned, indicating the row index and the column
index of recovered unequal partial correlations. Only results in which the row index is
less than the column index are retained. Those with larger test statistics are sorted first.
References
<NAME>. and <NAME>. (2021). Inference on multi-level partial correlations based on multi-subject
time series data, Journal of the American Statistical Association, 00, 1-15.
Examples
## Quick example for the two-sample case inference
data(popsimA)
data(popsimB)
# estimating partial correlation coefficients by lasso (scaled lasso does the same)
pc1 = population.est(popsimA, type = 'l')
pc2 = population.est(popsimB, type = 'l')
# conducting hypothesis test
Res = population2sample.test(pc1, pc2)
# conducting hypothesis test and returning simplified results
Res_s = population2sample.test(pc1, pc2, simplify = TRUE)
population2sample.test.MinPv
Identify differences of partial correlations between two populations
using Genovese and Wasserman’s method
Description
Identify differences of partial correlations between two populations in two groups of time series
data, based on controlling the rate of the false discovery proportion (FDP) exceeding c0 at α. The
method is based on the minimum of the p-values. Input two popEst class objects returned by
population.est (the number of individuals in two groups can be different).
Usage
population2sample.test.MinPv(
popEst1,
popEst2,
alpha = 0.05,
c0 = 0.1,
targetSet = NULL,
simplify = !is.null(targetSet)
)
Arguments
popEst1 A popEst class object.
popEst2 A popEst class object.
alpha significance level, default value is 0.05.
c0 threshold of the exceedance rate of FDP, default value is 0.1.
targetSet a two-column matrix. Each row contains two indices corresponding to a pair of
variables of interest. If NULL, any pair of two variables is considered to be of
interest.
simplify a logical indicating whether results should be simplified if possible.
Value
If simplify is FALSE, a p ∗ p matrix with values 0 or 1 is returned, and 1 means unequal.
And if simplify is TRUE, a two-column matrix is returned, indicating the row index and the column
index of recovered unequal partial correlations. Those with lower p values are sorted first.
References
<NAME>. and <NAME>. (2006). Exceedance Control of the False Discovery Proportion,
Journal of the American Statistical Association, 101, 1408-1417.
Qiu Y. and <NAME>. (2021). Inference on multi-level partial correlations based on multi-subject
time series data, Journal of the American Statistical Association, 00, 1-15.
Examples
## Quick example for the two-sample case inference
data(popsimA)
data(popsimB)
# estimating partial correlation coefficients by lasso (scaled lasso does the same)
pc1 = population.est(popsimA, type = 'l')
pc2 = population.est(popsimB, type = 'l')
# conducting hypothesis test
Res = population2sample.test.MinPv(pc1, pc2) |
passport | cran | R | Package ‘passport’
October 14, 2022
Type Package
Title Travel Smoothly Between Country Name and Code Formats
Version 0.3.0
Description Smooths the process of working with country names and codes via
powerful parsing, standardization, and conversion utilities arranged in a
simple, consistent API. Country name formats include multiple sources
including the Unicode Common Locale Data
Repository (CLDR, <http://cldr.unicode.org/>) common-sense standardized
names in hundreds of languages.
Depends R (>= 3.1.0)
Imports stats, utils
Suggests covr, dplyr, DT, gapminder, ggplot2, jsonlite, knitr, mockr,
rmarkdown, testthat, tidyr
License GPL-3 | file LICENSE
URL https://github.com/alistaire47/passport,
https://alistaire47.github.io/passport/
BugReports https://github.com/alistaire47/passport/issues
Encoding UTF-8
LazyData true
RoxygenNote 7.1.1
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-2811-6254>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-11-07 07:30:03 UTC
R topics documented:
as_country_code
as_country_name
codes
country_format
nato
order_countries
parse_country
as_country_code Convert standardized country names to country codes
Description
as_country_code converts a vector of standardized country names or codes to country codes
Usage
as_country_code(x, from, to = "iso2c", factor = is.factor(x))
Arguments
x A character, factor, or numeric vector of country names or codes
from Format from which to convert. See Details for more options.
to Code format to which to convert. Defaults to "iso2c"; see codes for more
options.
factor If TRUE, returns factor instead of character vector.
Details
as_country_code takes a character, factor, or numeric vector of country names or codes to translate
into the specified code format. The default for to is "iso2c", the ISO 3166-1 Alpha-2 character
codes, but many alternatives are available.
Several non-unique codes are available as well, including "continent", "is_independent", ISO
4217 currency codes, etc. Backwards conversion will not work for such cases.
See codes for all options, or run DT::datatable(codes) for a searchable widget.
Value
A vector of country codes. Warns if new NA values are added.
See Also
For converting to country names, use as_country_name(), which offers control of short and variant
forms. For parsing non-standardized country names to codes, use parse_country().
Examples
# Codifies standardized names
as_country_code(c("US", "Taiwan", "Myanmar", "Kosovo", "South Korea"), from = "en")
# Translates codes; if passed a factor, returns a releveled one
as_country_code(factor(c("SAH", "PCN", "OMA", "JPN")),
from = "fifa", to = "iso4217_3c")
as_country_name Convert standardized country codes to country names
Description
as_country_name converts a vector of standardized country codes to country names.
Usage
as_country_name(
x,
to = "en",
from = "iso2c",
short = TRUE,
variant = FALSE,
factor = is.factor(x)
)
Arguments
x A character, factor, or numeric vector of country codes or names
to Language code of country names desired. Defaults to "en"; see codes for more
options.
from Code format from which to convert. Defaults to "iso2c"; see codes for more
options.
short Whether to use short alternative name when available. Can be length 1 or the
same length as x.
variant Whether to use variant alternative name when available. Can be length 1 or the
same length as x.
factor If TRUE, returns factor instead of character vector. If not supplied, defaults to
is.factor(x)
Details
as_country_name takes a character, factor, or numeric vector of country codes (or names in another
standardized format) and converts them to country names in the specified format. If you are trying
to standardize an existing set of names, see parse_country().
The default "en" is from Unicode Common Locale Data Repository (CLDR), which aspires to
use the most customary name e.g. "Switzerland" instead of official ones, which are frequently
awkward for common usage, e.g. "Swiss Confederation". CLDR also supplies names in a huge
variety of languages, allowing for easy translation. Short and variant alternates are available for
some countries; if not, the function will fall back to the standard form. See LICENSE file for terms
of use.
Other name sets are available from
• the UN Statistics Division (UNSD), which maintains standardized names in English, Chinese,
Russian, French, Spanish, and Arabic, here named as "en_un" etc.
• the ISO, "en_iso" and "fr_iso", and
• the CIA World Factbook:
– "en_cia", which include many longer official forms and shorter practical forms,
– "en_cia_local", which includes transliterations, and
– "en_cia_abbreviation", which includes commonly-used abbreviations.
See codes for all options, or run DT::datatable(codes) for a searchable widget.
Value
A character or factor vector of country names. Warns if new NA values are added.
See Also
For converting standardized names to codes, use as_country_code(). For standardizing names to
codes, use parse_country().
Examples
# Usable names for tough-to-standardize places
as_country_name(c("US", "TW", "MM", "XK", "KR"))
# If passed a factor, will return a releveled one
as_country_name(factor(c("US", "NF", "CD", "SJ")), short = FALSE, variant = TRUE)
# Speaks a lot of languages, knows a lot of codes
as_country_name(c("SAH", "PCN", "OMA", "JPN"), from = "fifa", to = "cy") # to Welsh
codes Country code and name details and documentation
Description
A codebook data.frame of codes and details for country code and name conversions available. Contains
Internet Engineering Task Force (IETF) language tags (e.g. "en-nz" for New Zealand English)
for Unicode Common Locale Data Repository (CLDR) names, similar approximations for
institutional names (e.g. "en-iso"), and short names (e.g. "iso2c") for country codes.
Usage
codes
Format
A data.frame of 427 rows and 9 variables.
Variables:
column The column name in the internal passport:::countries data.frame. Valid for use in
from and to parameters.
code column with hyphens for underscores, which is a valid IANA language tag for Unicode CLDR
country names. Valid for use in from and to parameters.
name Full name or code name for non-CLDR options.
notes Things to note, including deprecations, oddities, etc.
language Full language name parsed from code.
region Full country or region name parsed from code.
script Full language script name parsed from code.
variant Full variant parsed from code. Also used for organization-standardized names.
extension Further specification of name type.
Details
All functions can accept codes separated with underscores _, hyphens -, or periods ..
Examples
# A searchable widget to find a code or name
if (requireNamespace("DT", quietly = TRUE)) {
DT::datatable(codes)
}
country_format Construct formatter function to format country codes as country
names
Description
country_format is a constructor function that returns a function to format country codes as country
names suitable for passing to ggplot2’s scale functions’ label parameters.
Usage
country_format(
from = "iso2c",
to = "en",
short = TRUE,
variant = FALSE,
factor
)
Arguments
from Code format from which to convert. Defaults to "iso2c"; see codes for more
options.
to Language code of country names desired. Defaults to "en"; see codes for more
options.
short Whether to use short alternative name when available. Can be length 1 or the
same length as x.
variant Whether to use variant alternative name when available. Can be length 1 or the
same length as x.
factor If TRUE, returns factor instead of character vector. If not supplied, defaults to
is.factor(x)
Details
A frequent reason to convert country codes back to country names is to make data visualizations
more readable. While both a code and name could be stored in a data frame, the computation and
extra storage required can be avoided by transforming codes to names directly within the
visualization via a formatter function. as_country_name() could be used without parentheses to
format ISO 2-character codes as English names, but country_format() allows greater flexibility,
returning a formatter function with the specified parameters set.
Value
A function that accepts a vector of country codes and returns them as country names.
See Also
For controlling the order of a discrete scale, pass the results of order_countries() to limits.
Examples
if (require(ggplot2, quietly = TRUE)) {
ggplot(data.frame(country = c("KOR", "MMR", "TWN", "COG"),
y = 1:4),
aes(x = country, y = y)) +
geom_col() +
scale_x_discrete(labels = country_format(from = "iso3c"))
}
nato NATO Member Defense Expenditures
Description
A sample dataset of NATO/OTAN member defense expenditures.
Usage
nato
Format
A data.frame of 232 rows and 14 variables.
Variables:
country_stanag Country code in NATO STANAG format
year Year, from 2012 to 2019. 2018-2019 numbers may be estimates.
Defense expenditure (USD, current prices) Defense expenditures in US dollars, using
current prices and exchange rates.
Defense expenditure (USD, 2015 prices) Defense expenditures in US dollars, using 2015
prices and exchange rates.
Defense expenditure (% real GDP) Defense expenditure as a percentage of real gross
domestic product. Based on 2015 prices.
Defense expenditure annual real change (% GDP) Annual change in defense expenditure
as a percentage of real gross domestic product. Based on 2015 prices.
Real GDP (2015 prices) Real gross domestic product in 2015 US dollars and at 2015 exchange
rates.
GDP per capita (USD) Gross domestic product per capita in 2015 US dollars and at 2015
exchange rates.
Defense expenditure per capita (USD) Defense expenditure per capita in 2015 US dollars.
Military personnel Number of military personnel
Equipment expenditure (%) Percent of defense expenditure spent on equipment. Includes major
equipment expenditure and R&D devoted to major equipment.
Personnel expenditure (%) Percentage of defense expenditure spent on personnel. Includes
both military and civilian expenditure and pensions.
Infrastructure expenditure (%) Percentage of defense expenditure spent on infrastructure.
Includes NATO common infrastructure and national military construction.
Other expenditure (%) Percentage of defense expenditure spent on other categories besides
equipment, personnel, and infrastructure. Includes operations and maintenance expenditure,
other R&D expenditure, and other expenditure not otherwise captured.
Source
https://www.nato.int/cps/en/natohq/news_167080.htm
Examples
as_country_name(nato$country_stanag, from = 'stanag')
order_countries Order a vector of countries
Description
order_countries reorders a vector of countries, returning a result useful for passing to ggplot2’s
scale functions’ limits parameters.
Usage
order_countries(
x,
by,
...,
from = "iso2c",
short = TRUE,
variant = FALSE,
factor = is.factor(x)
)
Arguments
x A character, factor, or numeric vector of country codes or names
by Either a length-one country code from codes or a vector the same length as x by
which to order x
... Parameters passed on to order(), including additional vectors by which to sort,
decreasing, and na.last.
from Code format from which to convert. Defaults to "iso2c"; see codes for more
options.
short Whether to use short alternative name when available. Can be length 1 or the
same length as x.
variant Whether to use variant alternative name when available. Can be length 1 or the
same length as x.
factor If TRUE, returns factor instead of character vector. If not supplied, defaults to
is.factor(x)
Details
order_countries orders a vector of countries by
• itself converted to a country code or name if by is a code from codes to which to convert
• a sortable vector if by is a vector of the same length as x
• x itself if neither is supplied.
Value
The original vector of countries, ordered according to the parameters passed. Note that factors are
not releveled, but are reordered. To relevel, pass the results to levels<-()
See Also
To change labels of a discrete scale, pass the results of country_format() to the labels parameter.
Examples
countries <- c("FR", "CP", "UZ", "BH", "BR")
order_countries(countries)
order_countries(countries, "ja")
order_countries(countries, rnorm(5))
order_countries(countries, grepl("F", countries), 1:5, decreasing = TRUE)
if (require(ggplot2, quietly = TRUE)) {
df_countries <- data.frame(country = countries,
y = exp(1:5))
ggplot(df_countries, aes(country, y)) +
geom_col() +
scale_x_discrete(
limits = order_countries(df_countries$country,
df_countries$y)[df_countries$y > 5],
labels = country_format(to = "en-cia-local")
)
}
parse_country Parse country names to standardized form
Description
parse_country parses irregular country names to the ISO 3166-1 Alpha-2 code or other
standardized code or name format.
Usage
parse_country(
x,
to = "iso2c",
how = c("regex", "google"),
language = c("en", "de"),
factor = is.factor(x)
)
Arguments
x A character or factor vector of country names to standardize
to Format to which to convert. Defaults to "iso2c"; see codes for more options.
how How to parse; defaults to "regex". "google" uses the Google Maps geocoding
API. See "Details" for more information.
language If how = "regex", the language from which to parse country names. Currently
accepts "en" (default) and "de". Ignored if how = "google".
factor If TRUE, returns factor instead of character vector. If not supplied, defaults to
is.factor(x)
Details
parse_country tries to parse a character or factor vector of country names to a standardized form:
by default, ISO 3166-1 Alpha-2 codes.
When how = "regex" (default), parse_country uses regular expressions to match irregular forms.
If regular expressions are insufficient, how = "google" will use the Google Maps geocoding API
instead, which permits a much broader range of input formats and languages. The API allows 2500
calls per day, and should thus be called judiciously. parse_country will make one call per unique
input. For more calls, see options that allow passing an API key like ggmap::geocode() with
output = "all" or googleway::google_geocode().
Note that due to their flexibility, the APIs may fail unpredictably, e.g. parse_country("foo", how
= "google") returns "CH" whereas how = "regex" fails with a graceful NA and warning.
Value
A character vector or factor of ISO 2-character country codes or other specified codes or names.
Warns of any parsing failure.
Examples
parse_country(c("United States", "USA", "U.S.", "us", "United States of America"))
## Not run:
# Unicode support for parsing accented or non-Latin scripts
parse_country(c("\u65e5\u672c", "Japon", "\u0698\u0627\u067e\u0646"), how = "google")
#> [1] "JP" "JP" "JP"
# Parse distinct place names via geocoding APIs
parse_country(c("1600 Pennsylvania Ave, DC", "Eiffel Tower"), how = "google")
#> [1] "US" "FR"
## End(Not run) |
subprocess | rust | Rust | Crate subprocess
===
Execution of and interaction with external processes and pipelines.
The entry points to the crate are the `Popen` struct and the `Exec`
builder. `Popen` is the interface to a running child process, inspired by Python’s `subprocess.Popen`. `Exec` provides a builder-pattern API with convenient methods for streaming and capturing of output, as well as combining `Popen` instances into pipelines.
Compared to `std::process`, the crate offers the following additional features:
* The *communicate* family of methods for deadlock-free capturing of subprocess output/error, while simultaneously feeding data to its standard input. Capturing supports optional timeout and read size limit.
* Connecting multiple commands into OS-level pipelines.
* Flexible redirection options, such as connecting standard streams to arbitrary open files, or merging output streams like shell’s `2>&1` and `1>&2` operators.
* Non-blocking and timeout methods to wait on the process:
`poll`, `wait`, and `wait_timeout`.
Examples
---
Communicate with a process and optionally terminate it:
```
let mut p = Popen::create(&["ps", "x"], PopenConfig {
stdout: Redirection::Pipe, ..Default::default()
})?;
// Obtain the output from the standard streams.
let (out, err) = p.communicate(None)?;
if let Some(exit_status) = p.poll() {
// the process has finished
} else {
// it is still running, terminate it
p.terminate()?;
}
```
Use the `Exec` builder to execute a pipeline of commands and capture the output:
```
let dir_checksum = {
Exec::shell("find . -type f") | Exec::cmd("sort") | Exec::cmd("sha1sum")
}.capture()?.stdout_str();
```
Modules
---
unixSubprocess extensions for Unix platforms.
Structs
---
CaptureDataData captured by `Exec::capture` and `Pipeline::capture`.
CommunicateErrorError during communication.
CommunicatorUnattended data exchange with the subprocess.
ExecA builder for `Popen` instances, providing control and convenience methods.
NullFileMarker value for `stdin`, `stdout`, and `stderr` methods of `Exec` and `Pipeline`.
PipelineA builder for multiple `Popen` instances connected via pipes.
PopenInterface to a running subprocess.
PopenConfigOptions for `Popen::create`.
Enums
---
ExitStatusExit status of a process.
PopenErrorError in `Popen` calls.
RedirectionInstruction what to do with a stream in the child process.
Functions
---
make_pipeCreate a pipe.
Type Definitions
---
ResultResult returned by calls in the `subprocess` crate in places where
`::std::io::Result` does not suffice.
Struct subprocess::Popen
===
```
pub struct Popen {
pub stdin: Option<File>,
pub stdout: Option<File>,
pub stderr: Option<File>,
/* private fields */
}
```
Interface to a running subprocess.
`Popen` is the parent’s interface to a created subprocess. The child process is started in the constructor, so owning a `Popen`
value indicates that the specified program has been successfully launched. To prevent accumulation of zombie processes, the child is waited upon when a `Popen` goes out of scope, which can be prevented using the `detach` method.
Depending on how the subprocess was configured, its input, output, and error streams can be connected to the parent and available as `stdin`,
`stdout`, and `stderr` public fields. If you need to read the output and errors into memory (or provide input as a memory slice), use the
`communicate` family of methods.
`Popen` instances can be obtained with the `create` method, or using the `popen` method of the `Exec` type. Subprocesses can be connected into pipes, most easily achieved using
`Exec`.
Fields
---
`stdin: Option<File>`If `stdin` was specified as `Redirection::Pipe`, this will contain a writable `File` connected to the standard input of the child process.
`stdout: Option<File>`If `stdout` was specified as `Redirection::Pipe`, this will contain a readable `File` connected to the standard output of the child process.
`stderr: Option<File>`If `stderr` was specified as `Redirection::Pipe`, this will contain a readable `File` connected to the standard error of the child process.
Implementations
---
source### impl Popen
source#### pub fn create(argv: &[impl AsRef<OsStr>], config: PopenConfig) -> Result<Popen>
Execute an external program in a new process.
`argv` is a slice containing the program followed by its arguments, such as `&["ps", "x"]`. `config` specifies details how to create and interface to the process.
For example, this launches the `cargo update` command:
```
Popen::create(&["cargo", "update"], PopenConfig::default())?;
```
##### Errors
If the external program cannot be executed for any reason, an error is returned. The most typical reason for execution to fail is that the program is missing on the `PATH`, but other errors are also possible. Note that this is distinct from the program running and then exiting with a failure code - this can be detected by calling the `wait` method to obtain its exit status.
source#### pub fn detach(&mut self)
Mark the process as detached.
This method has no effect on the OS level, it simply tells
`Popen` not to wait for the subprocess to finish when going out of scope. If the child process has already finished, or if it is guaranteed to finish before `Popen` goes out of scope, calling `detach` has no effect.
source#### pub fn pid(&self) -> Option<u32>
Return the PID of the subprocess, if it is known to be still running.
Note that this method won’t actually *check* whether the child process is still running, it will only return the information last set using one of `create`, `wait`, `wait_timeout`, or
`poll`. For a newly created `Popen`, `pid()` always returns
`Some`.
source#### pub fn exit_status(&self) -> Option<ExitStatus>
Return the exit status of the subprocess, if it is known to have finished.
Note that this method won’t actually *check* whether the child process has finished, it only returns the previously available information. To check or wait for the process to finish, call
`wait`, `wait_timeout`, or `poll`.
source#### pub fn communicate_start(&mut self, input_data: Option<Vec<u8>>) -> Communicator
Prepare to communicate with the subprocess.
Communicating refers to unattended data exchange with the subprocess.
During communication the given `input_data` is written to the subprocess’s standard input which is then closed, while simultaneously its standard output and error streams are read until end-of-file is reached.
The difference between this and simply writing input data to
`self.stdin` and then reading output from `self.stdout` and
`self.stderr` is that the reading and the writing are performed simultaneously. A naive implementation that writes and then reads has an issue when the subprocess responds to part of the input by providing output. The output must be read for the subprocess to accept further input, but the parent process is still blocked on writing the rest of the input data. Since neither process can proceed, a deadlock occurs. This is why a correct implementation must write and read at the same time.
This method does not perform the actual communication, it just sets it up and returns a `Communicator`. Call the `read` or
`read_string` method on the `Communicator` to exchange data with the subprocess.
Compared to `communicate()` and `communicate_bytes()`, the
`Communicator` provides more control, such as timeout, read size limit, and the ability to retrieve captured output in case of read error.
source#### pub fn communicate_bytes(&mut self, input_data: Option<&[u8]>) -> Result<(Option<Vec<u8>>, Option<Vec<u8>>)>
Feed the subprocess with input data and capture its output.
This will write the provided `input_data` to the subprocess’s standard input, and simultaneously read its standard output and error. The output and error contents are returned as a pair of `Option<Vec<u8>>`.
The `None` options correspond to streams not specified as
`Redirection::Pipe` when creating the subprocess.
This implementation reads and writes simultaneously, avoiding deadlock in case the subprocess starts writing output before reading the whole input - see `communicate_start()` for details.
Note that this method does not wait for the subprocess to finish, only to close its output/error streams. It is rare but possible for the program to continue running after having closed the streams, in which case `Popen::Drop` will wait for it to finish. If such a wait is undesirable, it can be prevented by waiting explicitly using `wait()`,
by detaching the process using `detach()`, or by terminating it with
`terminate()`.
For additional control over communication, such as timeout and size limit, call `communicate_start()`.
##### Panics
If `input_data` is provided and `stdin` was not redirected to a pipe.
Also, if `input_data` is not provided and `stdin` was redirected to a pipe.
##### Errors
* `Err(::std::io::Error)` if a system call fails
source#### pub fn communicate(&mut self, input_data: Option<&str>) -> Result<(Option<String>, Option<String>)>
Feed the subprocess with data and capture its output as string.
This is a convenience method equivalent to `communicate_bytes`, but with input as `&str` and output as `String`. Invalid UTF-8 sequences,
if found, are replaced with the `U+FFFD` Unicode replacement character.
##### Panics
The same as with `communicate_bytes`.
##### Errors
* `Err(::std::io::Error)` if a system call fails
source#### pub fn poll(&mut self) -> Option<ExitStatus>
Check whether the process is still running, without blocking or errors.
If the process is still running, `None` is returned; otherwise
`Some(exit_status)`. This method is guaranteed not to block and is exactly equivalent to
`wait_timeout(Duration::from_secs(0)).unwrap_or(None)`.
source#### pub fn wait(&mut self) -> Result<ExitStatus>
Wait for the process to finish, and return its exit status.
If the process has already finished, this call returns immediately with the exit status. Calling `wait` after that will return the cached exit status without executing any system calls.
##### Errors
Returns an `Err` if a system call fails in an unpredicted way.
This should not happen in normal usage.
source#### pub fn wait_timeout(&mut self, dur: Duration) -> Result<Option<ExitStatus>>
Wait for the process to finish, timing out after the specified duration.
This function behaves like `wait()`, except that the caller will be blocked for roughly no longer than `dur`. It returns
`Ok(None)` if the timeout is known to have elapsed.
On Unix-like systems, timeout is implemented by calling
`waitpid(..., WNOHANG)` in a loop with adaptive sleep intervals between iterations.
source#### pub fn terminate(&mut self) -> Result<()>
Terminate the subprocess.
On Unix-like systems, this sends the `SIGTERM` signal to the child process, which can be caught by the child in order to perform cleanup before exiting. On Windows, it is equivalent to `kill()`.
source#### pub fn kill(&mut self) -> Result<()>
Kill the subprocess.
On Unix-like systems, this sends the `SIGKILL` signal to the child process, which cannot be caught.
On Windows, it invokes `TerminateProcess` on the process handle with equivalent semantics.
Trait Implementations
---
source### impl Debug for Popen
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Drop for Popen
source#### fn drop(&mut self)
Executes the destructor for this type. Read more
source### impl PopenExt for Popen
source#### fn send_signal(&self, signal: i32) -> Result<()>
Send the specified signal to the child process. Read more
Auto Trait Implementations
---
### impl RefUnwindSafe for Popen
### impl Send for Popen
### impl Sync for Popen
### impl Unpin for Popen
### impl UnwindSafe for Popen
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct subprocess::Exec
===
```
pub struct Exec { /* private fields */ }
```
A builder for `Popen` instances, providing control and convenience methods.
`Exec` provides a builder API for `Popen::create`, and includes convenience methods for capturing the output, and for connecting subprocesses into pipelines.
Examples
---
Execute an external command and wait for it to complete:
```
let exit_status = Exec::cmd("umount").arg(dirname).join()?;
```
Execute the command using the OS shell, like C’s `system`:
```
Exec::shell("shutdown -h now").join()?;
```
Start a subprocess and obtain its output as a `Read` trait object,
like C’s `popen`:
```
let stream = Exec::cmd("ls").stream_stdout()?;
// call stream.read_to_string, construct io::BufReader(stream), etc.
```
Capture the output of a command:
```
let out = Exec::cmd("ls")
.stdout(Redirection::Pipe)
.capture()?
.stdout_str();
```
Redirect errors to standard output, and capture both in a single stream:
```
let out_and_err = Exec::cmd("ls")
.stdout(Redirection::Pipe)
.stderr(Redirection::Merge)
.capture()?
.stdout_str();
```
Provide input to the command and read its output:
```
let out = Exec::cmd("sort")
.stdin("b\nc\na\n")
.stdout(Redirection::Pipe)
.capture()?
.stdout_str();
assert!(out == "a\nb\nc\n");
```
Implementations
---
source### impl Exec
source#### pub fn cmd(command: impl AsRef<OsStr>) -> Exec
Constructs a new `Exec`, configured to run `command`.
The command will be run directly in the OS, without an intervening shell. To run it through a shell, use
`Exec::shell` instead.
By default, the command will be run without arguments, and none of the standard streams will be modified.
source#### pub fn shell(cmdstr: impl AsRef<OsStr>) -> Exec
Constructs a new `Exec`, configured to run `cmdstr` with the system shell.
`subprocess` never spawns shells without an explicit request. This command requests the shell to be used; on Unix-like systems, this is equivalent to
`Exec::cmd("sh").arg("-c").arg(cmdstr)`. On Windows, it is equivalent to `Exec::cmd("cmd.exe").arg("/c").arg(cmdstr)`.
`shell` is useful for porting code that uses the C
`system` function, which also spawns a shell.
When invoking this function, be careful not to interpolate arguments into the string run by the shell, such as
`Exec::shell(format!("sort {}", filename))`. Such code is prone to errors and, if `filename` comes from an untrusted source, to shell injection attacks. Instead, use
`Exec::cmd("sort").arg(filename)`.
source#### pub fn arg(self, arg: impl AsRef<OsStr>) -> Exec
Appends `arg` to argument list.
source#### pub fn args(self, args: &[impl AsRef<OsStr>]) -> Exec
Extends the argument list with `args`.
source#### pub fn detached(self) -> Exec
Specifies that the process is initially detached.
A detached process means that we will not wait for the process to finish when the object that owns it goes out of scope.
source#### pub fn env_clear(self) -> Exec
Clears the environment of the subprocess.
When this is invoked, the subprocess will not inherit the environment of this process.
source#### pub fn env(self, key: impl AsRef<OsStr>, value: impl AsRef<OsStr>) -> Exec
Sets an environment variable in the child process.
If the same variable is set more than once, the last value is used.
Other environment variables are by default inherited from the current process. If this is undesirable, call
`env_clear` first.
source#### pub fn env_extend(self, vars: &[(impl AsRef<OsStr>, impl AsRef<OsStr>)]) -> Exec
Sets multiple environment variables in the child process.
The keys and values of the variables are specified by the slice. If the same variable is set more than once, the last value is used.
Other environment variables are by default inherited from the current process. If this is undesirable, call
`env_clear` first.
source#### pub fn env_remove(self, key: impl AsRef<OsStr>) -> Exec
Removes an environment variable from the child process.
Other environment variables are inherited by default.
source#### pub fn cwd(self, dir: impl AsRef<Path>) -> Exec
Specifies the current working directory of the child process.
If unspecified, the current working directory is inherited from the parent.
source#### pub fn stdin(self, stdin: impl Into<InputRedirection>) -> Exec
Specifies how to set up the standard input of the child process.
Argument can be:
* a `Redirection`;
* a `File`, which is a shorthand for `Redirection::File(file)`;
* a `Vec<u8>` or `&str`, which will set up a `Redirection::Pipe`
for stdin, making sure that `capture` feeds that data into the standard input of the subprocess;
* `NullFile`, which will redirect the standard input to read from
`/dev/null`.
source#### pub fn stdout(self, stdout: impl Into<OutputRedirection>) -> Exec
Specifies how to set up the standard output of the child process.
Argument can be:
* a `Redirection`;
* a `File`, which is a shorthand for `Redirection::File(file)`;
* `NullFile`, which will redirect the standard output to go to
`/dev/null`.
source#### pub fn stderr(self, stderr: impl Into<OutputRedirection>) -> Exec
Specifies how to set up the standard error of the child process.
Argument can be:
* a `Redirection`;
* a `File`, which is a shorthand for `Redirection::File(file)`;
* `NullFile`, which will redirect the standard error to go to
`/dev/null`.
source#### pub fn popen(self) -> PopenResult<Popen>
Starts the process, returning a `Popen` for the running process.
source#### pub fn join(self) -> PopenResult<ExitStatus>
Starts the process, waits for it to finish, and returns the exit status.
This method will wait for as long as necessary for the process to finish. If a timeout is needed, use
`<...>.detached().popen()?.wait_timeout(...)` instead.
source#### pub fn stream_stdout(self) -> PopenResult<impl Read>
Starts the process and returns a value implementing the `Read`
trait that reads from the standard output of the child process.
This will automatically set up
`stdout(Redirection::Pipe)`, so it is not necessary to do that beforehand.
When the trait object is dropped, it will wait for the process to finish. If this is undesirable, use
`detached()`.
source#### pub fn stream_stderr(self) -> PopenResult<impl Read>
Starts the process and returns a value implementing the `Read`
trait that reads from the standard error of the child process.
This will automatically set up
`stderr(Redirection::Pipe)`, so it is not necessary to do that beforehand.
When the trait object is dropped, it will wait for the process to finish. If this is undesirable, use
`detached()`.
source#### pub fn stream_stdin(self) -> PopenResult<impl Write>
Starts the process and returns a value implementing the `Write`
trait that writes to the standard input of the child process.
This will automatically set up `stdin(Redirection::Pipe)`,
so it is not necessary to do that beforehand.
When the trait object is dropped, it will wait for the process to finish. If this is undesirable, use
`detached()`.
source#### pub fn communicate(self) -> PopenResult<Communicator>
Starts the process and returns a `Communicator` handle.
This is a lower-level API that offers more choice in how communication is performed, such as read size limit and timeout,
equivalent to `Popen::communicate`.
Unlike `capture()`, this method doesn’t wait for the process to finish, effectively detaching it.
source#### pub fn capture(self) -> PopenResult<CaptureData>
Starts the process, collects its output, and waits for it to finish.
The return value provides the standard output and standard error as bytes or optionally strings, as well as the exit status.
Unlike `Popen::communicate`, this method actually waits for the process to finish, rather than simply waiting for its standard streams to close. If this is undesirable,
use `detached()`.
source#### pub fn to_cmdline_lossy(&self) -> String
Shows the `Exec` as a command-line string, quoted in the Unix style.
Trait Implementations
---
source### impl BitOr<Exec> for Exec
source#### fn bitor(self, rhs: Exec) -> Pipeline
Create a `Pipeline` from `self` and `rhs`.
#### type Output = Pipeline
The resulting type after applying the `|` operator.
source### impl BitOr<Exec> for Pipeline
source#### fn bitor(self, rhs: Exec) -> Pipeline
Append a command to the pipeline and return a new pipeline.
#### type Output = Pipeline
The resulting type after applying the `|` operator.
source### impl Clone for Exec
source#### fn clone(&self) -> Exec
Returns a copy of the value.
This method is guaranteed not to fail as long as none of the `Redirection` values contain a `Redirection::File`
variant. If a redirection to `File` is present, cloning that field will use `File::try_clone` method, which duplicates a file descriptor and can (but is not likely to) fail. In that scenario, `Exec::clone` panics.
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Exec
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
Auto Trait Implementations
---
### impl RefUnwindSafe for Exec
### impl !Send for Exec
### impl !Sync for Exec
### impl Unpin for Exec
### impl UnwindSafe for Exec
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct subprocess::Pipeline
===
```
pub struct Pipeline { /* private fields */ }
```
A builder for multiple `Popen` instances connected via pipes.
A pipeline is a sequence of two or more `Exec` commands connected via pipes. Just like in a Unix shell pipeline, each command receives standard input from the previous command, and passes standard output to the next command. Optionally, the standard input of the first command can be provided from the outside, and the output of the last command can be captured.
In most cases you do not need to create `Pipeline` instances directly; instead, combine `Exec` instances using the `|`
operator which produces `Pipeline`.
Examples
---
Execute a pipeline and return the exit status of the last command:
```
let exit_status =
(Exec::shell("ls *.bak") | Exec::cmd("xargs").arg("rm")).join()?;
```
Capture the pipeline’s output:
```
let dir_checksum = {
    Exec::cmd("find").arg(".").arg("-type").arg("f") | Exec::cmd("sort") | Exec::cmd("sha1sum")
}.capture()?.stdout_str();
```
Implementations
---
source### impl Pipeline
source#### pub fn new(cmd1: Exec, cmd2: Exec) -> Pipeline
Creates a new pipeline by combining two commands.
Equivalent to `cmd1 | cmd2`.
source#### pub fn from_exec_iter<I>(iterable: I) -> Pipeline where I: IntoIterator<Item = Exec>,
Creates a new pipeline from a list of commands. Useful if a pipeline should be created dynamically.
Example:
```
use subprocess::Exec;
let commands = vec![
Exec::shell("echo tset"),
Exec::shell("tr '[:lower:]' '[:upper:]'"),
Exec::shell("rev")
];
let pipeline = subprocess::Pipeline::from_exec_iter(commands);
let output = pipeline.capture().unwrap().stdout_str();
assert_eq!(output, "TEST\n");
```
```
use subprocess::Exec;
let commands = vec![
Exec::shell("echo tset"),
];
// This will panic as the iterator contains less than two (2) items.
let pipeline = subprocess::Pipeline::from_exec_iter(commands);
```
Panics:
* When the passed iterator contains fewer than two (2) items.
source#### pub fn stdin(self, stdin: impl Into<InputRedirection>) -> Pipeline
Specifies how to set up the standard input of the first command in the pipeline.
Argument can be:
* a `Redirection`;
* a `File`, which is a shorthand for `Redirection::File(file)`;
* a `Vec<u8>` or `&str`, which will set up a `Redirection::Pipe`
for stdin, making sure that `capture` feeds that data into the standard input of the subprocess.
* `NullFile`, which will redirect the standard input to read from
/dev/null.
source#### pub fn stdout(self, stdout: impl Into<OutputRedirection>) -> Pipeline
Specifies how to set up the standard output of the last command in the pipeline.
Argument can be:
* a `Redirection`;
* a `File`, which is a shorthand for `Redirection::File(file)`;
* `NullFile`, which will redirect the standard output to write to
/dev/null.
source#### pub fn stderr_to(self, to: File) -> Pipeline
Specifies a file to which to redirect the standard error of all the commands in the pipeline.
It is useful for capturing the standard error of the pipeline as a whole. Unlike `stdout()`, which only affects the last command in the pipeline, this affects all commands. The difference is because standard output is piped from one command to the next, so only the output of the last command is “free”. In contrast, the standard errors are not connected in any way. This is also the reason only a `File` is supported - it allows for efficient sharing of the same file by all commands.
source#### pub fn popen(self) -> PopenResult<Vec<Popen>>
Starts all commands in the pipeline, and returns a
`Vec<Popen>` whose members correspond to running commands.
If some command fails to start, the remaining commands will not be started, and the appropriate error will be returned. The commands that have already started will be waited on to finish (but will probably exit immediately due to missing output), except for the ones for which
`detached()` was called. This is equivalent to what the shell does.
source#### pub fn join(self) -> PopenResult<ExitStatus>
Starts the pipeline, waits for it to finish, and returns the exit status of the last command.
source#### pub fn stream_stdout(self) -> PopenResult<impl Read>
Starts the pipeline and returns a value implementing the `Read`
trait that reads from the standard output of the last command.
This will automatically set up
`stdout(Redirection::Pipe)`, so it is not necessary to do that beforehand.
When the trait object is dropped, it will wait for the pipeline to finish. If this is undesirable, use
`detached()`.
source#### pub fn stream_stdin(self) -> PopenResult<impl Write>
Starts the pipeline and returns a value implementing the `Write`
trait that writes to the standard input of the last command.
This will automatically set up `stdin(Redirection::Pipe)`,
so it is not necessary to do that beforehand.
When the trait object is dropped, it will wait for the process to finish. If this is undesirable, use
`detached()`.
source#### pub fn communicate(self) -> PopenResult<Communicator>
Starts the pipeline and returns a `Communicator` handle.
This is a lower-level API that offers more choice in how communication is performed, such as read size limit and timeout,
equivalent to `Popen::communicate`.
Unlike `capture()`, this method doesn’t wait for the pipeline to finish, effectively detaching it.
source#### pub fn capture(self) -> PopenResult<CaptureData>
Starts the pipeline, collects its output, and waits for all commands to finish.
The return value provides the standard output of the last command,
the combined standard error of all commands, and the exit status of the last command. The captured outputs can be accessed as bytes or strings.
Unlike `Popen::communicate`, this method actually waits for the processes to finish, rather than simply waiting for the output to close. If this is undesirable, use `detached()`.
Trait Implementations
---
source### impl BitOr<Exec> for Pipeline
source#### fn bitor(self, rhs: Exec) -> Pipeline
Append a command to the pipeline and return a new pipeline.
#### type Output = Pipeline
The resulting type after applying the `|` operator.
source### impl BitOr<Pipeline> for Pipeline
source#### fn bitor(self, rhs: Pipeline) -> Pipeline
Append a pipeline to the pipeline and return a new pipeline.
#### type Output = Pipeline
The resulting type after applying the `|` operator.
source### impl Clone for Pipeline
source#### fn clone(&self) -> Pipeline
Returns a copy of the value.
This method is guaranteed not to fail as long as none of the `Redirection` values contain a `Redirection::File`
variant. If a redirection to `File` is present, cloning that field will use `File::try_clone` method, which duplicates a file descriptor and can (but is not likely to) fail. In that scenario, `Exec::clone` panics.
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Pipeline
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
Auto Trait Implementations
---
### impl RefUnwindSafe for Pipeline
### impl !Send for Pipeline
### impl !Sync for Pipeline
### impl Unpin for Pipeline
### impl UnwindSafe for Pipeline
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum subprocess::Redirection
===
```
pub enum Redirection {
None,
Pipe,
Merge,
File(File),
RcFile(Rc<File>),
}
```
Instruction for what to do with a stream in the child process.
`Redirection` values are used for the `stdin`, `stdout`, and
`stderr` field of the `PopenConfig` struct. They tell
`Popen::create` how to set up the standard streams in the child process and the corresponding fields of the `Popen` struct in the parent.
Variants
---
### `None`
Do nothing with the stream.
The stream is typically inherited from the parent. The field in `Popen` corresponding to the stream will be `None`.
### `Pipe`
Redirect the stream to a pipe.
This variant requests that a stream be redirected to a unidirectional pipe. One end of the pipe is passed to the child process and configured as one of its standard streams,
and the other end is available to the parent for communicating with the child.
The field in `Popen` corresponding to the stream will be
`Some(file)`, `File` being the parent’s end of the pipe.
### `Merge`
Merge the stream to the other output stream.
This variant is only valid when configuring redirection of standard output and standard error. Using
`Redirection::Merge` for `PopenConfig::stderr` requests the child’s stderr to refer to the same underlying file as the child’s stdout (which may or may not itself be redirected),
equivalent to the `2>&1` operator of the Bourne shell.
Analogously, using `Redirection::Merge` for
`PopenConfig::stdout` is equivalent to `1>&2` in the shell.
Specifying `Redirection::Merge` for `PopenConfig::stdin` or specifying it for both `stdout` and `stderr` is invalid and will cause `Popen::create` to return
`Err(PopenError::LogicError)`.
The field in `Popen` corresponding to the stream will be
`None`.
### `File(File)`
Redirect the stream to the specified open `File`.
This does not create a pipe, it simply spawns the child so that the specified stream sees that file. The child can read from or write to the provided file on its own, without any intervention by the parent.
The field in `Popen` corresponding to the stream will be
`None`.
### `RcFile(Rc<File>)`
Like `File`, but the file is specified as `Rc`.
This allows the same file to be used in multiple redirections.
Implementations
---
source### impl Redirection
source#### pub fn try_clone(&self) -> Result<Redirection>
Clone the underlying `Redirection`, or return an error.
Can fail in `File` variant.
Trait Implementations
---
source### impl Debug for Redirection
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
Auto Trait Implementations
---
### impl RefUnwindSafe for Redirection
### impl !Send for Redirection
### impl !Sync for Redirection
### impl Unpin for Redirection
### impl UnwindSafe for Redirection
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Module subprocess::unix
===
Subprocess extensions for Unix platforms.
Traits
---
PopenExtUnix-specific extension methods for `Popen`
Struct subprocess::CaptureData
===
```
pub struct CaptureData {
pub stdout: Vec<u8>,
pub stderr: Vec<u8>,
pub exit_status: ExitStatus,
}
```
Data captured by `Exec::capture` and `Pipeline::capture`.
Fields
---
`stdout: Vec<u8>`Standard output as bytes.
`stderr: Vec<u8>`Standard error as bytes.
`exit_status: ExitStatus`Exit status.
Implementations
---
source### impl CaptureData
source#### pub fn stdout_str(&self) -> String
Returns the standard output as string, converted from bytes using
`String::from_utf8_lossy`.
source#### pub fn stderr_str(&self) -> String
Returns the standard error as string, converted from bytes using
`String::from_utf8_lossy`.
source#### pub fn success(&self) -> bool
True if the exit status of the process or pipeline is 0.
Trait Implementations
---
source### impl Debug for CaptureData
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
Auto Trait Implementations
---
### impl RefUnwindSafe for CaptureData
### impl Send for CaptureData
### impl Sync for CaptureData
### impl Unpin for CaptureData
### impl UnwindSafe for CaptureData
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct subprocess::CommunicateError
===
```
pub struct CommunicateError {
pub error: Error,
pub capture: (Option<Vec<u8>>, Option<Vec<u8>>),
}
```
Error during communication.
It holds the underlying `io::Error` in the `error` field, and also provides the data captured before the error was encountered in the
`capture` field.
The error description and cause are taken from the underlying IO error.
Fields
---
`error: Error`The underlying `io::Error`.
`capture: (Option<Vec<u8>>, Option<Vec<u8>>)`The data captured before the error was encountered.
Implementations
---
source### impl CommunicateError
source#### pub fn kind(&self) -> ErrorKind
Returns the corresponding IO `ErrorKind` for this error.
Equivalent to `self.error.kind()`.
Trait Implementations
---
source### impl Debug for CommunicateError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CommunicateError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CommunicateError
source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl From<CommunicateError> for PopenError
source#### fn from(err: CommunicateError) -> PopenError
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for CommunicateError
### impl Send for CommunicateError
### impl Sync for CommunicateError
### impl Unpin for CommunicateError
### impl !UnwindSafe for CommunicateError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct subprocess::Communicator
===
```
pub struct Communicator { /* private fields */ }
```
Unattended data exchange with the subprocess.
When a subprocess both expects input and provides output, care must be taken to avoid deadlock. The issue arises when the subprocess responds to part of the input data by providing some output which must be read for the subprocess to accept further input. If the parent process is blocked on writing the input, it cannot read the output and a deadlock occurs. This implementation avoids the issue by reading from and writing to the subprocess in parallel. On Unix-like systems this is achieved using
`poll()`, and on Windows using threads.
Implementations
---
source### impl Communicator
source#### pub fn read(&mut self) -> Result<(Option<Vec<u8>>, Option<Vec<u8>>), CommunicateError>
Communicate with the subprocess, return the contents of its standard output and error.
This will write input data to the subprocess’s standard input and simultaneously read its standard output and error. The output and error contents are returned as a pair of `Option<Vec<u8>>`. The `None`
options correspond to streams not specified as `Redirection::Pipe`
when creating the subprocess.
By default `read()` will read all data until end-of-file.
If `limit_time` has been called, the method will read for no more than the specified duration. In case of timeout, an error of kind
`io::ErrorKind::TimedOut` is returned. Communication may be resumed after the timeout by calling `read()` again.
If `limit_size` has been called, it will limit the allocation done by this method. If the subprocess provides more data than the limit specifies, `read()` will successfully return as much data as specified by the limit. (It might internally read a bit more from the subprocess, but the data will remain available for future reads.)
Subsequent data can be retrieved by calling `read()` again, which can be repeated until `read()` returns all-empty data, which marks EOF.
Note that this method does not wait for the subprocess to finish, only to close its output/error streams. It is rare but possible for the program to continue running after having closed the streams, in which case `Popen::Drop` will wait for it to finish. If such a wait is undesirable, it can be prevented by waiting explicitly using `wait()`,
by detaching the process using `detach()`, or by terminating it with
`terminate()`.
##### Panics
If `input_data` is provided and `stdin` was not redirected to a pipe.
Also, if `input_data` is not provided and `stdin` was redirected to a pipe.
##### Errors
* `Err(CommunicateError)` if a system call fails. In case of timeout,
the underlying error kind will be `ErrorKind::TimedOut`.
Regardless of the nature of the error, the content prior to the error can be retrieved using the `capture` attribute of the error.
source#### pub fn read_string(&mut self) -> Result<(Option<String>, Option<String>), CommunicateError>
Return the subprocess’s output and error contents as strings.
Like `read()`, but returns strings instead of byte vectors. Invalid UTF-8 sequences, if found, are replaced with the `U+FFFD` Unicode replacement character.
source#### pub fn limit_size(self, size: usize) -> Communicator
Limit the amount of data the next `read()` will read from the subprocess.
source#### pub fn limit_time(self, time: Duration) -> Communicator
Limit the amount of time the next `read()` will spend reading from the subprocess.
Trait Implementations
---
source### impl Debug for Communicator
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
Auto Trait Implementations
---
### impl RefUnwindSafe for Communicator
### impl Send for Communicator
### impl Sync for Communicator
### impl Unpin for Communicator
### impl UnwindSafe for Communicator
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct subprocess::NullFile
===
```
pub struct NullFile;
```
Marker value for `stdin`, `stdout`, and `stderr` methods of `Exec` and `Pipeline`.
Use of this value means that the corresponding stream should be redirected to the null device (`/dev/null` on Unix, `NUL` on Windows).
Trait Implementations
---
source### impl Debug for NullFile
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
Auto Trait Implementations
---
### impl RefUnwindSafe for NullFile
### impl Send for NullFile
### impl Sync for NullFile
### impl Unpin for NullFile
### impl UnwindSafe for NullFile
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct subprocess::PopenConfig
===
```
pub struct PopenConfig {
pub stdin: Redirection,
pub stdout: Redirection,
pub stderr: Redirection,
pub detached: bool,
pub executable: Option<OsString>,
pub env: Option<Vec<(OsString, OsString)>>,
pub cwd: Option<OsString>,
pub setuid: Option<u32>,
pub setgid: Option<u32>,
pub setpgid: bool,
/* private fields */
}
```
Options for `Popen::create`.
When constructing `PopenConfig`, always use the `Default` trait,
such as:
```
Popen::create(argv, PopenConfig {
stdout: Redirection::Pipe,
detached: true,
// ... other fields you want to override ...
..Default::default()
})
```
This ensures that fields added later do not break existing code.
An alternative to using `PopenConfig` directly is creating processes using `Exec`, a builder for `Popen`.
Fields
---
`stdin: Redirection`How to configure the executed program’s standard input.
`stdout: Redirection`How to configure the executed program’s standard output.
`stderr: Redirection`How to configure the executed program’s standard error.
`detached: bool`Whether the `Popen` instance is initially detached.
`executable: Option<OsString>`Executable to run.
If provided, this executable is run instead of the program named by `argv[0]`; `argv[0]` is still passed to the subprocess, which sees it as its program name. On some Unix systems, `ps` will show the string passed as `argv[0]`,
even though `executable` is actually running.
`env: Option<Vec<(OsString, OsString)>>`Environment variables to pass to the subprocess.
If this is None, environment variables are inherited from the calling process. Otherwise, the specified variables are used instead.
Duplicates are eliminated, with the value taken from the variable appearing later in the vector.
`cwd: Option<OsString>`Initial current working directory of the subprocess.
None means inherit the working directory from the parent.
`setuid: Option<u32>`Set user ID for the subprocess.
If specified, calls `setuid()` before execing the child process.
`setgid: Option<u32>`Set group ID for the subprocess.
If specified, calls `setgid()` before execing the child process.
Not to be confused with similarly named `setpgid`.
`setpgid: bool`Make the subprocess belong to a new process group.
If specified, calls `setpgid(0, 0)` before execing the child process.
Not to be confused with similarly named `setgid`.
Implementations
---
source### impl PopenConfig
source#### pub fn try_clone(&self) -> Result<PopenConfig>
Clone the underlying `PopenConfig`, or return an error.
This is guaranteed not to fail as long as no
`Redirection::File` variant is used for one of the standard streams. Otherwise, it fails if `File::try_clone` fails on one of the `Redirection`s.
source#### pub fn current_env() -> Vec<(OsString, OsString)>
Returns the environment of the current process.
The returned value is in the format accepted by the `env`
member of `PopenConfig`.
Trait Implementations
---
source### impl Debug for PopenConfig
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PopenConfig
source#### fn default() -> PopenConfig
Returns the “default value” for a type. Read more
Auto Trait Implementations
---
### impl RefUnwindSafe for PopenConfig
### impl !Send for PopenConfig
### impl !Sync for PopenConfig
### impl Unpin for PopenConfig
### impl UnwindSafe for PopenConfig
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum subprocess::ExitStatus
===
```
pub enum ExitStatus {
Exited(u32),
Signaled(u8),
Other(i32),
Undetermined,
}
```
Exit status of a process.
Variants
---
### `Exited(u32)`
The process exited with the specified exit code.
Note that the exit code is limited to a much smaller range on most platforms.
### `Signaled(u8)`
The process exited due to a signal with the specified number.
This variant is never created on Windows, where signals of Unix kind do not exist.
### `Other(i32)`
The process exit status cannot be described by the preceding two variants.
This should not occur in normal operation.
### `Undetermined`
It is known that the process has completed, but its exit status is unavailable.
This should not occur in normal operation, but is possible if for example some foreign code calls `waitpid()` on the PID of the child process.
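The limited range mentioned under `Exited` is easy to observe with the standard library alone (a sketch assuming a Unix `sh` is available): the wait status only carries the low 8 bits of the exit code, so `exit 256` is reported as 0.

```rust
use std::process::Command;

fn main() {
    // On Unix, the exit code is truncated to 8 bits, so 256 wraps to 0.
    let status = Command::new("sh")
        .args(["-c", "exit 256"])
        .status()
        .expect("failed to run sh");
    assert_eq!(status.code(), Some(0));

    // Codes within the 0..=255 range come through unchanged.
    let status = Command::new("sh")
        .args(["-c", "exit 3"])
        .status()
        .expect("failed to run sh");
    assert_eq!(status.code(), Some(3));
}
```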
Implementations
---
source### impl ExitStatus
source#### pub fn success(self) -> bool
True if the exit status of the process is 0.
source#### pub fn is_killed_by<T: Eq + From<u8>>(self, signum: T) -> bool
True if the subprocess was killed by a signal with the specified number.
You can pass the concrete `libc` signal numbers to this function, such as
`status.is_killed_by(libc::SIGABRT)`.
Trait Implementations
---
source### impl Clone for ExitStatus
source#### fn clone(&self) -> ExitStatus
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ExitStatus
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl PartialEq<ExitStatus> for ExitStatus
source#### fn eq(&self, other: &ExitStatus) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ExitStatus) -> bool
This method tests for `!=`.
source### impl Copy for ExitStatus
source### impl Eq for ExitStatus
source### impl StructuralEq for ExitStatus
source### impl StructuralPartialEq for ExitStatus
Auto Trait Implementations
---
### impl RefUnwindSafe for ExitStatus
### impl Send for ExitStatus
### impl Sync for ExitStatus
### impl Unpin for ExitStatus
### impl UnwindSafe for ExitStatus
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mutT)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum subprocess::PopenError
===
```
#[non_exhaustive]
pub enum PopenError {
IoError(Error),
LogicError(&'static str),
}
```
Error in `Popen` calls.
Variants (Non-exhaustive)
---
Non-exhaustive enums may gain additional variants in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### `IoError(Error)`
An IO system call failed while executing the requested operation.
### `LogicError(&'static str)`
A logical error was made, e.g. invalid arguments detected at run-time.
Trait Implementations
---
source### impl Debug for PopenError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for PopenError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for PopenError
source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl From<CommunicateError> for PopenError
source#### fn from(err: CommunicateError) -> PopenError
Converts to this type from the input type.
source### impl From<Error> for PopenError
source#### fn from(err: Error) -> PopenError
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for PopenError
### impl Send for PopenError
### impl Sync for PopenError
### impl Unpin for PopenError
### impl !UnwindSafe for PopenError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Function subprocess::make_pipe
===
```
pub fn make_pipe() -> Result<(File, File)>
```
Create a pipe.
This is a safe wrapper over `libc::pipe` or
`winapi::um::namedpipeapi::CreatePipe`, depending on the operating system.
Type Definition subprocess::Result
===
```
pub type Result<T> = Result<T, PopenError>;
```
Result returned by calls in the `subprocess` crate in places where
`::std::io::Result` does not suffice.
github.com/mailru/dbr | go | Go | README
[¶](#section-readme)
---
dbr (fork of gocraft/dbr) provides additions to Go's database/sql for super fast performance and convenience.
[](https://travis-ci.org/mailru/dbr)
[](https://goreportcard.com/report/github.com/mailru/dbr)
[](https://coveralls.io/github/mailru/dbr?branch=develop)
### Getting Started
```
// create a connection (e.g. "postgres", "mysql", or "sqlite3")
conn, _ := dbr.Open("postgres", "...")
// create a session for each business unit of execution (e.g. a web request or goworkers job)
sess := conn.NewSession(nil)
// get a record
var suggestion Suggestion
sess.Select("id", "title").From("suggestions").Where("id = ?", 1).Load(&suggestion)
// JSON-ready, with dbr.Null* types serialized like you want
json.Marshal(&suggestion)
```
### Feature highlights
#### Use a Sweet Query Builder or use Plain SQL
mailru/dbr supports both.
Sweet Query Builder:
```
stmt := dbr.Select("title", "body").
From("suggestions").
OrderBy("id").
Limit(10)
```
Plain SQL:
```
builder := dbr.SelectBySql("SELECT `title`, `body` FROM `suggestions` ORDER BY `id` ASC LIMIT 10")
```
#### Amazing instrumentation with session
All queries in mailru/dbr are made in the context of a session. This is because when instrumenting your app, it's important to understand which business action the query took place in.
Writing instrumented code is a first-class concern for mailru/dbr. We instrument each query to emit to an EventReceiver interface.
#### Faster performance than using database/sql directly
Every time you call database/sql's db.Query("SELECT ...") method, under the hood, the mysql driver will create a prepared statement, execute it, and then throw it away. This has a big performance cost.
mailru/dbr doesn't use prepared statements. We ported mysql's query escape functionality directly into our package, which means we interpolate all of those question marks with their arguments before they get to MySQL. The result of this is that it's way faster, and just as secure.
Check out these [benchmarks](https://github.com/tyler-smith/golang-sql-benchmark).
#### IN queries that aren't horrible
Traditionally, database/sql uses prepared statements, which means each argument in an IN clause needs its own question mark. mailru/dbr, on the other hand, handles interpolation itself so that you can easily use a single question mark paired with a dynamically sized slice.
```
ids := []int64{1, 2, 3, 4, 5}
builder.Where("id IN ?", ids) // `id` IN ?
```
map object can be used for IN queries as well.
Note: interpolating a map is slower than a slice, so prefer a slice when possible.
```
ids := map[int64]string{1: "one", 2: "two"}
builder.Where("id IN ?", ids) // `id` IN ?
```
#### JSON Friendly
Ever tried to JSON-encode a sql.NullString? You get:
```
{
"str1": {
"Valid": true,
"String": "Hi!"
},
"str2": {
"Valid": false,
"String": ""
}
}
```
Not quite what you want. mailru/dbr has dbr.NullString (and the rest of the Null* types) that encode correctly, giving you:
```
{
"str1": "Hi!",
"str2": null
}
```
#### Inserting multiple records
```
sess.InsertInto("suggestions").Columns("title", "body").
Record(suggestion1).
Record(suggestion2)
```
#### Updating records on conflict
```
stmt := sess.InsertInto("suggestions").Columns("title", "body").Record(suggestion1)
stmt.OnConflict("suggestions_pkey").Action("body", dbr.Proposed("body"))
```
#### Updating records
```
sess.Update("suggestions").
Set("title", "Gopher").
Set("body", "I love go.").
Where("id = ?", 1)
```
#### Transactions
```
tx, err := sess.Begin()
if err != nil {
return err
}
defer tx.RollbackUnlessCommitted()
// do stuff...
return tx.Commit()
```
#### Load database values to variables
Querying is the heart of mailru/dbr.
* Load(&any): load everything!
* LoadStruct(&oneStruct): load struct
* LoadStructs(&manyStructs): load a slice of structs
* LoadValue(&oneValue): load basic type
* LoadValues(&manyValues): load a slice of basic types
```
// columns are mapped by tag then by field
type Suggestion struct {
ID int64 // id, will be autoloaded by last insert id
Title string // title
Url string `db:"-"` // ignored
secret string // ignored
Body dbr.NullString `db:"content"` // content
User User
}
// By default dbr converts CamelCase property names to snake_case column_names.
// You can override this with struct tags, just like with JSON tags.
// This is especially helpful while migrating from legacy systems.
type Suggestion struct {
Id int64
Title dbr.NullString `db:"subject"` // subjects are called titles now
CreatedAt dbr.NullTime
}
var suggestions []Suggestion
sess.Select("*").From("suggestions").Load(&suggestions)
```
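The CamelCase→snake_case convention described above can be sketched as follows (`camelToSnake` is an illustrative function, not dbr's exact mapper, which also honors struct tags):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// camelToSnake converts a Go field name like "CreatedAt"
// to the column name "created_at".
func camelToSnake(s string) string {
	var b strings.Builder
	for i, r := range s {
		if unicode.IsUpper(r) {
			if i > 0 {
				b.WriteByte('_')
			}
			b.WriteRune(unicode.ToLower(r))
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(camelToSnake("CreatedAt")) // created_at
	fmt.Println(camelToSnake("Title"))     // title
}
```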
#### Join multiple tables
dbr supports many join types:
```
sess.Select("*").From("suggestions").
Join("subdomains", "suggestions.subdomain_id = subdomains.id")
sess.Select("*").From("suggestions").
LeftJoin("subdomains", "suggestions.subdomain_id = subdomains.id")
sess.Select("*").From("suggestions").
RightJoin("subdomains", "suggestions.subdomain_id = subdomains.id")
sess.Select("*").From("suggestions").
FullJoin("subdomains", "suggestions.subdomain_id = subdomains.id")
```
You can join on multiple tables:
```
sess.Select("*").From("suggestions").
Join("subdomains", "suggestions.subdomain_id = subdomains.id").
Join("accounts", "subdomains.accounts_id = accounts.id")
```
#### Quoting/escaping identifiers (e.g. table and column names)
```
dbr.I("suggestions.id") // `suggestions`.`id`
```
#### Subquery
```
sess.Select("count(id)").From(
dbr.Select("*").From("suggestions").As("count"),
)
```
#### Union
```
dbr.Union(
dbr.Select("*"),
dbr.Select("*"),
)
dbr.UnionAll(
dbr.Select("*"),
dbr.Select("*"),
)
```
Union can be used in subquery.
#### Alias/AS
* SelectStmt
```
dbr.Select("*").From("suggestions").As("count")
```
* Identity
```
dbr.I("suggestions").As("s")
```
* Union
```
dbr.Union(
dbr.Select("*"),
dbr.Select("*"),
).As("u1")
dbr.UnionAll(
dbr.Select("*"),
dbr.Select("*"),
).As("u2")
```
#### Building arbitrary condition
One common reason to use this is to prevent string concatenation in a loop.
* And
* Or
* Eq
* Neq
* Gt
* Gte
* Lt
* Lte
```
dbr.And(
dbr.Or(
dbr.Gt("created_at", "2015-09-10"),
dbr.Lte("created_at", "2015-09-11"),
),
dbr.Eq("title", "hello world"),
)
```
#### Built with extensibility
The core of dbr is interpolation, which can expand `?` with arbitrary SQL. If you need a feature that is not currently supported,
you can build it on your own (or use `dbr.Expr`).
To do that, the value that you wish to be expanded with `?` needs to implement `dbr.Builder`.
```
type Builder interface {
Build(Dialect, Buffer) error
}
```
### Driver support
* MySQL
* PostgreSQL
* SQLite3
* ClickHouse
These packages were developed by the [engineering team](https://eng.uservoice.com) at [UserVoice](https://www.uservoice.com) and currently power much of its infrastructure and tech stack.
### Thanks & Authors
Inspiration from these excellent libraries:
* [sqlx](https://github.com/jmoiron/sqlx) - various useful tools and utils for interacting with database/sql.
* [Squirrel](https://github.com/lann/squirrel) - simple fluent query builder.
Authors:
* <NAME> -- <https://github.com/cypriss>
* <NAME> -- <https://github.com/tyler-smith>
* <NAME> -- <https://github.com/taylorchu>
* Sponsored by [UserVoice](https://eng.uservoice.com)
Contributors:
* <NAME> -- <https://github.com/dinedal> - SQLite dialect
* <NAME> -- <https://github.com/bgaifullin> - ClickHouse dialect
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [Variables](#pkg-variables)
* [func InterpolateForDialect(query string, value []interface{}, d Dialect) (string, error)](#InterpolateForDialect)
* [func Load(rows *sql.Rows, value interface{}) (int, error)](#Load)
* [func Union(builder ...Builder) interface{ ... }](#Union)
* [func UnionAll(builder ...Builder) interface{ ... }](#UnionAll)
* [type Buffer](#Buffer)
* + [func NewBuffer() Buffer](#NewBuffer)
* [type BuildFunc](#BuildFunc)
* + [func (b BuildFunc) Build(d Dialect, buf Buffer) error](#BuildFunc.Build)
* [type Builder](#Builder)
* + [func And(cond ...Builder) Builder](#And)
+ [func Eq(column string, value interface{}) Builder](#Eq)
+ [func Expr(query string, value ...interface{}) Builder](#Expr)
+ [func Gt(column string, value interface{}) Builder](#Gt)
+ [func Gte(column string, value interface{}) Builder](#Gte)
+ [func Lt(column string, value interface{}) Builder](#Lt)
+ [func Lte(column string, value interface{}) Builder](#Lte)
+ [func Neq(column string, value interface{}) Builder](#Neq)
+ [func Or(cond ...Builder) Builder](#Or)
+ [func Proposed(column string) Builder](#Proposed)
* [type ConflictStmt](#ConflictStmt)
* [type Connection](#Connection)
* + [func Open(driver, dsn string, log EventReceiver) (*Connection, error)](#Open)
* + [func (conn *Connection) NewSession(log EventReceiver) *Session](#Connection.NewSession)
+ [func (conn *Connection) NewSessionContext(ctx context.Context, log EventReceiver) *Session](#Connection.NewSessionContext)
* [type DeleteBuilder](#DeleteBuilder)
* [type DeleteStmt](#DeleteStmt)
* + [func DeleteBySql(query string, value ...interface{}) DeleteStmt](#DeleteBySql)
+ [func DeleteFrom(table string) DeleteStmt](#DeleteFrom)
* [type Dialect](#Dialect)
* [type EventReceiver](#EventReceiver)
* [type Executer](#Executer)
* [type I](#I)
* + [func (i I) As(alias string) Builder](#I.As)
+ [func (i I) Build(d Dialect, buf Buffer) error](#I.Build)
* [type InsertBuilder](#InsertBuilder)
* [type InsertStmt](#InsertStmt)
* + [func InsertBySql(query string, value ...interface{}) InsertStmt](#InsertBySql)
+ [func InsertInto(table string) InsertStmt](#InsertInto)
* [type NullBool](#NullBool)
* + [func NewNullBool(v interface{}) (n NullBool)](#NewNullBool)
* + [func (n NullBool) MarshalJSON() ([]byte, error)](#NullBool.MarshalJSON)
+ [func (n *NullBool) UnmarshalJSON(b []byte) error](#NullBool.UnmarshalJSON)
* [type NullEventReceiver](#NullEventReceiver)
* + [func (n *NullEventReceiver) Event(eventName string)](#NullEventReceiver.Event)
+ [func (n *NullEventReceiver) EventErr(eventName string, err error) error](#NullEventReceiver.EventErr)
+ [func (n *NullEventReceiver) EventErrKv(eventName string, err error, kvs map[string]string) error](#NullEventReceiver.EventErrKv)
+ [func (n *NullEventReceiver) EventKv(eventName string, kvs map[string]string)](#NullEventReceiver.EventKv)
+ [func (n *NullEventReceiver) Timing(eventName string, nanoseconds int64)](#NullEventReceiver.Timing)
+ [func (n *NullEventReceiver) TimingKv(eventName string, nanoseconds int64, kvs map[string]string)](#NullEventReceiver.TimingKv)
* [type NullFloat64](#NullFloat64)
* + [func NewNullFloat64(v interface{}) (n NullFloat64)](#NewNullFloat64)
* + [func (n NullFloat64) MarshalJSON() ([]byte, error)](#NullFloat64.MarshalJSON)
+ [func (n *NullFloat64) UnmarshalJSON(b []byte) error](#NullFloat64.UnmarshalJSON)
* [type NullInt64](#NullInt64)
* + [func NewNullInt64(v interface{}) (n NullInt64)](#NewNullInt64)
* + [func (n NullInt64) MarshalJSON() ([]byte, error)](#NullInt64.MarshalJSON)
+ [func (n *NullInt64) UnmarshalJSON(b []byte) error](#NullInt64.UnmarshalJSON)
* [type NullString](#NullString)
* + [func NewNullString(v interface{}) (n NullString)](#NewNullString)
* + [func (n NullString) MarshalJSON() ([]byte, error)](#NullString.MarshalJSON)
+ [func (n *NullString) UnmarshalJSON(b []byte) error](#NullString.UnmarshalJSON)
* [type NullTime](#NullTime)
* + [func NewNullTime(v interface{}) (n NullTime)](#NewNullTime)
* + [func (n NullTime) MarshalJSON() ([]byte, error)](#NullTime.MarshalJSON)
+ [func (n *NullTime) Scan(value interface{}) error](#NullTime.Scan)
+ [func (n *NullTime) UnmarshalJSON(b []byte) error](#NullTime.UnmarshalJSON)
+ [func (n NullTime) Value() (driver.Value, error)](#NullTime.Value)
* [type SelectBuilder](#SelectBuilder)
* [type SelectStmt](#SelectStmt)
* + [func Select(column ...interface{}) SelectStmt](#Select)
+ [func SelectBySql(query string, value ...interface{}) SelectStmt](#SelectBySql)
* [type Session](#Session)
* + [func (sess *Session) Begin() (*Tx, error)](#Session.Begin)
+ [func (sess *Session) BeginWithOpts(opts *sql.TxOptions) (*Tx, error)](#Session.BeginWithOpts)
+ [func (sess *Session) DeleteBySql(query string, value ...interface{}) DeleteBuilder](#Session.DeleteBySql)
+ [func (sess *Session) DeleteFrom(table string) DeleteBuilder](#Session.DeleteFrom)
+ [func (sess *Session) Exec(query string, args ...interface{}) (sql.Result, error)](#Session.Exec)
+ [func (sess *Session) InsertBySql(query string, value ...interface{}) InsertBuilder](#Session.InsertBySql)
+ [func (sess *Session) InsertInto(table string) InsertBuilder](#Session.InsertInto)
+ [func (sess *Session) NewSession(log EventReceiver) *Session](#Session.NewSession)
+ [func (sess *Session) Query(query string, args ...interface{}) (*sql.Rows, error)](#Session.Query)
+ [func (sess *Session) Select(column ...string) SelectBuilder](#Session.Select)
+ [func (sess *Session) SelectBySql(query string, value ...interface{}) SelectBuilder](#Session.SelectBySql)
+ [func (sess *Session) Update(table string) UpdateBuilder](#Session.Update)
+ [func (sess *Session) UpdateBySql(query string, value ...interface{}) UpdateBuilder](#Session.UpdateBySql)
* [type SessionRunner](#SessionRunner)
* [type TracingEventReceiver](#TracingEventReceiver)
* [type Tx](#Tx)
* + [func (tx *Tx) Commit() error](#Tx.Commit)
+ [func (tx *Tx) DeleteBySql(query string, value ...interface{}) DeleteBuilder](#Tx.DeleteBySql)
+ [func (tx *Tx) DeleteFrom(table string) DeleteBuilder](#Tx.DeleteFrom)
+ [func (tx *Tx) Exec(query string, args ...interface{}) (sql.Result, error)](#Tx.Exec)
+ [func (tx *Tx) InsertBySql(query string, value ...interface{}) InsertBuilder](#Tx.InsertBySql)
+ [func (tx *Tx) InsertInto(table string) InsertBuilder](#Tx.InsertInto)
+ [func (tx *Tx) Query(query string, args ...interface{}) (*sql.Rows, error)](#Tx.Query)
+ [func (tx *Tx) Rollback() error](#Tx.Rollback)
+ [func (tx *Tx) RollbackUnlessCommitted()](#Tx.RollbackUnlessCommitted)
+ [func (tx *Tx) Select(column ...string) SelectBuilder](#Tx.Select)
+ [func (tx *Tx) SelectBySql(query string, value ...interface{}) SelectBuilder](#Tx.SelectBySql)
+ [func (tx *Tx) Update(table string) UpdateBuilder](#Tx.Update)
+ [func (tx *Tx) UpdateBySql(query string, value ...interface{}) UpdateBuilder](#Tx.UpdateBySql)
* [type UpdateBuilder](#UpdateBuilder)
* [type UpdateStmt](#UpdateStmt)
* + [func Update(table string) UpdateStmt](#Update)
+ [func UpdateBySql(query string, value ...interface{}) UpdateStmt](#UpdateBySql)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
```
var (
ErrNotFound = [errors](/errors).[New](/errors#New)("dbr: not found")
ErrNotSupported = [errors](/errors).[New](/errors#New)("dbr: not supported")
ErrTableNotSpecified = [errors](/errors).[New](/errors#New)("dbr: table not specified")
ErrColumnNotSpecified = [errors](/errors).[New](/errors#New)("dbr: column not specified")
ErrInvalidPointer = [errors](/errors).[New](/errors#New)("dbr: attempt to load into an invalid pointer")
ErrPlaceholderCount = [errors](/errors).[New](/errors#New)("dbr: wrong placeholder count")
ErrInvalidSliceLength = [errors](/errors).[New](/errors#New)("dbr: length of slice is 0. length must be >= 1")
ErrCantConvertToTime = [errors](/errors).[New](/errors#New)("dbr: can't convert to time.Time")
ErrInvalidTimestring = [errors](/errors).[New](/errors#New)("dbr: invalid time string")
ErrPrewhereNotSupported = [errors](/errors).[New](/errors#New)("dbr: PREWHERE statement is not supported")
)
```
```
var Now = nowSentinel{}
```
Now is a value that serializes to the current time
### Functions [¶](#pkg-functions)
####
func [InterpolateForDialect](https://github.com/mailru/dbr/blob/v1.2.0/interpolate.go#L21) [¶](#InterpolateForDialect)
added in v1.2.0
```
func InterpolateForDialect(query [string](/builtin#string), value []interface{}, d [Dialect](#Dialect)) ([string](/builtin#string), [error](/builtin#error))
```
InterpolateForDialect replaces each placeholder in query with the corresponding value, encoded for the given dialect
####
func [Load](https://github.com/mailru/dbr/blob/v1.2.0/load.go#L10) [¶](#Load)
added in v1.2.0
```
func Load(rows *[sql](/database/sql).[Rows](/database/sql#Rows), value interface{}) ([int](/builtin#int), [error](/builtin#error))
```
Load loads any value from sql.Rows
####
func [Union](https://github.com/mailru/dbr/blob/v1.2.0/union.go#L9) [¶](#Union)
added in v1.2.0
```
func Union(builder ...[Builder](#Builder)) interface {
[Builder](#Builder)
As([string](/builtin#string)) [Builder](#Builder)
}
```
Union builds "UNION ..."
####
func [UnionAll](https://github.com/mailru/dbr/blob/v1.2.0/union.go#L19) [¶](#UnionAll)
added in v1.2.0
```
func UnionAll(builder ...[Builder](#Builder)) interface {
[Builder](#Builder)
As([string](/builtin#string)) [Builder](#Builder)
}
```
UnionAll builds "UNION ALL ..."
### Types [¶](#pkg-types)
####
type [Buffer](https://github.com/mailru/dbr/blob/v1.2.0/buffer.go#L6) [¶](#Buffer)
added in v1.2.0
```
type Buffer interface {
WriteString(s [string](/builtin#string)) (n [int](/builtin#int), err [error](/builtin#error))
String() [string](/builtin#string)
WriteValue(v ...interface{}) (err [error](/builtin#error))
Value() []interface{}
}
```
Buffer is an interface used by Builder to store intermediate results
####
func [NewBuffer](https://github.com/mailru/dbr/blob/v1.2.0/buffer.go#L15) [¶](#NewBuffer)
added in v1.2.0
```
func NewBuffer() [Buffer](#Buffer)
```
NewBuffer creates a new Buffer
####
type [BuildFunc](https://github.com/mailru/dbr/blob/v1.2.0/builder.go#L10) [¶](#BuildFunc)
added in v1.2.0
```
type BuildFunc func([Dialect](#Dialect), [Buffer](#Buffer)) [error](/builtin#error)
```
BuildFunc is an adapter to allow the use of ordinary functions as Builder
####
func (BuildFunc) [Build](https://github.com/mailru/dbr/blob/v1.2.0/builder.go#L13) [¶](#BuildFunc.Build)
added in v1.2.0
```
func (b [BuildFunc](#BuildFunc)) Build(d [Dialect](#Dialect), buf [Buffer](#Buffer)) [error](/builtin#error)
```
Build implements Builder interface
####
type [Builder](https://github.com/mailru/dbr/blob/v1.2.0/builder.go#L5) [¶](#Builder)
added in v1.2.0
```
type Builder interface {
Build([Dialect](#Dialect), [Buffer](#Buffer)) [error](/builtin#error)
}
```
Builder builds SQL in a single dialect such as MySQL or PostgreSQL, e.g. XxxBuilder
####
func [And](https://github.com/mailru/dbr/blob/v1.2.0/condition.go#L23) [¶](#And)
added in v1.2.0
```
func And(cond ...[Builder](#Builder)) [Builder](#Builder)
```
And creates AND from a list of conditions
####
func [Eq](https://github.com/mailru/dbr/blob/v1.2.0/condition.go#L51) [¶](#Eq)
```
func Eq(column [string](/builtin#string), value interface{}) [Builder](#Builder)
```
Eq is `=`.
When value is nil, it will be translated to `IS NULL`.
When value is a slice, it will be translated to `IN`.
Otherwise it will be translated to `=`.
####
func [Expr](https://github.com/mailru/dbr/blob/v1.2.0/expr.go#L9) [¶](#Expr)
```
func Expr(query [string](/builtin#string), value ...interface{}) [Builder](#Builder)
```
Expr should be used when the desired SQL syntax is not supported by the query builders
####
func [Gt](https://github.com/mailru/dbr/blob/v1.2.0/condition.go#L94) [¶](#Gt)
added in v1.2.0
```
func Gt(column [string](/builtin#string), value interface{}) [Builder](#Builder)
```
Gt is `>`.
####
func [Gte](https://github.com/mailru/dbr/blob/v1.2.0/condition.go#L101) [¶](#Gte)
added in v1.2.0
```
func Gte(column [string](/builtin#string), value interface{}) [Builder](#Builder)
```
Gte is '>='.
####
func [Lt](https://github.com/mailru/dbr/blob/v1.2.0/condition.go#L108) [¶](#Lt)
added in v1.2.0
```
func Lt(column [string](/builtin#string), value interface{}) [Builder](#Builder)
```
Lt is '<'.
####
func [Lte](https://github.com/mailru/dbr/blob/v1.2.0/condition.go#L115) [¶](#Lte)
added in v1.2.0
```
func Lte(column [string](/builtin#string), value interface{}) [Builder](#Builder)
```
Lte is `<=`.
####
func [Neq](https://github.com/mailru/dbr/blob/v1.2.0/condition.go#L74) [¶](#Neq)
added in v1.2.0
```
func Neq(column [string](/builtin#string), value interface{}) [Builder](#Builder)
```
Neq is `!=`.
When value is nil, it will be translated to `IS NOT NULL`.
When value is a slice, it will be translated to `NOT IN`.
Otherwise it will be translated to `!=`.
####
func [Or](https://github.com/mailru/dbr/blob/v1.2.0/condition.go#L30) [¶](#Or)
added in v1.2.0
```
func Or(cond ...[Builder](#Builder)) [Builder](#Builder)
```
Or creates OR from a list of conditions
####
func [Proposed](https://github.com/mailru/dbr/blob/v1.2.0/insert.go#L46) [¶](#Proposed)
added in v1.2.0
```
func Proposed(column [string](/builtin#string)) [Builder](#Builder)
```
Proposed is a reference to the proposed value in an ON CONFLICT clause
####
type [ConflictStmt](https://github.com/mailru/dbr/blob/v1.2.0/insert.go#L11) [¶](#ConflictStmt)
added in v1.2.0
```
type ConflictStmt interface {
Action(column [string](/builtin#string), action interface{}) [ConflictStmt](#ConflictStmt)
}
```
ConflictStmt is ` ON CONFLICT ...` part of InsertStmt
####
type [Connection](https://github.com/mailru/dbr/blob/v1.2.0/dbr.go#L44) [¶](#Connection)
```
type Connection struct {
*[sql](/database/sql).[DB](/database/sql#DB)
Dialect [Dialect](#Dialect)
[EventReceiver](#EventReceiver)
}
```
Connection is a connection to the database with an EventReceiver to send events, errors, and timings to
####
func [Open](https://github.com/mailru/dbr/blob/v1.2.0/dbr.go#L14) [¶](#Open)
added in v1.2.0
```
func Open(driver, dsn [string](/builtin#string), log [EventReceiver](#EventReceiver)) (*[Connection](#Connection), [error](/builtin#error))
```
Open instantiates a Connection for a given database/sql connection and event receiver
####
func (*Connection) [NewSession](https://github.com/mailru/dbr/blob/v1.2.0/dbr.go#L58) [¶](#Connection.NewSession)
```
func (conn *[Connection](#Connection)) NewSession(log [EventReceiver](#EventReceiver)) *[Session](#Session)
```
NewSession instantiates a Session for the Connection
####
func (*Connection) [NewSessionContext](https://github.com/mailru/dbr/blob/v1.2.0/dbr.go#L63) [¶](#Connection.NewSessionContext)
added in v1.2.0
```
func (conn *[Connection](#Connection)) NewSessionContext(ctx [context](/context).[Context](/context#Context), log [EventReceiver](#EventReceiver)) *[Session](#Session)
```
NewSessionContext instantiates a Session with context for the Connection
####
type [DeleteBuilder](https://github.com/mailru/dbr/blob/v1.2.0/delete_builder.go#L10) [¶](#DeleteBuilder)
```
type DeleteBuilder interface {
[Builder](#Builder)
[EventReceiver](#EventReceiver)
[Executer](#Executer)
Where(query interface{}, value ...interface{}) [DeleteBuilder](#DeleteBuilder)
Limit(n [uint64](/builtin#uint64)) [DeleteBuilder](#DeleteBuilder)
}
```
DeleteBuilder builds "DELETE ..." stmt
####
type [DeleteStmt](https://github.com/mailru/dbr/blob/v1.2.0/delete.go#L4) [¶](#DeleteStmt)
added in v1.2.0
```
type DeleteStmt interface {
[Builder](#Builder)
Where(query interface{}, value ...interface{}) [DeleteStmt](#DeleteStmt)
}
```
DeleteStmt builds `DELETE ...`
####
func [DeleteBySql](https://github.com/mailru/dbr/blob/v1.2.0/delete.go#L51) [¶](#DeleteBySql)
added in v1.2.0
```
func DeleteBySql(query [string](/builtin#string), value ...interface{}) [DeleteStmt](#DeleteStmt)
```
DeleteBySql creates a DeleteStmt from raw query
####
func [DeleteFrom](https://github.com/mailru/dbr/blob/v1.2.0/delete.go#L40) [¶](#DeleteFrom)
added in v1.2.0
```
func DeleteFrom(table [string](/builtin#string)) [DeleteStmt](#DeleteStmt)
```
DeleteFrom creates a DeleteStmt
####
type [Dialect](https://github.com/mailru/dbr/blob/v1.2.0/dialect.go#L6) [¶](#Dialect)
added in v1.2.0
```
type Dialect interface {
QuoteIdent(id [string](/builtin#string)) [string](/builtin#string)
EncodeString(s [string](/builtin#string)) [string](/builtin#string)
EncodeBool(b [bool](/builtin#bool)) [string](/builtin#string)
EncodeTime(t [time](/time).[Time](/time#Time)) [string](/builtin#string)
EncodeBytes(b [][byte](/builtin#byte)) [string](/builtin#string)
Placeholder(n [int](/builtin#int)) [string](/builtin#string)
OnConflict(constraint [string](/builtin#string)) [string](/builtin#string)
Proposed(column [string](/builtin#string)) [string](/builtin#string)
Limit(offset, limit [int64](/builtin#int64)) [string](/builtin#string)
Prewhere() [string](/builtin#string)
}
```
Dialect abstracts database differences
####
type [EventReceiver](https://github.com/mailru/dbr/blob/v1.2.0/event.go#L6) [¶](#EventReceiver)
```
type EventReceiver interface {
Event(eventName [string](/builtin#string))
EventKv(eventName [string](/builtin#string), kvs map[[string](/builtin#string)][string](/builtin#string))
EventErr(eventName [string](/builtin#string), err [error](/builtin#error)) [error](/builtin#error)
EventErrKv(eventName [string](/builtin#string), err [error](/builtin#error), kvs map[[string](/builtin#string)][string](/builtin#string)) [error](/builtin#error)
Timing(eventName [string](/builtin#string), nanoseconds [int64](/builtin#int64))
TimingKv(eventName [string](/builtin#string), nanoseconds [int64](/builtin#int64), kvs map[[string](/builtin#string)][string](/builtin#string))
}
```
EventReceiver gets events from dbr methods for logging purposes
####
type [Executer](https://github.com/mailru/dbr/blob/v1.2.0/dbr.go#L107) [¶](#Executer)
added in v1.2.0
```
type Executer interface {
Exec() ([sql](/database/sql).[Result](/database/sql#Result), [error](/builtin#error))
ExecContext(ctx [context](/context).[Context](/context#Context)) ([sql](/database/sql).[Result](/database/sql#Result), [error](/builtin#error))
}
```
Executer can execute requests to database
####
type [I](https://github.com/mailru/dbr/blob/v1.2.0/ident.go#L4) [¶](#I)
added in v1.2.0
```
type I [string](/builtin#string)
```
I is an identifier, which will always be quoted
####
func (I) [As](https://github.com/mailru/dbr/blob/v1.2.0/ident.go#L13) [¶](#I.As)
added in v1.2.0
```
func (i [I](#I)) As(alias [string](/builtin#string)) [Builder](#Builder)
```
As creates an alias for the identifier, e.g. SELECT `a1` AS `a2`
####
func (I) [Build](https://github.com/mailru/dbr/blob/v1.2.0/ident.go#L7) [¶](#I.Build)
added in v1.2.0
```
func (i [I](#I)) Build(d [Dialect](#Dialect), buf [Buffer](#Buffer)) [error](/builtin#error)
```
Build escapes identifier in Dialect
####
type [InsertBuilder](https://github.com/mailru/dbr/blob/v1.2.0/insert_builder.go#L10) [¶](#InsertBuilder)
```
type InsertBuilder interface {
[Builder](#Builder)
[EventReceiver](#EventReceiver)
[Executer](#Executer)
Columns(column ...[string](/builtin#string)) [InsertBuilder](#InsertBuilder)
Values(value ...interface{}) [InsertBuilder](#InsertBuilder)
Record(structValue interface{}) [InsertBuilder](#InsertBuilder)
OnConflictMap(constraint [string](/builtin#string), actions map[[string](/builtin#string)]interface{}) [InsertBuilder](#InsertBuilder)
OnConflict(constraint [string](/builtin#string)) [ConflictStmt](#ConflictStmt)
Pair(column [string](/builtin#string), value interface{}) [InsertBuilder](#InsertBuilder)
}
```
InsertBuilder builds "INSERT ..." stmt
####
type [InsertStmt](https://github.com/mailru/dbr/blob/v1.2.0/insert.go#L27) [¶](#InsertStmt)
added in v1.2.0
```
type InsertStmt interface {
[Builder](#Builder)
Columns(column ...[string](/builtin#string)) [InsertStmt](#InsertStmt)
Values(value ...interface{}) [InsertStmt](#InsertStmt)
Record(structValue interface{}) [InsertStmt](#InsertStmt)
OnConflictMap(constraint [string](/builtin#string), actions map[[string](/builtin#string)]interface{}) [InsertStmt](#InsertStmt)
OnConflict(constraint [string](/builtin#string)) [ConflictStmt](#ConflictStmt)
}
```
InsertStmt builds `INSERT INTO ...`
####
func [InsertBySql](https://github.com/mailru/dbr/blob/v1.2.0/insert.go#L131) [¶](#InsertBySql)
added in v1.2.0
```
func InsertBySql(query [string](/builtin#string), value ...interface{}) [InsertStmt](#InsertStmt)
```
InsertBySql creates an InsertStmt from raw query
####
func [InsertInto](https://github.com/mailru/dbr/blob/v1.2.0/insert.go#L120) [¶](#InsertInto)
added in v1.2.0
```
func InsertInto(table [string](/builtin#string)) [InsertStmt](#InsertStmt)
```
InsertInto creates an InsertStmt
####
type [NullBool](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L45) [¶](#NullBool)
```
type NullBool struct {
[sql](/database/sql).[NullBool](/database/sql#NullBool)
}
```
NullBool is a type that can be null or a bool
####
func [NewNullBool](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L166) [¶](#NewNullBool)
added in v1.2.0
```
func NewNullBool(v interface{}) (n [NullBool](#NullBool))
```
NewNullBool creates a NullBool from v
####
func (NullBool) [MarshalJSON](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L84) [¶](#NullBool.MarshalJSON)
```
func (n [NullBool](#NullBool)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON correctly serializes a NullBool to JSON
####
func (*NullBool) [UnmarshalJSON](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L133) [¶](#NullBool.UnmarshalJSON)
```
func (n *[NullBool](#NullBool)) UnmarshalJSON(b [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON correctly deserializes a NullBool from JSON
####
type [NullEventReceiver](https://github.com/mailru/dbr/blob/v1.2.0/event.go#L28) [¶](#NullEventReceiver)
```
type NullEventReceiver struct{}
```
NullEventReceiver is a sentinel EventReceiver; use it if the caller doesn't supply one
####
func (*NullEventReceiver) [Event](https://github.com/mailru/dbr/blob/v1.2.0/event.go#L31) [¶](#NullEventReceiver.Event)
```
func (n *[NullEventReceiver](#NullEventReceiver)) Event(eventName [string](/builtin#string))
```
Event receives a simple notification when various events occur
####
func (*NullEventReceiver) [EventErr](https://github.com/mailru/dbr/blob/v1.2.0/event.go#L38) [¶](#NullEventReceiver.EventErr)
```
func (n *[NullEventReceiver](#NullEventReceiver)) EventErr(eventName [string](/builtin#string), err [error](/builtin#error)) [error](/builtin#error)
```
EventErr receives a notification of an error if one occurs
####
func (*NullEventReceiver) [EventErrKv](https://github.com/mailru/dbr/blob/v1.2.0/event.go#L42) [¶](#NullEventReceiver.EventErrKv)
```
func (n *[NullEventReceiver](#NullEventReceiver)) EventErrKv(eventName [string](/builtin#string), err [error](/builtin#error), kvs map[[string](/builtin#string)][string](/builtin#string)) [error](/builtin#error)
```
EventErrKv receives a notification of an error if one occurs along with optional key/value data
####
func (*NullEventReceiver) [EventKv](https://github.com/mailru/dbr/blob/v1.2.0/event.go#L35) [¶](#NullEventReceiver.EventKv)
```
func (n *[NullEventReceiver](#NullEventReceiver)) EventKv(eventName [string](/builtin#string), kvs map[[string](/builtin#string)][string](/builtin#string))
```
EventKv receives a notification when various events occur along with optional key/value data
####
func (*NullEventReceiver) [Timing](https://github.com/mailru/dbr/blob/v1.2.0/event.go#L47) [¶](#NullEventReceiver.Timing)
```
func (n *[NullEventReceiver](#NullEventReceiver)) Timing(eventName [string](/builtin#string), nanoseconds [int64](/builtin#int64))
```
Timing receives the time an event took to happen
####
func (*NullEventReceiver) [TimingKv](https://github.com/mailru/dbr/blob/v1.2.0/event.go#L50) [¶](#NullEventReceiver.TimingKv)
```
func (n *[NullEventReceiver](#NullEventReceiver)) TimingKv(eventName [string](/builtin#string), nanoseconds [int64](/builtin#int64), kvs map[[string](/builtin#string)][string](/builtin#string))
```
TimingKv receives the time an event took to happen along with optional key/value data
####
type [NullFloat64](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L21) [¶](#NullFloat64)
```
type NullFloat64 struct {
[sql](/database/sql).[NullFloat64](/database/sql#NullFloat64)
}
```
NullFloat64 is a type that can be null or a float64
####
func [NewNullFloat64](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L148) [¶](#NewNullFloat64)
added in v1.2.0
```
func NewNullFloat64(v interface{}) (n [NullFloat64](#NullFloat64))
```
NewNullFloat64 creates a NullFloat64 from v
####
func (NullFloat64) [MarshalJSON](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L68) [¶](#NullFloat64.MarshalJSON)
```
func (n [NullFloat64](#NullFloat64)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON correctly serializes a NullFloat64 to JSON
####
func (*NullFloat64) [UnmarshalJSON](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L110) [¶](#NullFloat64.UnmarshalJSON)
```
func (n *[NullFloat64](#NullFloat64)) UnmarshalJSON(b [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON correctly deserializes a NullFloat64 from JSON
####
type [NullInt64](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L26) [¶](#NullInt64)
```
type NullInt64 struct {
[sql](/database/sql).[NullInt64](/database/sql#NullInt64)
}
```
NullInt64 is a type that can be null or an int
####
func [NewNullInt64](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L142) [¶](#NewNullInt64)
added in v1.2.0
```
func NewNullInt64(v interface{}) (n [NullInt64](#NullInt64))
```
NewNullInt64 creates a NullInt64 from v
####
func (NullInt64) [MarshalJSON](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L60) [¶](#NullInt64.MarshalJSON)
```
func (n [NullInt64](#NullInt64)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON correctly serializes a NullInt64 to JSON
####
func (*NullInt64) [UnmarshalJSON](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L101) [¶](#NullInt64.UnmarshalJSON)
```
func (n *[NullInt64](#NullInt64)) UnmarshalJSON(b [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON correctly deserializes a NullInt64 from JSON
####
type [NullString](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L16) [¶](#NullString)
```
type NullString struct {
[sql](/database/sql).[NullString](/database/sql#NullString)
}
```
NullString is a type that can be null or a string
####
func [NewNullString](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L154) [¶](#NewNullString)
added in v1.2.0
```
func NewNullString(v interface{}) (n [NullString](#NullString))
```
NewNullString creates a NullString from v
####
func (NullString) [MarshalJSON](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L52) [¶](#NullString.MarshalJSON)
```
func (n [NullString](#NullString)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON correctly serializes a NullString to JSON
####
func (*NullString) [UnmarshalJSON](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L92) [¶](#NullString.UnmarshalJSON)
```
func (n *[NullString](#NullString)) UnmarshalJSON(b [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON correctly deserializes a NullString from JSON
####
type [NullTime](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L31) [¶](#NullTime)
```
type NullTime struct {
Time [time](/time).[Time](/time#Time)
Valid [bool](/builtin#bool) // Valid is true if Time is not NULL
}
```
NullTime is a type that can be null or a time
####
func [NewNullTime](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L160) [¶](#NewNullTime)
added in v1.2.0
```
func NewNullTime(v interface{}) (n [NullTime](#NullTime))
```
NewNullTime creates a NullTime from v
####
func (NullTime) [MarshalJSON](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L76) [¶](#NullTime.MarshalJSON)
```
func (n [NullTime](#NullTime)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON correctly serializes a NullTime to JSON
####
func (*NullTime) [Scan](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L194) [¶](#NullTime.Scan)
added in v1.2.0
```
func (n *[NullTime](#NullTime)) Scan(value interface{}) [error](/builtin#error)
```
Scan implements the Scanner interface.
The value type must be time.Time or string / []byte (formatted time-string),
otherwise Scan fails.
####
func (*NullTime) [UnmarshalJSON](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L119) [¶](#NullTime.UnmarshalJSON)
```
func (n *[NullTime](#NullTime)) UnmarshalJSON(b [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON correctly deserializes a NullTime from JSON
####
func (NullTime) [Value](https://github.com/mailru/dbr/blob/v1.2.0/types.go#L37) [¶](#NullTime.Value)
added in v1.2.0
```
func (n [NullTime](#NullTime)) Value() ([driver](/database/sql/driver).[Value](/database/sql/driver#Value), [error](/builtin#error))
```
Value implements the driver Valuer interface.
####
type [SelectBuilder](https://github.com/mailru/dbr/blob/v1.2.0/select_builder.go#L11) [¶](#SelectBuilder)
```
type SelectBuilder interface {
[Builder](#Builder)
[EventReceiver](#EventReceiver)
As(alias [string](/builtin#string)) [Builder](#Builder)
Comment(text [string](/builtin#string)) [SelectBuilder](#SelectBuilder)
Distinct() [SelectBuilder](#SelectBuilder)
ForUpdate() [SelectBuilder](#SelectBuilder)
From(table interface{}) [SelectBuilder](#SelectBuilder)
FullJoin(table, on interface{}) [SelectBuilder](#SelectBuilder)
GroupBy(col ...[string](/builtin#string)) [SelectBuilder](#SelectBuilder)
Having(query interface{}, value ...interface{}) [SelectBuilder](#SelectBuilder)
InTimezone(loc *[time](/time).[Location](/time#Location)) [SelectBuilder](#SelectBuilder)
Join(table, on interface{}) [SelectBuilder](#SelectBuilder)
LeftJoin(table, on interface{}) [SelectBuilder](#SelectBuilder)
Limit(n [uint64](/builtin#uint64)) [SelectBuilder](#SelectBuilder)
Offset(n [uint64](/builtin#uint64)) [SelectBuilder](#SelectBuilder)
OrderAsc(col [string](/builtin#string)) [SelectBuilder](#SelectBuilder)
OrderBy(col [string](/builtin#string)) [SelectBuilder](#SelectBuilder)
OrderDesc(col [string](/builtin#string)) [SelectBuilder](#SelectBuilder)
OrderDir(col [string](/builtin#string), isAsc [bool](/builtin#bool)) [SelectBuilder](#SelectBuilder)
Paginate(page, perPage [uint64](/builtin#uint64)) [SelectBuilder](#SelectBuilder)
Prewhere(query interface{}, value ...interface{}) [SelectBuilder](#SelectBuilder)
RightJoin(table, on interface{}) [SelectBuilder](#SelectBuilder)
SkipLocked() [SelectBuilder](#SelectBuilder)
Where(query interface{}, value ...interface{}) [SelectBuilder](#SelectBuilder)
GetRows() (*[sql](/database/sql).[Rows](/database/sql#Rows), [error](/builtin#error))
GetRowsContext([context](/context).[Context](/context#Context)) (*[sql](/database/sql).[Rows](/database/sql#Rows), [error](/builtin#error))
// contains filtered or unexported methods
}
```
SelectBuilder builds a "SELECT" stmt
####
type [SelectStmt](https://github.com/mailru/dbr/blob/v1.2.0/select.go#L4) [¶](#SelectStmt)
added in v1.2.0
```
type SelectStmt interface {
[Builder](#Builder)
From(table interface{}) [SelectStmt](#SelectStmt)
Distinct() [SelectStmt](#SelectStmt)
Prewhere(query interface{}, value ...interface{}) [SelectStmt](#SelectStmt)
Where(query interface{}, value ...interface{}) [SelectStmt](#SelectStmt)
Having(query interface{}, value ...interface{}) [SelectStmt](#SelectStmt)
GroupBy(col ...[string](/builtin#string)) [SelectStmt](#SelectStmt)
OrderAsc(col [string](/builtin#string)) [SelectStmt](#SelectStmt)
OrderDesc(col [string](/builtin#string)) [SelectStmt](#SelectStmt)
Limit(n [uint64](/builtin#uint64)) [SelectStmt](#SelectStmt)
Offset(n [uint64](/builtin#uint64)) [SelectStmt](#SelectStmt)
ForUpdate() [SelectStmt](#SelectStmt)
SkipLocked() [SelectStmt](#SelectStmt)
Join(table, on interface{}) [SelectStmt](#SelectStmt)
LeftJoin(table, on interface{}) [SelectStmt](#SelectStmt)
RightJoin(table, on interface{}) [SelectStmt](#SelectStmt)
FullJoin(table, on interface{}) [SelectStmt](#SelectStmt)
AddComment(text [string](/builtin#string)) [SelectStmt](#SelectStmt)
As(alias [string](/builtin#string)) [Builder](#Builder)
}
```
SelectStmt builds `SELECT ...`
####
func [Select](https://github.com/mailru/dbr/blob/v1.2.0/select.go#L182) [¶](#Select)
added in v1.2.0
```
func Select(column ...interface{}) [SelectStmt](#SelectStmt)
```
Select creates a SelectStmt
####
func [SelectBySql](https://github.com/mailru/dbr/blob/v1.2.0/select.go#L201) [¶](#SelectBySql)
added in v1.2.0
```
func SelectBySql(query [string](/builtin#string), value ...interface{}) [SelectStmt](#SelectStmt)
```
SelectBySql creates a SelectStmt from raw query
####
type [Session](https://github.com/mailru/dbr/blob/v1.2.0/dbr.go#L51) [¶](#Session)
```
type Session struct {
*[Connection](#Connection)
[EventReceiver](#EventReceiver)
// contains filtered or unexported fields
}
```
Session represents a business unit of execution for some connection
####
func (*Session) [Begin](https://github.com/mailru/dbr/blob/v1.2.0/transaction.go#L17) [¶](#Session.Begin)
```
func (sess *[Session](#Session)) Begin() (*[Tx](#Tx), [error](/builtin#error))
```
Begin creates a transaction for the given session
####
func (*Session) [BeginWithOpts](https://github.com/mailru/dbr/blob/v1.2.0/transaction.go#L22) [¶](#Session.BeginWithOpts)
added in v1.2.0
```
func (sess *[Session](#Session)) BeginWithOpts(opts *[sql](/database/sql).[TxOptions](/database/sql#TxOptions)) (*[Tx](#Tx), [error](/builtin#error))
```
BeginWithOpts creates a transaction for the given session with the ability to set TxOptions
####
func (*Session) [DeleteBySql](https://github.com/mailru/dbr/blob/v1.2.0/delete_builder.go#L54) [¶](#Session.DeleteBySql)
added in v1.2.0
```
func (sess *[Session](#Session)) DeleteBySql(query [string](/builtin#string), value ...interface{}) [DeleteBuilder](#DeleteBuilder)
```
DeleteBySql creates a DeleteBuilder from raw query
####
func (*Session) [DeleteFrom](https://github.com/mailru/dbr/blob/v1.2.0/delete_builder.go#L30) [¶](#Session.DeleteFrom)
```
func (sess *[Session](#Session)) DeleteFrom(table [string](/builtin#string)) [DeleteBuilder](#DeleteBuilder)
```
DeleteFrom creates a DeleteBuilder
####
func (*Session) [Exec](https://github.com/mailru/dbr/blob/v1.2.0/dbr_go18.go#L11) [¶](#Session.Exec)
added in v1.2.0
```
func (sess *[Session](#Session)) Exec(query [string](/builtin#string), args ...interface{}) ([sql](/database/sql).[Result](/database/sql#Result), [error](/builtin#error))
```
Exec executes a query without returning any rows.
The args are for any placeholder parameters in the query.
####
func (*Session) [InsertBySql](https://github.com/mailru/dbr/blob/v1.2.0/insert_builder.go#L56) [¶](#Session.InsertBySql)
added in v1.2.0
```
func (sess *[Session](#Session)) InsertBySql(query [string](/builtin#string), value ...interface{}) [InsertBuilder](#InsertBuilder)
```
InsertBySql creates an InsertBuilder from a raw query
####
func (*Session) [InsertInto](https://github.com/mailru/dbr/blob/v1.2.0/insert_builder.go#L34) [¶](#Session.InsertInto)
```
func (sess *[Session](#Session)) InsertInto(table [string](/builtin#string)) [InsertBuilder](#InsertBuilder)
```
InsertInto creates an InsertBuilder
####
func (*Session) [NewSession](https://github.com/mailru/dbr/blob/v1.2.0/dbr.go#L71) [¶](#Session.NewSession)
added in v1.2.0
```
func (sess *[Session](#Session)) NewSession(log [EventReceiver](#EventReceiver)) *[Session](#Session)
```
NewSession forks current session
####
func (*Session) [Query](https://github.com/mailru/dbr/blob/v1.2.0/dbr_go18.go#L17) [¶](#Session.Query)
added in v1.2.0
```
func (sess *[Session](#Session)) Query(query [string](/builtin#string), args ...interface{}) (*[sql](/database/sql).[Rows](/database/sql#Rows), [error](/builtin#error))
```
Query executes a query that returns rows, typically a SELECT.
The args are for any placeholder parameters in the query.
####
func (*Session) [Select](https://github.com/mailru/dbr/blob/v1.2.0/select_builder.go#L62) [¶](#Session.Select)
```
func (sess *[Session](#Session)) Select(column ...[string](/builtin#string)) [SelectBuilder](#SelectBuilder)
```
Select creates a SelectBuilder
####
func (*Session) [SelectBySql](https://github.com/mailru/dbr/blob/v1.2.0/select_builder.go#L84) [¶](#Session.SelectBySql)
```
func (sess *[Session](#Session)) SelectBySql(query [string](/builtin#string), value ...interface{}) [SelectBuilder](#SelectBuilder)
```
SelectBySql creates a SelectBuilder from raw query
####
func (*Session) [Update](https://github.com/mailru/dbr/blob/v1.2.0/update_builder.go#L32) [¶](#Session.Update)
```
func (sess *[Session](#Session)) Update(table [string](/builtin#string)) [UpdateBuilder](#UpdateBuilder)
```
Update creates an UpdateBuilder
####
func (*Session) [UpdateBySql](https://github.com/mailru/dbr/blob/v1.2.0/update_builder.go#L56) [¶](#Session.UpdateBySql)
```
func (sess *[Session](#Session)) UpdateBySql(query [string](/builtin#string), value ...interface{}) [UpdateBuilder](#UpdateBuilder)
```
UpdateBySql creates an UpdateBuilder from a raw query
####
type [SessionRunner](https://github.com/mailru/dbr/blob/v1.2.0/dbr.go#L84) [¶](#SessionRunner)
```
type SessionRunner interface {
Select(column ...[string](/builtin#string)) [SelectBuilder](#SelectBuilder)
SelectBySql(query [string](/builtin#string), value ...interface{}) [SelectBuilder](#SelectBuilder)
InsertInto(table [string](/builtin#string)) [InsertBuilder](#InsertBuilder)
InsertBySql(query [string](/builtin#string), value ...interface{}) [InsertBuilder](#InsertBuilder)
Update(table [string](/builtin#string)) [UpdateBuilder](#UpdateBuilder)
UpdateBySql(query [string](/builtin#string), value ...interface{}) [UpdateBuilder](#UpdateBuilder)
DeleteFrom(table [string](/builtin#string)) [DeleteBuilder](#DeleteBuilder)
DeleteBySql(query [string](/builtin#string), value ...interface{}) [DeleteBuilder](#DeleteBuilder)
}
```
SessionRunner can do anything that a Session can except start a transaction.
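Because both a Session and a Tx satisfy SessionRunner, query helpers can be written once and run identically inside or outside a transaction. A stdlib-only sketch of the idea (the interface below is a cut-down stand-in for illustration, not dbr's actual types):

```go
package main

import "fmt"

// runner is a cut-down stand-in for dbr.SessionRunner: anything that can
// start a SELECT. In dbr, both *Session and *Tx satisfy the full interface.
type runner interface {
	Select(column ...string) string // the real interface returns a SelectBuilder
}

type session struct{}

func (session) Select(column ...string) string { return "session SELECT" }

type tx struct{}

func (tx) Select(column ...string) string { return "tx SELECT" }

// loadUser works the same whether it is handed a session or a transaction,
// which is the point of programming against SessionRunner.
func loadUser(r runner) string { return r.Select("id", "name") }

func main() {
	fmt.Println(loadUser(session{}), loadUser(tx{}))
}
```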
####
type [TracingEventReceiver](https://github.com/mailru/dbr/blob/v1.2.0/event.go#L17) [¶](#TracingEventReceiver)
added in v1.2.0
```
type TracingEventReceiver interface {
SpanStart(ctx [context](/context).[Context](/context#Context), eventName, query [string](/builtin#string)) [context](/context).[Context](/context#Context)
SpanError(ctx [context](/context).[Context](/context#Context), err [error](/builtin#error))
SpanFinish(ctx [context](/context).[Context](/context#Context))
}
```
TracingEventReceiver is an optional interface an EventReceiver type can implement to allow tracing instrumentation
####
type [Tx](https://github.com/mailru/dbr/blob/v1.2.0/transaction.go#L9) [¶](#Tx)
```
type Tx struct {
[EventReceiver](#EventReceiver)
Dialect [Dialect](#Dialect)
*[sql](/database/sql).[Tx](/database/sql#Tx)
// contains filtered or unexported fields
}
```
Tx is a transaction for the given Session
####
func (*Tx) [Commit](https://github.com/mailru/dbr/blob/v1.2.0/transaction.go#L38) [¶](#Tx.Commit)
```
func (tx *[Tx](#Tx)) Commit() [error](/builtin#error)
```
Commit finishes the transaction
####
func (*Tx) [DeleteBySql](https://github.com/mailru/dbr/blob/v1.2.0/delete_builder.go#L66) [¶](#Tx.DeleteBySql)
added in v1.2.0
```
func (tx *[Tx](#Tx)) DeleteBySql(query [string](/builtin#string), value ...interface{}) [DeleteBuilder](#DeleteBuilder)
```
DeleteBySql creates a DeleteBuilder from a raw query
####
func (*Tx) [DeleteFrom](https://github.com/mailru/dbr/blob/v1.2.0/delete_builder.go#L42) [¶](#Tx.DeleteFrom)
```
func (tx *[Tx](#Tx)) DeleteFrom(table [string](/builtin#string)) [DeleteBuilder](#DeleteBuilder)
```
DeleteFrom creates a DeleteBuilder
####
func (*Tx) [Exec](https://github.com/mailru/dbr/blob/v1.2.0/dbr_go18.go#L23) [¶](#Tx.Exec)
added in v1.2.0
```
func (tx *[Tx](#Tx)) Exec(query [string](/builtin#string), args ...interface{}) ([sql](/database/sql).[Result](/database/sql#Result), [error](/builtin#error))
```
Exec executes a query without returning any rows.
The args are for any placeholder parameters in the query.
####
func (*Tx) [InsertBySql](https://github.com/mailru/dbr/blob/v1.2.0/insert_builder.go#L67) [¶](#Tx.InsertBySql)
added in v1.2.0
```
func (tx *[Tx](#Tx)) InsertBySql(query [string](/builtin#string), value ...interface{}) [InsertBuilder](#InsertBuilder)
```
InsertBySql creates an InsertBuilder from a raw query
####
func (*Tx) [InsertInto](https://github.com/mailru/dbr/blob/v1.2.0/insert_builder.go#L45) [¶](#Tx.InsertInto)
```
func (tx *[Tx](#Tx)) InsertInto(table [string](/builtin#string)) [InsertBuilder](#InsertBuilder)
```
InsertInto creates an InsertBuilder
####
func (*Tx) [Query](https://github.com/mailru/dbr/blob/v1.2.0/dbr_go18.go#L29) [¶](#Tx.Query)
added in v1.2.0
```
func (tx *[Tx](#Tx)) Query(query [string](/builtin#string), args ...interface{}) (*[sql](/database/sql).[Rows](/database/sql#Rows), [error](/builtin#error))
```
Query executes a query that returns rows, typically a SELECT.
The args are for any placeholder parameters in the query.
####
func (*Tx) [Rollback](https://github.com/mailru/dbr/blob/v1.2.0/transaction.go#L48) [¶](#Tx.Rollback)
```
func (tx *[Tx](#Tx)) Rollback() [error](/builtin#error)
```
Rollback cancels the transaction
####
func (*Tx) [RollbackUnlessCommitted](https://github.com/mailru/dbr/blob/v1.2.0/transaction.go#L60) [¶](#Tx.RollbackUnlessCommitted)
```
func (tx *[Tx](#Tx)) RollbackUnlessCommitted()
```
RollbackUnlessCommitted rolls back the transaction unless it has already been committed or rolled back.
Useful with defer tx.RollbackUnlessCommitted(), so you don't have to handle N failure cases. Keep in mind that the only way to detect an error on the rollback is via the event log.
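The defer pattern described above can be sketched with a toy state machine. The txState type below only models the commit/rollback bookkeeping (dbr's real Tx wraps *sql.Tx); the names are illustrative:

```go
package main

import "fmt"

// txState mimics the bookkeeping a transaction needs so that
// RollbackUnlessCommitted can be a safe no-op after Commit or Rollback.
type txState struct{ committed, rolledBack bool }

func (t *txState) Commit() error   { t.committed = true; return nil }
func (t *txState) Rollback() error { t.rolledBack = true; return nil }

// RollbackUnlessCommitted does nothing once Commit or Rollback has run,
// which is what makes `defer tx.RollbackUnlessCommitted()` safe everywhere.
func (t *txState) RollbackUnlessCommitted() {
	if !t.committed && !t.rolledBack {
		t.Rollback()
	}
}

// do shows the pattern: every early return rolls back automatically,
// and the happy path commits exactly once.
func do(fail bool) *txState {
	tx := &txState{}
	defer tx.RollbackUnlessCommitted()
	if fail {
		return tx // early return: the deferred call rolls back
	}
	tx.Commit()
	return tx
}

func main() {
	fmt.Println(do(true).rolledBack, do(false).committed) // prints: true true
}
```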
####
func (*Tx) [Select](https://github.com/mailru/dbr/blob/v1.2.0/select_builder.go#L73) [¶](#Tx.Select)
```
func (tx *[Tx](#Tx)) Select(column ...[string](/builtin#string)) [SelectBuilder](#SelectBuilder)
```
Select creates a SelectBuilder
####
func (*Tx) [SelectBySql](https://github.com/mailru/dbr/blob/v1.2.0/select_builder.go#L95) [¶](#Tx.SelectBySql)
```
func (tx *[Tx](#Tx)) SelectBySql(query [string](/builtin#string), value ...interface{}) [SelectBuilder](#SelectBuilder)
```
SelectBySql creates a SelectBuilder from a raw query
####
func (*Tx) [Update](https://github.com/mailru/dbr/blob/v1.2.0/update_builder.go#L44) [¶](#Tx.Update)
```
func (tx *[Tx](#Tx)) Update(table [string](/builtin#string)) [UpdateBuilder](#UpdateBuilder)
```
Update creates an UpdateBuilder
####
func (*Tx) [UpdateBySql](https://github.com/mailru/dbr/blob/v1.2.0/update_builder.go#L68) [¶](#Tx.UpdateBySql)
```
func (tx *[Tx](#Tx)) UpdateBySql(query [string](/builtin#string), value ...interface{}) [UpdateBuilder](#UpdateBuilder)
```
UpdateBySql creates an UpdateBuilder from a raw query
####
type [UpdateBuilder](https://github.com/mailru/dbr/blob/v1.2.0/update_builder.go#L10) [¶](#UpdateBuilder)
```
type UpdateBuilder interface {
[Builder](#Builder)
[EventReceiver](#EventReceiver)
[Executer](#Executer)
Where(query interface{}, value ...interface{}) [UpdateBuilder](#UpdateBuilder)
Set(column [string](/builtin#string), value interface{}) [UpdateBuilder](#UpdateBuilder)
SetMap(m map[[string](/builtin#string)]interface{}) [UpdateBuilder](#UpdateBuilder)
Limit(n [uint64](/builtin#uint64)) [UpdateBuilder](#UpdateBuilder)
}
```
UpdateBuilder builds `UPDATE ...`
####
type [UpdateStmt](https://github.com/mailru/dbr/blob/v1.2.0/update.go#L6) [¶](#UpdateStmt)
added in v1.2.0
```
type UpdateStmt interface {
[Builder](#Builder)
Where(query interface{}, value ...interface{}) [UpdateStmt](#UpdateStmt)
Set(column [string](/builtin#string), value interface{}) [UpdateStmt](#UpdateStmt)
SetMap(m map[[string](/builtin#string)]interface{}) [UpdateStmt](#UpdateStmt)
SetRecord(structValue interface{}) [UpdateStmt](#UpdateStmt)
}
```
UpdateStmt builds `UPDATE ...`
####
func [Update](https://github.com/mailru/dbr/blob/v1.2.0/update.go#L65) [¶](#Update)
added in v1.2.0
```
func Update(table [string](/builtin#string)) [UpdateStmt](#UpdateStmt)
```
Update creates an UpdateStmt
####
func [UpdateBySql](https://github.com/mailru/dbr/blob/v1.2.0/update.go#L77) [¶](#UpdateBySql)
added in v1.2.0
```
func UpdateBySql(query [string](/builtin#string), value ...interface{}) [UpdateStmt](#UpdateStmt)
```
UpdateBySql creates an UpdateStmt with a raw query
gamlss | cran | R | Package ‘gamlss.cens’
October 7, 2023
Type Package
Title Fitting an Interval Response Variable Using `gamlss.family'
Distributions
Version 5.0-7
Date 2023-10-08
Depends R (>= 2.2.1), gamlss.dist, gamlss, survival, methods
Author <NAME> <<EMAIL>>, <NAME>, <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description This is an add-on package to GAMLSS. The purpose of this
package is to allow users to fit interval response variables in
GAMLSS models. The main function gen.cens() generates a
censored version of an existing GAMLSS family distribution.
License GPL-2 | GPL-3
URL https://www.gamlss.com/
NeedsCompilation no
Repository CRAN
Date/Publication 2023-10-07 14:20:02 UTC
R topics documented:
gamlss.cens-package
cens
cens.d
cens.p
cens.q
gen.cens
lip
gamlss.cens-package Fitting an Interval Response Variable Using ‘gamlss.family’ Distributions
Description
This is an add-on package to GAMLSS. The purpose of this package is to allow users to fit interval
response variables in GAMLSS models. The main function gen.cens() generates a censored version
of an existing GAMLSS family distribution.
Details
The DESCRIPTION file:
Package: gamlss.cens
Type: Package
Title: Fitting an Interval Response Variable Using ‘gamlss.family’ Distributions
Version: 5.0-7
Date: 2023-10-08
Depends: R (>= 2.2.1), gamlss.dist, gamlss, survival, methods
Author: <NAME> <<EMAIL>>, <NAME>, <NAME>, <NAME>
Maintainer: <NAME> <<EMAIL>>
Description: This is an add-on package to GAMLSS. The purpose of this package is to allow users to fit interval response variables in GAMLSS models. The main function gen.cens() generates a censored version of an existing GAMLSS family distribution.
License: GPL-2 | GPL-3
URL: https://www.gamlss.com/
Index of help topics:
cens Function to Fit Censored Data Using a
gamlss.family Distribution
cens.d Censored Probability Density Function of a
gamlss.family Distribution
cens.p Censored Cumulative Probability Density
Function of a gamlss.family Distribution
cens.q Censored Inverse Cumulative Probability Density
Function of a gamlss.family Distribution
gamlss.cens-package Fitting an Interval Response Variable Using
'gamlss.family' Distributions
gen.cens A Function to Generate Appropriate Functions to
Be Used to Fit a Censored Response variable in
GAMLSS
lip Data for lip
Author(s)
<NAME> <<EMAIL>>, <NAME>, <NAME>, <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>. and <NAME>. (2005). Generalized additive models for location, scale and
shape, (with discussion), Appl. Statist., 54, part 3, pp 507-554.
<NAME>., <NAME>., <NAME>., and <NAME>. (2019) Distributions for
modeling location, scale, and shape: Using GAMLSS in R, Chapman and Hall/CRC. An older
version can be found in https://www.gamlss.com/.
<NAME>. <NAME>. (2007) Generalized additive models for location scale and shape
(GAMLSS) in R. Journal of Statistical Software, Vol. 23, Issue 7, Dec 2007, https://www.
jstatsoft.org/v23/i07/.
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>., (2017) Flexible
Regression and Smoothing: Using GAMLSS in R, Chapman and Hall/CRC.
(see also https://www.gamlss.com/).
See Also
gamlss, gamlss.family
Examples
library(survival)
library(gamlss)
library(gamlss.dist)
# comparing results with package survival
# fitting the exponential distribution
ms1<-survreg(Surv(futime, fustat) ~ ecog.ps + rx, ovarian,
dist='exponential')
mg1<-gamlss(Surv(futime, fustat) ~ ecog.ps + rx, data=ovarian,
family=cens(EXP),c.crit=0.00001)
if(abs(-2*ms1$loglik[2]-deviance(mg1))>0.001) stop(paste("discrepancies in exp"))
if(sum(coef(ms1)-coef(mg1))>0.001) warning(paste("discrepancies in coef in exp"))
summary(ms1)
summary(mg1)
# fitting the Weibull distribution
ms2 <-survreg(Surv(futime, fustat) ~ ecog.ps + rx, ovarian, dist='weibull')
mg2 <-gamlss(Surv(futime, fustat) ~ ecog.ps + rx, data=ovarian,
family=cens(WEI, delta=c(0.001,0.001)), c.crit=0.00001)
if(abs(-2*ms2$loglik[2]-deviance(mg2))>0.005)
stop(paste("discrepancies in deviance in WEI"))
summary(ms2);summary(mg2)
# compare the scale parameter
1/exp(coef(mg2,"sigma"))
# now fit the Weibull in different parameterisations
mg21<-gamlss(Surv(futime, fustat) ~ ecog.ps + rx, data=ovarian,
family=cens(WEI2), method=mixed(2,30))
mg21<-gamlss(Surv(futime, fustat) ~ ecog.ps + rx, data=ovarian,
family=cens(WEI3))
cens Function to Fit Censored Data Using a gamlss.family Distribution
Description
This function can be used to fit censored or interval response variables. It takes as an argument an
existing gamlss.family distribution and generates a new gamlss.family object which then can
be used to fit right, left or interval censored data.
Usage
cens(family = "NO", type = c("right", "left", "interval"), name = "cens",
local = TRUE, delta = NULL, ...)
Arguments
family a gamlss.family object, which is used to define the distribution and the link
functions of the various parameters. The distribution families supported by
gamlss() can be found in gamlss.family and in the package gamlss.dist.
name the characters you want to add to the name of the new functions; by default cens
type what type of censoring is required, right, left or interval.
local if TRUE the function will try to find the environment of gamlss to generate the d
and p functions required for the fitting, if FALSE the functions will be generated
in the global environment
delta the delta increment used in the numerical derivatives
... for extra arguments
Details
This function is created to help users fit censored data using an existing gamlss.family distribution. It does this by taking an existing gamlss.family and changing some of the components of the distribution to help the fitting process. In particular it (i) creates a d function (for calculating the censored likelihood) and a p function (for generating the quantile residuals) within gamlss, and (ii) changes the global deviance function G.dev.incr, the first derivative functions (see the note below) and other quantities from the original distribution.
Value
It returns a gamlss.family object which has all the components needed for fitting a distribution in
gamlss.
Note
This function is experimental and could be changed in the future. The function cens changes the first derivatives of the original gamlss family d function to numerical derivatives for the new censored d function. The default increment delta for these numerical derivatives is eps * pmax(abs(x), 1), where eps <- sqrt(.Machine$double.eps). The default delta could be inappropriate for specific applications and can be overwritten by using the argument delta.
Note that in order to get the correct standard errors you have to generate the "d" function by using gen.cens().
Author(s)
<NAME> <<EMAIL>> and <NAME> <<EMAIL>>
References
<NAME>. and <NAME>. (2005). Generalized additive models for location, scale and
shape, (with discussion), Appl. Statist., 54, part 3, 1-38.
<NAME>., <NAME>., <NAME>., and <NAME>. (2019) Distributions for
modeling location, scale, and shape: Using GAMLSS in R, Chapman and Hall/CRC. An older
version can be found in https://www.gamlss.com/.
<NAME>. <NAME>. (2007) Generalized additive models for location scale and shape
(GAMLSS) in R. Journal of Statistical Software, Vol. 23, Issue 7, Dec 2007, https://www.
jstatsoft.org/v23/i07/.
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>., (2017) Flexible
Regression and Smoothing: Using GAMLSS in R, Chapman and Hall/CRC.
(see also https://www.gamlss.com/).
See Also
cens.d, cens.p, gen.cens
Examples
# comparing output with the survreg() of package survival
library(gamlss.dist)
library(survival)
#--------------------------------------------------------------------
# right censoring example
# example from survreg()
# fitting the exponential distribution
mexp<-survreg(Surv(futime, fustat) ~ ecog.ps + rx, ovarian, dist='exponential')
gexp<-gamlss(Surv(futime, fustat) ~ ecog.ps + rx, data=ovarian,
family=cens(EXP), c.crit=0.00001)
if(abs(-2*mexp$loglik[2]-deviance(gexp))>0.001)
stop(paste("discrepancies in exponential models"))
if(sum(coef(mexp)-coef(gexp))>0.001)
warning(paste("discrepancies in coef in exponential models"))
summary(mexp)
gen.cens(EXP)
summary(gexp)
# fitting different distributions
# weibull
mwei <-survreg(Surv(futime, fustat) ~ ecog.ps + rx, ovarian, dist='weibull')
gwei<-gamlss(Surv(futime, fustat) ~ ecog.ps + rx, data=ovarian,
family=cens(WEI, delta=c(0.0001,0.0001)), c.crit=0.00001)
if(abs(-2*mwei$loglik[2]-deviance(gwei))>0.005)
stop(paste("discrepancies in deviance in WEI"))
scoef <- sum(coef(mwei)-coef(gwei))
if(abs(scoef)>0.005)
warning(cat("discrepancies in coef in WEI of ", scoef, "\n"))
# WEI3 is weibull parametrised with mu as the mean
gwei3 <- gamlss(Surv(futime, fustat) ~ ecog.ps + rx, data=ovarian,
family=cens(WEI3))
# log normal
mlogno <-survreg(Surv(futime, fustat) ~ ecog.ps + rx, ovarian,
dist='lognormal')
glogno<-gamlss(Surv(futime, fustat) ~ ecog.ps + rx, data=ovarian,
family=cens(LOGNO, delta=c(0.001,0.001)), c.crit=0.00001)
if(abs(-2*mlogno$loglik[2]-deviance(glogno))>0.005)
stop(paste("discrepancies in deviance in LOGNO"))
coef(mlogno);coef(glogno)
#--------------------------------------------------------------------
# now interval response variable
data(lip)
with(lip, y)
mg1<-survreg(y ~ poly(Tem,2)+poly(pH,2)+poly(aw,2), data=lip, dist="weibull")
gg1<- gamlss(y ~ poly(Tem,2)+poly(pH,2)+poly(aw,2), data=lip,
family=cens(WEI,type="interval"), c.crit=0.00001, n.cyc=200, trace=FALSE)
summary(mg1)
gen.cens(WEI,type="interval")
summary(gg1)
#--------------------------------------------------------------------
# now fitting discretised continuous distribution to count data
# fitting discretised Gamma
data(species)
# first generate the distributions
gen.cens(GA, type="interval")
gen.cens(IG, type="interval")
mGA<-gamlss(Surv(fish,fish+1,type= "interval2")~log(lake)+I(log(lake)^2),
sigma.fo=~log(lake), data=species, family=GAic)
# fitting discretised inverse Gaussian
mIG<-gamlss(Surv(fish,fish+1,type= "interval2")~log(lake)+I(log(lake)^2),
sigma.fo=~log(lake), data=species, family=IGic)
AIC(mGA,mIG)
plot(fish~log(lake), data=species)
with(species, lines(log(lake)[order(lake)], fitted(mIG)[order(lake)]))
#--------------------------------------------------------------------
cens.d Censored Probability Density Function of a gamlss.family Distribu-
tion
Description
Creates a probability density function from a current gamlss.family distribution to be used for
fitting a censored or interval response variable.
Usage
cens.d(family = "NO", type = c("right", "left", "interval"), ...)
Arguments
family a gamlss.family object, which is used to define the distribution and the link
functions of the various parameters. The distribution families supported by
gamlss() can be found in gamlss.family and in the package gamlss.dist.
type whether right, left or interval censoring is required (right is the default)
... for extra arguments
Details
This function is used to calculate the likelihood function for censored data. This function is not
supposed to be used on its own but it is used in function gen.cens.
Value
Returns a modified d family function. The arguments of the original d function are the
same.
Note
For an example see gen.cens()
Author(s)
<NAME> <<EMAIL>> and <NAME>
References
<NAME>. and <NAME>. (2005). Generalized additive models for location, scale and
shape, (with discussion), Appl. Statist., 54, part 3, 1-38.
<NAME>., <NAME>., <NAME>., and <NAME>. (2019) Distributions for
modeling location, scale, and shape: Using GAMLSS in R, Chapman and Hall/CRC. An older
version can be found in https://www.gamlss.com/.
<NAME>. <NAME>. (2007) Generalized additive models for location scale and shape
(GAMLSS) in R. Journal of Statistical Software, Vol. 23, Issue 7, Dec 2007, https://www.
jstatsoft.org/v23/i07/.
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>., (2017) Flexible
Regression and Smoothing: Using GAMLSS in R, Chapman and Hall/CRC.
(see also https://www.gamlss.com/).
See Also
cens.p, gen.cens
Examples
#see the help for function cens for an example
cens.p Censored Cumulative Probability Density Function of a gamlss.family
Distribution
Description
Creates a cumulative density function from a current gamlss.family distribution suitable for cen-
sored or interval response variable data.
Usage
cens.p(family = "NO", type = c("right", "left", "interval"), ...)
Arguments
family a gamlss.family object, which is used to define the distribution and the link
functions of the various parameters. The distribution families supported by
gamlss() can be found in gamlss.family.
type whether right, left or interval censoring is required (right is the default)
... for extra arguments
Details
This function is used to calculate the quantile residuals for censored data distributions. This function
is not supposed to be used on its own but it is used in the function gen.cens.
Value
Returns a modified p family function. The arguments of the original p function are the
same.
Note
For an example see gen.cens()
Author(s)
<NAME> <<EMAIL>> and <NAME> <<EMAIL>>
References
<NAME>. and <NAME>. (2005). Generalized additive models for location, scale and
shape, (with discussion), Appl. Statist., 54, part 3, 1-38.
<NAME>., <NAME>., <NAME>., and <NAME>. (2019) Distributions for
modeling location, scale, and shape: Using GAMLSS in R, Chapman and Hall/CRC. An older
version can be found in https://www.gamlss.com/.
<NAME>. <NAME>. (2007) Generalized additive models for location scale and shape
(GAMLSS) in R. Journal of Statistical Software, Vol. 23, Issue 7, Dec 2007, https://www.
jstatsoft.org/v23/i07/.
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>., (2017) Flexible
Regression and Smoothing: Using GAMLSS in R, Chapman and Hall/CRC.
(see also https://www.gamlss.com/).
See Also
cens.d, gen.cens
Examples
#see the help for function cens for an example
cens.q Censored Inverse Cumulative Probability Density Function of a
gamlss.family Distribution
Description
Creates the inverse cumulative density function from a current gamlss.family distribution suitable
for censored or interval response variable data. This is a dummy function identical to the uncensored
one but it is needed for consistency in centile estimation from censored data.
Usage
cens.q(family = "NO", ...)
Arguments
family a gamlss.family object, which is used to define the distribution and the link
functions of the various parameters. The distribution families supported by
gamlss() can be found in gamlss.family.
... for extra arguments
Details
This is a dummy function, used only to calculate centiles from a censored response variable. This
function is not supposed to be used on its own but is used by the function gen.cens.
Value
Returns a modified q family function. The arguments of the original q function are the
same.
Note
For an example see gen.cens()
Author(s)
<NAME> <<EMAIL>> and <NAME> <<EMAIL>>
References
<NAME>. and <NAME>. (2005). Generalized additive models for location, scale and
shape, (with discussion), Appl. Statist., 54, part 3, 1-38.
<NAME>., <NAME>., <NAME>., and <NAME>. (2019) Distributions for
modeling location, scale, and shape: Using GAMLSS in R, Chapman and Hall/CRC. An older
version can be found in https://www.gamlss.com/.
<NAME>. <NAME>. (2007) Generalized additive models for location scale and shape
(GAMLSS) in R. Journal of Statistical Software, Vol. 23, Issue 7, Dec 2007, https://www.
jstatsoft.org/v23/i07/.
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>., (2017) Flexible
Regression and Smoothing: Using GAMLSS in R, Chapman and Hall/CRC.
(see also https://www.gamlss.com/).
See Also
cens.d, cens.p, gen.cens
Examples
#see the help for function cens for an example
gen.cens A Function to Generate Appropriate Functions to Be Used to Fit a
Censored Response variable in GAMLSS
Description
The gen.cens() function allows the user to generate d, p, (dummy) q and fitting gamlss functions
for censored and interval response variables. The function can take any gamlss.family distribution.
Usage
gen.cens(family = "NO", type = c("right", "left", "interval"),
name = "cens", ...)
Arguments
family a gamlss.family object, which is used to define the distribution and the link
functions of the various parameters. The distribution families supported by
gamlss() can be found in gamlss.family and in the package gamlss.dist.
name the characters you want to add to the name of the new functions; by default the
first letter of type followed by c, e.g. WEIic for a WEI (Weibull) interval response variable
type whether right, left or interval censoring is required (right is the default)
... for extra arguments
Value
Returns the d, p, (dummy) q and the fitting function (the one used in the gamlss fitting algorithm)
of a gamlss.family distribution.
Author(s)
<NAME> <<EMAIL>> and <NAME>
References
<NAME>. and <NAME>. (2005). Generalized additive models for location, scale and
shape, (with discussion), Appl. Statist., 54, part 3, 1-38.
<NAME>., <NAME>., <NAME>., and <NAME>. (2019) Distributions for
modeling location, scale, and shape: Using GAMLSS in R, Chapman and Hall/CRC. An older
version can be found in https://www.gamlss.com/.
<NAME>. <NAME>. (2007) Generalized additive models for location scale and shape
(GAMLSS) in R. Journal of Statistical Software, Vol. 23, Issue 7, Dec 2007, https://www.
jstatsoft.org/v23/i07/.
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>., (2017) Flexible
Regression and Smoothing: Using GAMLSS in R, Chapman and Hall/CRC.
(see also https://www.gamlss.com/).
See Also
cens.d, cens.p, cens
Examples
library(gamlss.dist)
data(lip)
gen.cens(WEI,type="interval")
WEIic
gg1<- gamlss(y ~ poly(Tem,2)+poly(pH,2)+poly(aw,2), data=lip,
family=WEIic, c.crit=0.00001, n.cyc=200, trace=FALSE)
lip Data for lip
Description
The data set used in this package was collected by Dr <NAME> (University of Leipzig) and
passed on to us by Professor <NAME> of London Metropolitan University.
It consists of experimental enzymology results from a research project which attempted to develop
a generic food spoilage model.
The data set contains a column called NAMES, which shows the experiment name, three columns
with values of the environmental conditions: temperature (Tem), pH and water activity (aw); the
rest of the columns contain the activity of the cocktails, observed at certain days.
The researchers recorded the activity of proteases and lipases in each cocktail and were interested
in predicting the time when the activity started given the environmental conditions. The activity is
a positive integer and enzymes are considered inactive when activity=0.
Usage
data(lip)
Format
A data frame with 120 observations on the following 14 variables.
name a factor with levels for the different experiments
Tem a numeric vector showing the temperature
pH a numeric vector showing the pH
aw a numeric vector showing the water activity
X0.d a numeric vector if enzyme reacted at day 0
X1.d a numeric vector if enzyme reacted at day 1
X2.d a numeric vector if enzyme reacted at day 2
X4.d a numeric vector if enzyme reacted at days 3 or 4
X11.d a numeric vector if enzyme reacted at days 5 to 11
X18.d a numeric vector if enzyme reacted at days 12 to 18
X25.d a numeric vector if enzyme reacted at days 19 to 25
X32.d a numeric vector if enzyme reacted at days 26 to 32
X39.d a numeric vector if enzyme reacted at days 33 to 39
y a matrix with 3 columns: this is a Surv() object indicating the start, the finish and the
censoring indicator, as defined in the function Surv() of package survival.
Source
Prof. <NAME>, London Metropolitan University
Examples
data(lip)
with(lip, y)
rusoto_schemas | rust | Rust | Crate rusoto_schemas
===
Amazon EventBridge Schema Registry
If you’re using the service, you’re probably looking for SchemasClient and Schemas.
Structs
---
CreateDiscovererRequest
CreateDiscovererResponse
CreateRegistryRequest
CreateRegistryResponse
CreateSchemaRequest
CreateSchemaResponse
DeleteDiscovererRequest
DeleteRegistryRequest
DeleteResourcePolicyRequest
DeleteSchemaRequest
DeleteSchemaVersionRequest
DescribeCodeBindingRequest
DescribeCodeBindingResponse
DescribeDiscovererRequest
DescribeDiscovererResponse
DescribeRegistryRequest
DescribeRegistryResponse
DescribeSchemaRequest
DescribeSchemaResponse
DiscovererSummary
ExportSchemaRequest
ExportSchemaResponse
GetCodeBindingSourceRequest
GetCodeBindingSourceResponse
GetDiscoveredSchemaRequest
GetDiscoveredSchemaResponse
GetResourcePolicyRequest
GetResourcePolicyResponse
ListDiscoverersRequest
ListDiscoverersResponse
ListRegistriesRequest
ListRegistriesResponse
ListSchemaVersionsRequest
ListSchemaVersionsResponse
ListSchemasRequest
ListSchemasResponse
ListTagsForResourceRequest
ListTagsForResourceResponse
PutCodeBindingRequest
PutCodeBindingResponse
PutResourcePolicyRequest: The name of the policy.
PutResourcePolicyResponse
RegistrySummary
SchemaSummary: A summary of schema details.
SchemaVersionSummary
SchemasClient: A client for the Schemas API.
SearchSchemaSummary
SearchSchemaVersionSummary
SearchSchemasRequest
SearchSchemasResponse
StartDiscovererRequest
StartDiscovererResponse
StopDiscovererRequest
StopDiscovererResponse
TagResourceRequest
UntagResourceRequest
UpdateDiscovererRequest
UpdateDiscovererResponse
UpdateRegistryRequest: Updates the registry.
UpdateRegistryResponse
UpdateSchemaRequest
UpdateSchemaResponse
Enums
---
CreateDiscovererError: Errors returned by CreateDiscoverer
CreateRegistryError: Errors returned by CreateRegistry
CreateSchemaError: Errors returned by CreateSchema
DeleteDiscovererError: Errors returned by DeleteDiscoverer
DeleteRegistryError: Errors returned by DeleteRegistry
DeleteResourcePolicyError: Errors returned by DeleteResourcePolicy
DeleteSchemaError: Errors returned by DeleteSchema
DeleteSchemaVersionError: Errors returned by DeleteSchemaVersion
DescribeCodeBindingError: Errors returned by DescribeCodeBinding
DescribeDiscovererError: Errors returned by DescribeDiscoverer
DescribeRegistryError: Errors returned by DescribeRegistry
DescribeSchemaError: Errors returned by DescribeSchema
ExportSchemaError: Errors returned by ExportSchema
GetCodeBindingSourceError: Errors returned by GetCodeBindingSource
GetDiscoveredSchemaError: Errors returned by GetDiscoveredSchema
GetResourcePolicyError: Errors returned by GetResourcePolicy
ListDiscoverersError: Errors returned by ListDiscoverers
ListRegistriesError: Errors returned by ListRegistries
ListSchemaVersionsError: Errors returned by ListSchemaVersions
ListSchemasError: Errors returned by ListSchemas
ListTagsForResourceError: Errors returned by ListTagsForResource
PutCodeBindingError: Errors returned by PutCodeBinding
PutResourcePolicyError: Errors returned by PutResourcePolicy
SearchSchemasError: Errors returned by SearchSchemas
StartDiscovererError: Errors returned by StartDiscoverer
StopDiscovererError: Errors returned by StopDiscoverer
TagResourceError: Errors returned by TagResource
UntagResourceError: Errors returned by UntagResource
UpdateDiscovererError: Errors returned by UpdateDiscoverer
UpdateRegistryError: Errors returned by UpdateRegistry
UpdateSchemaError: Errors returned by UpdateSchema
Traits
---
Schemas: Trait representing the capabilities of the Schemas API. Schemas clients implement this trait.
Crate rusoto_schemas
===
Amazon EventBridge Schema Registry
If you’re using the service, you’re probably looking for SchemasClient and Schemas.
Structs
---
CreateDiscovererRequestCreateDiscovererResponseCreateRegistryRequestCreateRegistryResponseCreateSchemaRequestCreateSchemaResponseDeleteDiscovererRequestDeleteRegistryRequestDeleteResourcePolicyRequestDeleteSchemaRequestDeleteSchemaVersionRequestDescribeCodeBindingRequestDescribeCodeBindingResponseDescribeDiscovererRequestDescribeDiscovererResponseDescribeRegistryRequestDescribeRegistryResponseDescribeSchemaRequestDescribeSchemaResponseDiscovererSummaryExportSchemaRequestExportSchemaResponseGetCodeBindingSourceRequestGetCodeBindingSourceResponseGetDiscoveredSchemaRequestGetDiscoveredSchemaResponseGetResourcePolicyRequestGetResourcePolicyResponseListDiscoverersRequestListDiscoverersResponseListRegistriesRequestListRegistriesResponseListSchemaVersionsRequestListSchemaVersionsResponseListSchemasRequestListSchemasResponseListTagsForResourceRequestListTagsForResourceResponsePutCodeBindingRequestPutCodeBindingResponsePutResourcePolicyRequestThe name of the policy.
PutResourcePolicyResponseRegistrySummarySchemaSummaryA summary of schema details.
SchemaVersionSummarySchemasClientA client for the Schemas API.
SearchSchemaSummarySearchSchemaVersionSummarySearchSchemasRequestSearchSchemasResponseStartDiscovererRequestStartDiscovererResponseStopDiscovererRequestStopDiscovererResponseTagResourceRequestUntagResourceRequestUpdateDiscovererRequestUpdateDiscovererResponseUpdateRegistryRequestUpdates the registry.
UpdateRegistryResponseUpdateSchemaRequestUpdateSchemaResponseEnums
---
CreateDiscovererError - Errors returned by CreateDiscoverer
CreateRegistryError - Errors returned by CreateRegistry
CreateSchemaError - Errors returned by CreateSchema
DeleteDiscovererError - Errors returned by DeleteDiscoverer
DeleteRegistryError - Errors returned by DeleteRegistry
DeleteResourcePolicyError - Errors returned by DeleteResourcePolicy
DeleteSchemaError - Errors returned by DeleteSchema
DeleteSchemaVersionError - Errors returned by DeleteSchemaVersion
DescribeCodeBindingError - Errors returned by DescribeCodeBinding
DescribeDiscovererError - Errors returned by DescribeDiscoverer
DescribeRegistryError - Errors returned by DescribeRegistry
DescribeSchemaError - Errors returned by DescribeSchema
ExportSchemaError - Errors returned by ExportSchema
GetCodeBindingSourceError - Errors returned by GetCodeBindingSource
GetDiscoveredSchemaError - Errors returned by GetDiscoveredSchema
GetResourcePolicyError - Errors returned by GetResourcePolicy
ListDiscoverersError - Errors returned by ListDiscoverers
ListRegistriesError - Errors returned by ListRegistries
ListSchemaVersionsError - Errors returned by ListSchemaVersions
ListSchemasError - Errors returned by ListSchemas
ListTagsForResourceError - Errors returned by ListTagsForResource
PutCodeBindingError - Errors returned by PutCodeBinding
PutResourcePolicyError - Errors returned by PutResourcePolicy
SearchSchemasError - Errors returned by SearchSchemas
StartDiscovererError - Errors returned by StartDiscoverer
StopDiscovererError - Errors returned by StopDiscoverer
TagResourceError - Errors returned by TagResource
UntagResourceError - Errors returned by UntagResource
UpdateDiscovererError - Errors returned by UpdateDiscoverer
UpdateRegistryError - Errors returned by UpdateRegistry
UpdateSchemaError - Errors returned by UpdateSchema
Traits
---
Schemas - Trait representing the capabilities of the Schemas API. Schemas clients implement this trait.
Struct rusoto_schemas::SchemasClient
===
```
pub struct SchemasClient { /* private fields */ }
```
A client for the Schemas API.
Implementations
---
source### impl SchemasClient
source#### pub fn new(region: Region) -> SchemasClient
Creates a client backed by the default tokio event loop.
The client will use the default credentials provider and TLS client.
source#### pub fn new_with<P, D>( request_dispatcher: D, credentials_provider: P, region: Region) -> SchemasClient where P: ProvideAwsCredentials + Send + Sync + 'static, D: DispatchSignedRequest + Send + Sync + 'static,
source#### pub fn new_with_client(client: Client, region: Region) -> SchemasClient
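For context, a minimal usage sketch: constructing a client with `new` and making one call through the `Schemas` trait. This assumes the `rusoto_core`, `rusoto_schemas`, and `tokio` crates plus AWS credentials in the environment; the region choice is arbitrary.

```rust
use rusoto_core::Region;
use rusoto_schemas::{ListRegistriesRequest, Schemas, SchemasClient};

#[tokio::main]
async fn main() {
    // Client backed by the default credentials provider and TLS client.
    let client = SchemasClient::new(Region::UsEast1);

    // Every operation takes a request struct and returns a boxed future.
    let resp = client
        .list_registries(ListRegistriesRequest::default())
        .await
        .expect("ListRegistries failed");

    println!("{:?}", resp.registries);
}
```

`new_with` and `new_with_client` follow the same pattern but let you supply your own credentials provider or HTTP dispatcher.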
Trait Implementations
---
source### impl Clone for SchemasClient
source#### fn clone(&self) -> SchemasClient
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Schemas for SchemasClient
source#### fn create_discoverer<'life0, 'async_trait>( &'life0 self, input: CreateDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<CreateDiscovererResponse, RusotoError<CreateDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a discoverer.
source#### fn create_registry<'life0, 'async_trait>( &'life0 self, input: CreateRegistryRequest) -> Pin<Box<dyn Future<Output = Result<CreateRegistryResponse, RusotoError<CreateRegistryError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a registry.
source#### fn create_schema<'life0, 'async_trait>( &'life0 self, input: CreateSchemaRequest) -> Pin<Box<dyn Future<Output = Result<CreateSchemaResponse, RusotoError<CreateSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a schema definition.
Inactive schemas will be deleted after two years.
source#### fn delete_discoverer<'life0, 'async_trait>( &'life0 self, input: DeleteDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a discoverer.
source#### fn delete_registry<'life0, 'async_trait>( &'life0 self, input: DeleteRegistryRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteRegistryError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a registry.
source#### fn delete_resource_policy<'life0, 'async_trait>( &'life0 self, input: DeleteResourcePolicyRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteResourcePolicyError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Delete the resource-based policy attached to the specified registry.
source#### fn delete_schema<'life0, 'async_trait>( &'life0 self, input: DeleteSchemaRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Delete a schema definition.
source#### fn delete_schema_version<'life0, 'async_trait>( &'life0 self, input: DeleteSchemaVersionRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteSchemaVersionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Delete the schema version definition
source#### fn describe_code_binding<'life0, 'async_trait>( &'life0 self, input: DescribeCodeBindingRequest) -> Pin<Box<dyn Future<Output = Result<DescribeCodeBindingResponse, RusotoError<DescribeCodeBindingError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describe the code binding URI.
source#### fn describe_discoverer<'life0, 'async_trait>( &'life0 self, input: DescribeDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<DescribeDiscovererResponse, RusotoError<DescribeDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes the discoverer.
source#### fn describe_registry<'life0, 'async_trait>( &'life0 self, input: DescribeRegistryRequest) -> Pin<Box<dyn Future<Output = Result<DescribeRegistryResponse, RusotoError<DescribeRegistryError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes the registry.
source#### fn describe_schema<'life0, 'async_trait>( &'life0 self, input: DescribeSchemaRequest) -> Pin<Box<dyn Future<Output = Result<DescribeSchemaResponse, RusotoError<DescribeSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Retrieve the schema definition.
source#### fn get_code_binding_source<'life0, 'async_trait>( &'life0 self, input: GetCodeBindingSourceRequest) -> Pin<Box<dyn Future<Output = Result<GetCodeBindingSourceResponse, RusotoError<GetCodeBindingSourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Get the code binding source URI.
source#### fn get_discovered_schema<'life0, 'async_trait>( &'life0 self, input: GetDiscoveredSchemaRequest) -> Pin<Box<dyn Future<Output = Result<GetDiscoveredSchemaResponse, RusotoError<GetDiscoveredSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Get the discovered schema that was generated based on sampled events.
source#### fn get_resource_policy<'life0, 'async_trait>( &'life0 self, input: GetResourcePolicyRequest) -> Pin<Box<dyn Future<Output = Result<GetResourcePolicyResponse, RusotoError<GetResourcePolicyError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Retrieves the resource-based policy attached to a given registry.
source#### fn list_discoverers<'life0, 'async_trait>( &'life0 self, input: ListDiscoverersRequest) -> Pin<Box<dyn Future<Output = Result<ListDiscoverersResponse, RusotoError<ListDiscoverersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
List the discoverers.
source#### fn list_registries<'life0, 'async_trait>( &'life0 self, input: ListRegistriesRequest) -> Pin<Box<dyn Future<Output = Result<ListRegistriesResponse, RusotoError<ListRegistriesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
List the registries.
source#### fn list_schema_versions<'life0, 'async_trait>( &'life0 self, input: ListSchemaVersionsRequest) -> Pin<Box<dyn Future<Output = Result<ListSchemaVersionsResponse, RusotoError<ListSchemaVersionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Provides a list of the schema versions and related information.
source#### fn list_schemas<'life0, 'async_trait>( &'life0 self, input: ListSchemasRequest) -> Pin<Box<dyn Future<Output = Result<ListSchemasResponse, RusotoError<ListSchemasError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
List the schemas.
source#### fn list_tags_for_resource<'life0, 'async_trait>( &'life0 self, input: ListTagsForResourceRequest) -> Pin<Box<dyn Future<Output = Result<ListTagsForResourceResponse, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Get tags for resource.
source#### fn put_code_binding<'life0, 'async_trait>( &'life0 self, input: PutCodeBindingRequest) -> Pin<Box<dyn Future<Output = Result<PutCodeBindingResponse, RusotoError<PutCodeBindingError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Put code binding URI
source#### fn put_resource_policy<'life0, 'async_trait>( &'life0 self, input: PutResourcePolicyRequest) -> Pin<Box<dyn Future<Output = Result<PutResourcePolicyResponse, RusotoError<PutResourcePolicyError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Attaches a resource-based policy to the given registry.
source#### fn search_schemas<'life0, 'async_trait>( &'life0 self, input: SearchSchemasRequest) -> Pin<Box<dyn Future<Output = Result<SearchSchemasResponse, RusotoError<SearchSchemasError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Search the schemas
source#### fn start_discoverer<'life0, 'async_trait>( &'life0 self, input: StartDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<StartDiscovererResponse, RusotoError<StartDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Starts the discoverer
source#### fn stop_discoverer<'life0, 'async_trait>( &'life0 self, input: StopDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<StopDiscovererResponse, RusotoError<StopDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Stops the discoverer
source#### fn tag_resource<'life0, 'async_trait>( &'life0 self, input: TagResourceRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<TagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Add tags to a resource.
source#### fn untag_resource<'life0, 'async_trait>( &'life0 self, input: UntagResourceRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<UntagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Removes tags from a resource.
source#### fn update_discoverer<'life0, 'async_trait>( &'life0 self, input: UpdateDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<UpdateDiscovererResponse, RusotoError<UpdateDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates the discoverer
source#### fn update_registry<'life0, 'async_trait>( &'life0 self, input: UpdateRegistryRequest) -> Pin<Box<dyn Future<Output = Result<UpdateRegistryResponse, RusotoError<UpdateRegistryError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates a registry.
source#### fn update_schema<'life0, 'async_trait>( &'life0 self, input: UpdateSchemaRequest) -> Pin<Box<dyn Future<Output = Result<UpdateSchemaResponse, RusotoError<UpdateSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates the schema definition
Inactive schemas will be deleted after two years.
source#### fn export_schema<'life0, 'async_trait>( &'life0 self, input: ExportSchemaRequest) -> Pin<Box<dyn Future<Output = Result<ExportSchemaResponse, RusotoError<ExportSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Auto Trait Implementations
---
### impl !RefUnwindSafe for SchemasClient
### impl Send for SchemasClient
### impl Sync for SchemasClient
### impl Unpin for SchemasClient
### impl !UnwindSafe for SchemasClient
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
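As a self-contained illustration (standard-library types only, nothing rusoto-specific) of why this blanket impl matters: it lets generic code accept owned values and references uniformly.

```rust
use std::borrow::Borrow;

// The blanket `impl<T> Borrow<T> for T`, together with impls such as
// `String: Borrow<str>`, means one generic bound covers both cases.
fn len_of<S: Borrow<str>>(s: S) -> usize {
    s.borrow().len()
}

fn main() {
    assert_eq!(len_of("world"), 5);            // &str
    assert_eq!(len_of(String::from("hi")), 2); // owned String
    println!("ok");
}
```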
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
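A self-contained illustration of this blanket impl (standard-library mechanics only; the temperature types are made up for the example):

```rust
// Implementing `From<T> for U` is all that is needed: the blanket
// `impl<T, U> Into<U> for T where U: From<T>` supplies `.into()`.
struct Celsius(f64);
struct Fahrenheit(f64);

impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Self {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

fn main() {
    let f: Fahrenheit = Celsius(100.0).into(); // dispatches to the From impl
    assert_eq!(f.0, 212.0);
    println!("{}", f.0);
}
```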
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
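A self-contained sketch of `ToOwned` in practice (standard-library types only): the blanket `T: Clone` impl just clones, but unsized types like `str` get owning counterparts.

```rust
fn main() {
    // `ToOwned` generalizes `Clone` to borrowed types: &str -> String.
    let s: &str = "hello";
    let owned: String = s.to_owned();
    assert_eq!(owned, "hello");

    // `clone_into` writes into an existing value, reusing its
    // allocation where possible (stabilized for `ToOwned` in Rust 1.63).
    let mut target = String::from("previous contents");
    s.clone_into(&mut target);
    assert_eq!(target, "hello");
    println!("{}", target);
}
```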
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
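A self-contained illustration of the fallible conversion traits (standard-library types only):

```rust
use std::convert::TryFrom;

// The blanket impls make every infallible conversion (`From`/`Into`)
// also available through the fallible API, with `Error = Infallible`;
// genuinely fallible conversions, like narrowing integers, return Err.
fn main() {
    assert!(u8::try_from(300i32).is_err()); // 300 does not fit in a u8
    assert_eq!(u8::try_from(42i32).unwrap(), 42);

    // Infallible case via the blanket impl: i32 -> i64 always succeeds.
    let wide: i64 = i64::try_from(7i32).unwrap();
    assert_eq!(wide, 7);
    println!("ok");
}
```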
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Trait rusoto_schemas::Schemas
===
```
pub trait Schemas {
    fn create_discoverer<'life0, 'async_trait>(
        &'life0 self,
        input: CreateDiscovererRequest
    ) -> Pin<Box<dyn Future<Output = Result<CreateDiscovererResponse, RusotoError<CreateDiscovererError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn create_registry<'life0, 'async_trait>(
        &'life0 self,
        input: CreateRegistryRequest
    ) -> Pin<Box<dyn Future<Output = Result<CreateRegistryResponse, RusotoError<CreateRegistryError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn create_schema<'life0, 'async_trait>(
        &'life0 self,
        input: CreateSchemaRequest
    ) -> Pin<Box<dyn Future<Output = Result<CreateSchemaResponse, RusotoError<CreateSchemaError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn delete_discoverer<'life0, 'async_trait>(
        &'life0 self,
        input: DeleteDiscovererRequest
    ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteDiscovererError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn delete_registry<'life0, 'async_trait>(
        &'life0 self,
        input: DeleteRegistryRequest
    ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteRegistryError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn delete_resource_policy<'life0, 'async_trait>(
        &'life0 self,
        input: DeleteResourcePolicyRequest
    ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteResourcePolicyError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn delete_schema<'life0, 'async_trait>(
        &'life0 self,
        input: DeleteSchemaRequest
    ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteSchemaError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn delete_schema_version<'life0, 'async_trait>(
        &'life0 self,
        input: DeleteSchemaVersionRequest
    ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteSchemaVersionError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn describe_code_binding<'life0, 'async_trait>(
        &'life0 self,
        input: DescribeCodeBindingRequest
    ) -> Pin<Box<dyn Future<Output = Result<DescribeCodeBindingResponse, RusotoError<DescribeCodeBindingError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn describe_discoverer<'life0, 'async_trait>(
        &'life0 self,
        input: DescribeDiscovererRequest
    ) -> Pin<Box<dyn Future<Output = Result<DescribeDiscovererResponse, RusotoError<DescribeDiscovererError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn describe_registry<'life0, 'async_trait>(
        &'life0 self,
        input: DescribeRegistryRequest
    ) -> Pin<Box<dyn Future<Output = Result<DescribeRegistryResponse, RusotoError<DescribeRegistryError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn describe_schema<'life0, 'async_trait>(
        &'life0 self,
        input: DescribeSchemaRequest
    ) -> Pin<Box<dyn Future<Output = Result<DescribeSchemaResponse, RusotoError<DescribeSchemaError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn export_schema<'life0, 'async_trait>(
        &'life0 self,
        input: ExportSchemaRequest
    ) -> Pin<Box<dyn Future<Output = Result<ExportSchemaResponse, RusotoError<ExportSchemaError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn get_code_binding_source<'life0, 'async_trait>(
        &'life0 self,
        input: GetCodeBindingSourceRequest
    ) -> Pin<Box<dyn Future<Output = Result<GetCodeBindingSourceResponse, RusotoError<GetCodeBindingSourceError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn get_discovered_schema<'life0, 'async_trait>(
        &'life0 self,
        input: GetDiscoveredSchemaRequest
    ) -> Pin<Box<dyn Future<Output = Result<GetDiscoveredSchemaResponse, RusotoError<GetDiscoveredSchemaError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn get_resource_policy<'life0, 'async_trait>(
        &'life0 self,
        input: GetResourcePolicyRequest
    ) -> Pin<Box<dyn Future<Output = Result<GetResourcePolicyResponse, RusotoError<GetResourcePolicyError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn list_discoverers<'life0, 'async_trait>(
        &'life0 self,
        input: ListDiscoverersRequest
    ) -> Pin<Box<dyn Future<Output = Result<ListDiscoverersResponse, RusotoError<ListDiscoverersError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn list_registries<'life0, 'async_trait>(
        &'life0 self,
        input: ListRegistriesRequest
    ) -> Pin<Box<dyn Future<Output = Result<ListRegistriesResponse, RusotoError<ListRegistriesError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn list_schema_versions<'life0, 'async_trait>(
        &'life0 self,
        input: ListSchemaVersionsRequest
    ) -> Pin<Box<dyn Future<Output = Result<ListSchemaVersionsResponse, RusotoError<ListSchemaVersionsError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn list_schemas<'life0, 'async_trait>(
        &'life0 self,
        input: ListSchemasRequest
    ) -> Pin<Box<dyn Future<Output = Result<ListSchemasResponse, RusotoError<ListSchemasError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn list_tags_for_resource<'life0, 'async_trait>(
        &'life0 self,
        input: ListTagsForResourceRequest
    ) -> Pin<Box<dyn Future<Output = Result<ListTagsForResourceResponse, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn put_code_binding<'life0, 'async_trait>(
        &'life0 self,
        input: PutCodeBindingRequest
    ) -> Pin<Box<dyn Future<Output = Result<PutCodeBindingResponse, RusotoError<PutCodeBindingError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn put_resource_policy<'life0, 'async_trait>(
        &'life0 self,
        input: PutResourcePolicyRequest
    ) -> Pin<Box<dyn Future<Output = Result<PutResourcePolicyResponse, RusotoError<PutResourcePolicyError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn search_schemas<'life0, 'async_trait>(
        &'life0 self,
        input: SearchSchemasRequest
    ) -> Pin<Box<dyn Future<Output = Result<SearchSchemasResponse, RusotoError<SearchSchemasError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn start_discoverer<'life0, 'async_trait>(
        &'life0 self,
        input: StartDiscovererRequest
    ) -> Pin<Box<dyn Future<Output = Result<StartDiscovererResponse, RusotoError<StartDiscovererError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn stop_discoverer<'life0, 'async_trait>(
        &'life0 self,
        input: StopDiscovererRequest
    ) -> Pin<Box<dyn Future<Output = Result<StopDiscovererResponse, RusotoError<StopDiscovererError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn tag_resource<'life0, 'async_trait>(
        &'life0 self,
        input: TagResourceRequest
    ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<TagResourceError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn untag_resource<'life0, 'async_trait>(
        &'life0 self,
        input: UntagResourceRequest
    ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<UntagResourceError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn update_discoverer<'life0, 'async_trait>(
        &'life0 self,
        input: UpdateDiscovererRequest
    ) -> Pin<Box<dyn Future<Output = Result<UpdateDiscovererResponse, RusotoError<UpdateDiscovererError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn update_registry<'life0, 'async_trait>(
        &'life0 self,
        input: UpdateRegistryRequest
    ) -> Pin<Box<dyn Future<Output = Result<UpdateRegistryResponse, RusotoError<UpdateRegistryError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
    fn update_schema<'life0, 'async_trait>(
        &'life0 self,
        input: UpdateSchemaRequest
    ) -> Pin<Box<dyn Future<Output = Result<UpdateSchemaResponse, RusotoError<UpdateSchemaError>>> + Send + 'async_trait>> where
        'life0: 'async_trait,
        Self: 'async_trait;
}
```
Trait representing the capabilities of the Schemas API. Schemas clients implement this trait.
Required Methods
---
source#### fn create_discoverer<'life0, 'async_trait>( &'life0 self, input: CreateDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<CreateDiscovererResponse, RusotoError<CreateDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a discoverer.
source#### fn create_registry<'life0, 'async_trait>( &'life0 self, input: CreateRegistryRequest) -> Pin<Box<dyn Future<Output = Result<CreateRegistryResponse, RusotoError<CreateRegistryError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a registry.
source#### fn create_schema<'life0, 'async_trait>( &'life0 self, input: CreateSchemaRequest) -> Pin<Box<dyn Future<Output = Result<CreateSchemaResponse, RusotoError<CreateSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a schema definition.
Inactive schemas will be deleted after two years.
source#### fn delete_discoverer<'life0, 'async_trait>( &'life0 self, input: DeleteDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a discoverer.
source#### fn delete_registry<'life0, 'async_trait>( &'life0 self, input: DeleteRegistryRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteRegistryError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a registry.
source#### fn delete_resource_policy<'life0, 'async_trait>( &'life0 self, input: DeleteResourcePolicyRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteResourcePolicyError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Delete the resource-based policy attached to the specified registry.
source#### fn delete_schema<'life0, 'async_trait>( &'life0 self, input: DeleteSchemaRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Delete a schema definition.
source#### fn delete_schema_version<'life0, 'async_trait>( &'life0 self, input: DeleteSchemaVersionRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteSchemaVersionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Delete the schema version definition
source#### fn describe_code_binding<'life0, 'async_trait>( &'life0 self, input: DescribeCodeBindingRequest) -> Pin<Box<dyn Future<Output = Result<DescribeCodeBindingResponse, RusotoError<DescribeCodeBindingError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describe the code binding URI.
source#### fn describe_discoverer<'life0, 'async_trait>( &'life0 self, input: DescribeDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<DescribeDiscovererResponse, RusotoError<DescribeDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes the discoverer.
source#### fn describe_registry<'life0, 'async_trait>( &'life0 self, input: DescribeRegistryRequest) -> Pin<Box<dyn Future<Output = Result<DescribeRegistryResponse, RusotoError<DescribeRegistryError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Describes the registry.
source#### fn describe_schema<'life0, 'async_trait>( &'life0 self, input: DescribeSchemaRequest) -> Pin<Box<dyn Future<Output = Result<DescribeSchemaResponse, RusotoError<DescribeSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Retrieve the schema definition.
source#### fn export_schema<'life0, 'async_trait>( &'life0 self, input: ExportSchemaRequest) -> Pin<Box<dyn Future<Output = Result<ExportSchemaResponse, RusotoError<ExportSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
source#### fn get_code_binding_source<'life0, 'async_trait>( &'life0 self, input: GetCodeBindingSourceRequest) -> Pin<Box<dyn Future<Output = Result<GetCodeBindingSourceResponse, RusotoError<GetCodeBindingSourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Get the code binding source URI.
source#### fn get_discovered_schema<'life0, 'async_trait>( &'life0 self, input: GetDiscoveredSchemaRequest) -> Pin<Box<dyn Future<Output = Result<GetDiscoveredSchemaResponse, RusotoError<GetDiscoveredSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Get the discovered schema that was generated based on sampled events.
source#### fn get_resource_policy<'life0, 'async_trait>( &'life0 self, input: GetResourcePolicyRequest) -> Pin<Box<dyn Future<Output = Result<GetResourcePolicyResponse, RusotoError<GetResourcePolicyError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Retrieves the resource-based policy attached to a given registry.
source#### fn list_discoverers<'life0, 'async_trait>( &'life0 self, input: ListDiscoverersRequest) -> Pin<Box<dyn Future<Output = Result<ListDiscoverersResponse, RusotoError<ListDiscoverersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
List the discoverers.
source#### fn list_registries<'life0, 'async_trait>( &'life0 self, input: ListRegistriesRequest) -> Pin<Box<dyn Future<Output = Result<ListRegistriesResponse, RusotoError<ListRegistriesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
List the registries.
source#### fn list_schema_versions<'life0, 'async_trait>( &'life0 self, input: ListSchemaVersionsRequest) -> Pin<Box<dyn Future<Output = Result<ListSchemaVersionsResponse, RusotoError<ListSchemaVersionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Provides a list of the schema versions and related information.
source#### fn list_schemas<'life0, 'async_trait>( &'life0 self, input: ListSchemasRequest) -> Pin<Box<dyn Future<Output = Result<ListSchemasResponse, RusotoError<ListSchemasError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
List the schemas.
source#### fn list_tags_for_resource<'life0, 'async_trait>( &'life0 self, input: ListTagsForResourceRequest) -> Pin<Box<dyn Future<Output = Result<ListTagsForResourceResponse, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Get tags for resource.
source#### fn put_code_binding<'life0, 'async_trait>( &'life0 self, input: PutCodeBindingRequest) -> Pin<Box<dyn Future<Output = Result<PutCodeBindingResponse, RusotoError<PutCodeBindingError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Put code binding URI
source#### fn put_resource_policy<'life0, 'async_trait>( &'life0 self, input: PutResourcePolicyRequest) -> Pin<Box<dyn Future<Output = Result<PutResourcePolicyResponse, RusotoError<PutResourcePolicyError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Attaches a resource-based policy to the given registry.
source#### fn search_schemas<'life0, 'async_trait>( &'life0 self, input: SearchSchemasRequest) -> Pin<Box<dyn Future<Output = Result<SearchSchemasResponse, RusotoError<SearchSchemasError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Search the schemas
source#### fn start_discoverer<'life0, 'async_trait>( &'life0 self, input: StartDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<StartDiscovererResponse, RusotoError<StartDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Starts the discoverer
source#### fn stop_discoverer<'life0, 'async_trait>( &'life0 self, input: StopDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<StopDiscovererResponse, RusotoError<StopDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Stops the discoverer
source#### fn tag_resource<'life0, 'async_trait>( &'life0 self, input: TagResourceRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<TagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Add tags to a resource.
source#### fn untag_resource<'life0, 'async_trait>( &'life0 self, input: UntagResourceRequest) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<UntagResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Removes tags from a resource.
source#### fn update_discoverer<'life0, 'async_trait>( &'life0 self, input: UpdateDiscovererRequest) -> Pin<Box<dyn Future<Output = Result<UpdateDiscovererResponse, RusotoError<UpdateDiscovererError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates the discoverer
source#### fn update_registry<'life0, 'async_trait>( &'life0 self, input: UpdateRegistryRequest) -> Pin<Box<dyn Future<Output = Result<UpdateRegistryResponse, RusotoError<UpdateRegistryError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates a registry.
source#### fn update_schema<'life0, 'async_trait>( &'life0 self, input: UpdateSchemaRequest) -> Pin<Box<dyn Future<Output = Result<UpdateSchemaResponse, RusotoError<UpdateSchemaError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Updates the schema definition
Inactive schemas will be deleted after two years.
Implementors
---
source### impl Schemas for SchemasClient
Struct rusoto_schemas::CreateDiscovererRequest
===
```
pub struct CreateDiscovererRequest {
pub description: Option<String>,
pub source_arn: String,
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`description: Option<String>`A description for the discoverer.
`source_arn: String`The ARN of the event bus.
`tags: Option<HashMap<String, String>>`Tags associated with the resource.
Trait Implementations
---
source### impl Clone for CreateDiscovererRequest
source#### fn clone(&self) -> CreateDiscovererRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CreateDiscovererRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CreateDiscovererRequest
source#### fn default() -> CreateDiscovererRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<CreateDiscovererRequest> for CreateDiscovererRequest
source#### fn eq(&self, other: &CreateDiscovererRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateDiscovererRequest) -> bool
This method tests for `!=`.
source### impl Serialize for CreateDiscovererRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for CreateDiscovererRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateDiscovererRequest
### impl Send for CreateDiscovererRequest
### impl Sync for CreateDiscovererRequest
### impl Unpin for CreateDiscovererRequest
### impl UnwindSafe for CreateDiscovererRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_schemas::CreateDiscovererResponse
===
```
pub struct CreateDiscovererResponse {
pub description: Option<String>,
pub discoverer_arn: Option<String>,
pub discoverer_id: Option<String>,
pub source_arn: Option<String>,
pub state: Option<String>,
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`description: Option<String>`The description of the discoverer.
`discoverer_arn: Option<String>`The ARN of the discoverer.
`discoverer_id: Option<String>`The ID of the discoverer.
`source_arn: Option<String>`The ARN of the event bus.
`state: Option<String>`The state of the discoverer.
`tags: Option<HashMap<String, String>>`Tags associated with the resource.
Trait Implementations
---
source### impl Clone for CreateDiscovererResponse
source#### fn clone(&self) -> CreateDiscovererResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CreateDiscovererResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CreateDiscovererResponse
source#### fn default() -> CreateDiscovererResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for CreateDiscovererResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<CreateDiscovererResponse> for CreateDiscovererResponse
source#### fn eq(&self, other: &CreateDiscovererResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateDiscovererResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateDiscovererResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateDiscovererResponse
### impl Send for CreateDiscovererResponse
### impl Sync for CreateDiscovererResponse
### impl Unpin for CreateDiscovererResponse
### impl UnwindSafe for CreateDiscovererResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::CreateRegistryRequest
===
```
pub struct CreateRegistryRequest {
pub description: Option<String>,
pub registry_name: String,
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`description: Option<String>`A description of the registry to be created.
`registry_name: String`The name of the registry.
`tags: Option<HashMap<String, String>>`Tags to associate with the registry.
Trait Implementations
---
source### impl Clone for CreateRegistryRequest
source#### fn clone(&self) -> CreateRegistryRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CreateRegistryRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CreateRegistryRequest
source#### fn default() -> CreateRegistryRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<CreateRegistryRequest> for CreateRegistryRequest
source#### fn eq(&self, other: &CreateRegistryRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateRegistryRequest) -> bool
This method tests for `!=`.
source### impl Serialize for CreateRegistryRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for CreateRegistryRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateRegistryRequest
### impl Send for CreateRegistryRequest
### impl Sync for CreateRegistryRequest
### impl Unpin for CreateRegistryRequest
### impl UnwindSafe for CreateRegistryRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_schemas::CreateRegistryResponse
===
```
pub struct CreateRegistryResponse {
pub description: Option<String>,
pub registry_arn: Option<String>,
pub registry_name: Option<String>,
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`description: Option<String>`The description of the registry.
`registry_arn: Option<String>`The ARN of the registry.
`registry_name: Option<String>`The name of the registry.
`tags: Option<HashMap<String, String>>`Tags associated with the registry.
Trait Implementations
---
source### impl Clone for CreateRegistryResponse
source#### fn clone(&self) -> CreateRegistryResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CreateRegistryResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CreateRegistryResponse
source#### fn default() -> CreateRegistryResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for CreateRegistryResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<CreateRegistryResponse> for CreateRegistryResponse
source#### fn eq(&self, other: &CreateRegistryResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateRegistryResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateRegistryResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateRegistryResponse
### impl Send for CreateRegistryResponse
### impl Sync for CreateRegistryResponse
### impl Unpin for CreateRegistryResponse
### impl UnwindSafe for CreateRegistryResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::CreateSchemaRequest
===
```
pub struct CreateSchemaRequest {
pub content: String,
pub description: Option<String>,
pub registry_name: String,
pub schema_name: String,
pub tags: Option<HashMap<String, String>>,
pub type_: String,
}
```
Fields
---
`content: String`The source of the schema definition.
`description: Option<String>`A description of the schema.
`registry_name: String`The name of the registry.
`schema_name: String`The name of the schema.
`tags: Option<HashMap<String, String>>`Tags associated with the schema.
`type_: String`The type of schema.
Trait Implementations
---
source### impl Clone for CreateSchemaRequest
source#### fn clone(&self) -> CreateSchemaRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CreateSchemaRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CreateSchemaRequest
source#### fn default() -> CreateSchemaRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<CreateSchemaRequest> for CreateSchemaRequest
source#### fn eq(&self, other: &CreateSchemaRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateSchemaRequest) -> bool
This method tests for `!=`.
source### impl Serialize for CreateSchemaRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for CreateSchemaRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateSchemaRequest
### impl Send for CreateSchemaRequest
### impl Sync for CreateSchemaRequest
### impl Unpin for CreateSchemaRequest
### impl UnwindSafe for CreateSchemaRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_schemas::CreateSchemaResponse
===
```
pub struct CreateSchemaResponse {
pub description: Option<String>,
pub last_modified: Option<f64>,
pub schema_arn: Option<String>,
pub schema_name: Option<String>,
pub schema_version: Option<String>,
pub tags: Option<HashMap<String, String>>,
pub type_: Option<String>,
pub version_created_date: Option<f64>,
}
```
Fields
---
`description: Option<String>`The description of the schema.
`last_modified: Option<f64>`The date and time that the schema was modified.
`schema_arn: Option<String>`The ARN of the schema.
`schema_name: Option<String>`The name of the schema.
`schema_version: Option<String>`The version number of the schema.
`tags: Option<HashMap<String, String>>`
`type_: Option<String>`The type of the schema.
`version_created_date: Option<f64>`The date the schema version was created.
Trait Implementations
---
source### impl Clone for CreateSchemaResponse
source#### fn clone(&self) -> CreateSchemaResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CreateSchemaResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CreateSchemaResponse
source#### fn default() -> CreateSchemaResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for CreateSchemaResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<CreateSchemaResponse> for CreateSchemaResponse
source#### fn eq(&self, other: &CreateSchemaResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateSchemaResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateSchemaResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateSchemaResponse
### impl Send for CreateSchemaResponse
### impl Sync for CreateSchemaResponse
### impl Unpin for CreateSchemaResponse
### impl UnwindSafe for CreateSchemaResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::DeleteDiscovererRequest
===
```
pub struct DeleteDiscovererRequest {
pub discoverer_id: String,
}
```
Fields
---
`discoverer_id: String`The ID of the discoverer.
Trait Implementations
---
source### impl Clone for DeleteDiscovererRequest
source#### fn clone(&self) -> DeleteDiscovererRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteDiscovererRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteDiscovererRequest
source#### fn default() -> DeleteDiscovererRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteDiscovererRequest> for DeleteDiscovererRequest
source#### fn eq(&self, other: &DeleteDiscovererRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDiscovererRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DeleteDiscovererRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DeleteDiscovererRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDiscovererRequest
### impl Send for DeleteDiscovererRequest
### impl Sync for DeleteDiscovererRequest
### impl Unpin for DeleteDiscovererRequest
### impl UnwindSafe for DeleteDiscovererRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_schemas::DeleteRegistryRequest
===
```
pub struct DeleteRegistryRequest {
pub registry_name: String,
}
```
Fields
---
`registry_name: String`The name of the registry.
Trait Implementations
---
source### impl Clone for DeleteRegistryRequest
source#### fn clone(&self) -> DeleteRegistryRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteRegistryRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteRegistryRequest
source#### fn default() -> DeleteRegistryRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteRegistryRequest> for DeleteRegistryRequest
source#### fn eq(&self, other: &DeleteRegistryRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteRegistryRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DeleteRegistryRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DeleteRegistryRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteRegistryRequest
### impl Send for DeleteRegistryRequest
### impl Sync for DeleteRegistryRequest
### impl Unpin for DeleteRegistryRequest
### impl UnwindSafe for DeleteRegistryRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_schemas::DeleteResourcePolicyRequest
===
```
pub struct DeleteResourcePolicyRequest {
pub registry_name: Option<String>,
}
```
Fields
---
`registry_name: Option<String>`The name of the registry.
Trait Implementations
---
source### impl Clone for DeleteResourcePolicyRequest
source#### fn clone(&self) -> DeleteResourcePolicyRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteResourcePolicyRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteResourcePolicyRequest
source#### fn default() -> DeleteResourcePolicyRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteResourcePolicyRequest> for DeleteResourcePolicyRequest
source#### fn eq(&self, other: &DeleteResourcePolicyRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteResourcePolicyRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DeleteResourcePolicyRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DeleteResourcePolicyRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteResourcePolicyRequest
### impl Send for DeleteResourcePolicyRequest
### impl Sync for DeleteResourcePolicyRequest
### impl Unpin for DeleteResourcePolicyRequest
### impl UnwindSafe for DeleteResourcePolicyRequest
Struct rusoto_schemas::DeleteSchemaRequest
===
```
pub struct DeleteSchemaRequest {
pub registry_name: String,
pub schema_name: String,
}
```
Fields
---
`registry_name: String`The name of the registry.
`schema_name: String`The name of the schema.
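With more than one required field, functional-update syntax against the derived `Default` keeps construction terse. A sketch using a local mirror of the generated struct (`"orders"` is an illustrative schema name):

```rust
// Local mirror of the generated rusoto_schemas::DeleteSchemaRequest,
// reproduced so this sketch is self-contained.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct DeleteSchemaRequest {
    pub registry_name: String,
    pub schema_name: String,
}

fn main() {
    // Struct-update syntax fills the remaining fields from Default.
    let req = DeleteSchemaRequest {
        schema_name: "orders".to_string(), // illustrative name
        ..Default::default()
    };
    assert_eq!(req.schema_name, "orders");
    assert_eq!(req.registry_name, ""); // defaulted, not yet set
}
```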
Trait Implementations
---
source### impl Clone for DeleteSchemaRequest
source#### fn clone(&self) -> DeleteSchemaRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteSchemaRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteSchemaRequest
source#### fn default() -> DeleteSchemaRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteSchemaRequest> for DeleteSchemaRequest
source#### fn eq(&self, other: &DeleteSchemaRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteSchemaRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DeleteSchemaRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DeleteSchemaRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteSchemaRequest
### impl Send for DeleteSchemaRequest
### impl Sync for DeleteSchemaRequest
### impl Unpin for DeleteSchemaRequest
### impl UnwindSafe for DeleteSchemaRequest
Struct rusoto_schemas::DeleteSchemaVersionRequest
===
```
pub struct DeleteSchemaVersionRequest {
pub registry_name: String,
pub schema_name: String,
pub schema_version: String,
}
```
Fields
---
`registry_name: String`The name of the registry.
`schema_name: String`The name of the schema.
`schema_version: String`The version number of the schema.
Trait Implementations
---
source### impl Clone for DeleteSchemaVersionRequest
source#### fn clone(&self) -> DeleteSchemaVersionRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteSchemaVersionRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteSchemaVersionRequest
source#### fn default() -> DeleteSchemaVersionRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteSchemaVersionRequest> for DeleteSchemaVersionRequest
source#### fn eq(&self, other: &DeleteSchemaVersionRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteSchemaVersionRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DeleteSchemaVersionRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DeleteSchemaVersionRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteSchemaVersionRequest
### impl Send for DeleteSchemaVersionRequest
### impl Sync for DeleteSchemaVersionRequest
### impl Unpin for DeleteSchemaVersionRequest
### impl UnwindSafe for DeleteSchemaVersionRequest
Struct rusoto_schemas::DescribeCodeBindingRequest
===
```
pub struct DescribeCodeBindingRequest {
pub language: String,
pub registry_name: String,
pub schema_name: String,
pub schema_version: Option<String>,
}
```
Fields
---
`language: String`The language of the code binding.
`registry_name: String`The name of the registry.
`schema_name: String`The name of the schema.
`schema_version: Option<String>`Specifying this limits the results to only this schema version.
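The optional `schema_version` field illustrates the `Option<String>` pattern: leave it as `None` to target the latest version, or pin a specific one with `Some`. A self-contained sketch using a local mirror of the struct (the `"Java8"` language value and other literals are illustrative):

```rust
// Local mirror of rusoto_schemas::DescribeCodeBindingRequest,
// reproduced so this sketch is self-contained.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct DescribeCodeBindingRequest {
    pub language: String,
    pub registry_name: String,
    pub schema_name: String,
    pub schema_version: Option<String>,
}

fn main() {
    let latest = DescribeCodeBindingRequest {
        language: "Java8".to_string(),          // illustrative value
        registry_name: "my-registry".to_string(),
        schema_name: "orders".to_string(),
        schema_version: None, // None limits nothing: latest version
    };
    // Derive a pinned variant from the first request via struct update.
    let pinned = DescribeCodeBindingRequest {
        schema_version: Some("3".to_string()),
        ..latest.clone()
    };
    assert!(latest.schema_version.is_none());
    assert!(pinned.schema_version.is_some());
    assert_eq!(pinned.schema_name, latest.schema_name);
}
```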
Trait Implementations
---
source### impl Clone for DescribeCodeBindingRequest
source#### fn clone(&self) -> DescribeCodeBindingRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DescribeCodeBindingRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DescribeCodeBindingRequest
source#### fn default() -> DescribeCodeBindingRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DescribeCodeBindingRequest> for DescribeCodeBindingRequest
source#### fn eq(&self, other: &DescribeCodeBindingRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeCodeBindingRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DescribeCodeBindingRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DescribeCodeBindingRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeCodeBindingRequest
### impl Send for DescribeCodeBindingRequest
### impl Sync for DescribeCodeBindingRequest
### impl Unpin for DescribeCodeBindingRequest
### impl UnwindSafe for DescribeCodeBindingRequest
Struct rusoto_schemas::DescribeCodeBindingResponse
===
```
pub struct DescribeCodeBindingResponse {
pub creation_date: Option<f64>,
pub last_modified: Option<f64>,
pub schema_version: Option<String>,
pub status: Option<String>,
}
```
Fields
---
`creation_date: Option<f64>`The time and date that the code binding was created.
`last_modified: Option<f64>`The date and time that code bindings were modified.
`schema_version: Option<String>`The version number of the schema.
`status: Option<String>`The current status of code binding generation.
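The `Option<f64>` timestamps hold fractional epoch seconds (as rusoto typically models AWS timestamps; treat that encoding as an assumption here). A sketch converting one to a `SystemTime`, using a local mirror of a subset of the response fields:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Local mirror of a subset of rusoto_schemas::DescribeCodeBindingResponse,
// reproduced so this sketch is self-contained.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct DescribeCodeBindingResponse {
    pub creation_date: Option<f64>,
    pub status: Option<String>,
}

fn main() {
    let resp = DescribeCodeBindingResponse {
        creation_date: Some(1_577_836_800.0), // assumed epoch-seconds encoding
        status: Some("CREATE_COMPLETE".to_string()), // illustrative status
    };
    if let Some(secs) = resp.creation_date {
        // Rebuild a SystemTime from the fractional epoch seconds.
        let created = UNIX_EPOCH + Duration::from_secs_f64(secs);
        assert!(created > UNIX_EPOCH);
        assert!(created < SystemTime::now());
    }
}
```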
Trait Implementations
---
source### impl Clone for DescribeCodeBindingResponse
source#### fn clone(&self) -> DescribeCodeBindingResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DescribeCodeBindingResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DescribeCodeBindingResponse
source#### fn default() -> DescribeCodeBindingResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for DescribeCodeBindingResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<DescribeCodeBindingResponse> for DescribeCodeBindingResponse
source#### fn eq(&self, other: &DescribeCodeBindingResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeCodeBindingResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeCodeBindingResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeCodeBindingResponse
### impl Send for DescribeCodeBindingResponse
### impl Sync for DescribeCodeBindingResponse
### impl Unpin for DescribeCodeBindingResponse
### impl UnwindSafe for DescribeCodeBindingResponse
Blanket Implementations
---
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::DescribeDiscovererRequest
===
```
pub struct DescribeDiscovererRequest {
pub discoverer_id: String,
}
```
Fields
---
`discoverer_id: String`The ID of the discoverer.
Trait Implementations
---
source### impl Clone for DescribeDiscovererRequest
source#### fn clone(&self) -> DescribeDiscovererRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DescribeDiscovererRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DescribeDiscovererRequest
source#### fn default() -> DescribeDiscovererRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DescribeDiscovererRequest> for DescribeDiscovererRequest
source#### fn eq(&self, other: &DescribeDiscovererRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeDiscovererRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DescribeDiscovererRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DescribeDiscovererRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDiscovererRequest
### impl Send for DescribeDiscovererRequest
### impl Sync for DescribeDiscovererRequest
### impl Unpin for DescribeDiscovererRequest
### impl UnwindSafe for DescribeDiscovererRequest
Struct rusoto_schemas::DescribeDiscovererResponse
===
```
pub struct DescribeDiscovererResponse {
pub description: Option<String>,
pub discoverer_arn: Option<String>,
pub discoverer_id: Option<String>,
pub source_arn: Option<String>,
pub state: Option<String>,
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`description: Option<String>`The description of the discoverer.
`discoverer_arn: Option<String>`The ARN of the discoverer.
`discoverer_id: Option<String>`The ID of the discoverer.
`source_arn: Option<String>`The ARN of the event bus.
`state: Option<String>`The state of the discoverer.
`tags: Option<HashMap<String, String>>`Tags associated with the resource.
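Every field of the response is optional, so callers typically read it with `as_deref`/`unwrap_or`-style defaults and chain into the `tags` map with `and_then`. A sketch over a local mirror of a subset of the fields (the `"STARTED"` state and tag values are illustrative):

```rust
use std::collections::HashMap;

// Local mirror of a subset of rusoto_schemas::DescribeDiscovererResponse,
// reproduced so this sketch is self-contained.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct DescribeDiscovererResponse {
    pub description: Option<String>,
    pub discoverer_id: Option<String>,
    pub state: Option<String>,
    pub tags: Option<HashMap<String, String>>,
}

fn main() {
    let mut tags = HashMap::new();
    tags.insert("team".to_string(), "events".to_string());
    let resp = DescribeDiscovererResponse {
        discoverer_id: Some("abc123".to_string()), // illustrative ID
        state: Some("STARTED".to_string()),        // illustrative state
        tags: Some(tags),
        ..Default::default()
    };
    // Read optional fields with an explicit fallback.
    let state = resp.state.as_deref().unwrap_or("UNKNOWN");
    assert_eq!(state, "STARTED");
    // Chain through the optional tag map without unwrapping.
    let team = resp.tags.as_ref().and_then(|t| t.get("team"));
    assert_eq!(team.map(String::as_str), Some("events"));
}
```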
Trait Implementations
---
source### impl Clone for DescribeDiscovererResponse
source#### fn clone(&self) -> DescribeDiscovererResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DescribeDiscovererResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DescribeDiscovererResponse
source#### fn default() -> DescribeDiscovererResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for DescribeDiscovererResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<DescribeDiscovererResponse> for DescribeDiscovererResponse
source#### fn eq(&self, other: &DescribeDiscovererResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeDiscovererResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeDiscovererResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDiscovererResponse
### impl Send for DescribeDiscovererResponse
### impl Sync for DescribeDiscovererResponse
### impl Unpin for DescribeDiscovererResponse
### impl UnwindSafe for DescribeDiscovererResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::DescribeRegistryRequest
===
```
pub struct DescribeRegistryRequest {
pub registry_name: String,
}
```
Fields
---
`registry_name: String` The name of the registry.
Trait Implementations
---
source### impl Clone for DescribeRegistryRequest
source#### fn clone(&self) -> DescribeRegistryRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DescribeRegistryRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DescribeRegistryRequest
source#### fn default() -> DescribeRegistryRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DescribeRegistryRequest> for DescribeRegistryRequest
source#### fn eq(&self, other: &DescribeRegistryRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeRegistryRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DescribeRegistryRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DescribeRegistryRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeRegistryRequest
### impl Send for DescribeRegistryRequest
### impl Sync for DescribeRegistryRequest
### impl Unpin for DescribeRegistryRequest
### impl UnwindSafe for DescribeRegistryRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::DescribeRegistryResponse
===
```
pub struct DescribeRegistryResponse {
pub description: Option<String>,
pub registry_arn: Option<String>,
pub registry_name: Option<String>,
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`description: Option<String>` The description of the registry.
`registry_arn: Option<String>` The ARN of the registry.
`registry_name: Option<String>` The name of the registry.
`tags: Option<HashMap<String, String>>` Tags associated with the registry.
Trait Implementations
---
source### impl Clone for DescribeRegistryResponse
source#### fn clone(&self) -> DescribeRegistryResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DescribeRegistryResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DescribeRegistryResponse
source#### fn default() -> DescribeRegistryResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for DescribeRegistryResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<DescribeRegistryResponse> for DescribeRegistryResponse
source#### fn eq(&self, other: &DescribeRegistryResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeRegistryResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeRegistryResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeRegistryResponse
### impl Send for DescribeRegistryResponse
### impl Sync for DescribeRegistryResponse
### impl Unpin for DescribeRegistryResponse
### impl UnwindSafe for DescribeRegistryResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::DescribeSchemaRequest
===
```
pub struct DescribeSchemaRequest {
pub registry_name: String,
pub schema_name: String,
pub schema_version: Option<String>,
}
```
Fields
---
`registry_name: String` The name of the registry.
`schema_name: String` The name of the schema.
`schema_version: Option<String>` Specifying this limits the results to only this schema version.
Trait Implementations
---
source### impl Clone for DescribeSchemaRequest
source#### fn clone(&self) -> DescribeSchemaRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DescribeSchemaRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DescribeSchemaRequest
source#### fn default() -> DescribeSchemaRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<DescribeSchemaRequest> for DescribeSchemaRequest
source#### fn eq(&self, other: &DescribeSchemaRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeSchemaRequest) -> bool
This method tests for `!=`.
source### impl Serialize for DescribeSchemaRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for DescribeSchemaRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeSchemaRequest
### impl Send for DescribeSchemaRequest
### impl Sync for DescribeSchemaRequest
### impl Unpin for DescribeSchemaRequest
### impl UnwindSafe for DescribeSchemaRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::DescribeSchemaResponse
===
```
pub struct DescribeSchemaResponse {
pub content: Option<String>,
pub description: Option<String>,
pub last_modified: Option<f64>,
pub schema_arn: Option<String>,
pub schema_name: Option<String>,
pub schema_version: Option<String>,
pub tags: Option<HashMap<String, String>>,
pub type_: Option<String>,
pub version_created_date: Option<f64>,
}
```
Fields
---
`content: Option<String>` The source of the schema definition.
`description: Option<String>` The description of the schema.
`last_modified: Option<f64>` The date and time that the schema was modified.
`schema_arn: Option<String>` The ARN of the schema.
`schema_name: Option<String>` The name of the schema.
`schema_version: Option<String>` The version number of the schema.
`tags: Option<HashMap<String, String>>` Tags associated with the resource.
`type_: Option<String>` The type of the schema.
`version_created_date: Option<f64>` The date the schema version was created.
Trait Implementations
---
source### impl Clone for DescribeSchemaResponse
source#### fn clone(&self) -> DescribeSchemaResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DescribeSchemaResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DescribeSchemaResponse
source#### fn default() -> DescribeSchemaResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for DescribeSchemaResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<DescribeSchemaResponse> for DescribeSchemaResponse
source#### fn eq(&self, other: &DescribeSchemaResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeSchemaResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeSchemaResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeSchemaResponse
### impl Send for DescribeSchemaResponse
### impl Sync for DescribeSchemaResponse
### impl Unpin for DescribeSchemaResponse
### impl UnwindSafe for DescribeSchemaResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::DiscovererSummary
===
```
pub struct DiscovererSummary {
pub discoverer_arn: Option<String>,
pub discoverer_id: Option<String>,
pub source_arn: Option<String>,
pub state: Option<String>,
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`discoverer_arn: Option<String>` The ARN of the discoverer.
`discoverer_id: Option<String>` The ID of the discoverer.
`source_arn: Option<String>` The ARN of the event bus.
`state: Option<String>` The state of the discoverer.
`tags: Option<HashMap<String, String>>` Tags associated with the resource.
Trait Implementations
---
source### impl Clone for DiscovererSummary
source#### fn clone(&self) -> DiscovererSummary
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DiscovererSummary
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DiscovererSummary
source#### fn default() -> DiscovererSummary
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for DiscovererSummary
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<DiscovererSummary> for DiscovererSummary
source#### fn eq(&self, other: &DiscovererSummary) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DiscovererSummary) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DiscovererSummary
Auto Trait Implementations
---
### impl RefUnwindSafe for DiscovererSummary
### impl Send for DiscovererSummary
### impl Sync for DiscovererSummary
### impl Unpin for DiscovererSummary
### impl UnwindSafe for DiscovererSummary
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::ExportSchemaRequest
===
```
pub struct ExportSchemaRequest {
pub registry_name: String,
pub schema_name: String,
pub schema_version: Option<String>,
pub type_: String,
}
```
Fields
---
`registry_name: String` The name of the registry.
`schema_name: String` The name of the schema.
`schema_version: Option<String>` Specifying this limits the results to only this schema version.
`type_: String`
Trait Implementations
---
source### impl Clone for ExportSchemaRequest
source#### fn clone(&self) -> ExportSchemaRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ExportSchemaRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ExportSchemaRequest
source#### fn default() -> ExportSchemaRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<ExportSchemaRequest> for ExportSchemaRequest
source#### fn eq(&self, other: &ExportSchemaRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ExportSchemaRequest) -> bool
This method tests for `!=`.
source### impl Serialize for ExportSchemaRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ExportSchemaRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ExportSchemaRequest
### impl Send for ExportSchemaRequest
### impl Sync for ExportSchemaRequest
### impl Unpin for ExportSchemaRequest
### impl UnwindSafe for ExportSchemaRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::ExportSchemaResponse
===
```
pub struct ExportSchemaResponse {
pub content: Option<String>,
pub schema_arn: Option<String>,
pub schema_name: Option<String>,
pub schema_version: Option<String>,
pub type_: Option<String>,
}
```
Fields
---
`content: Option<String>`
`schema_arn: Option<String>`
`schema_name: Option<String>`
`schema_version: Option<String>`
`type_: Option<String>`
Trait Implementations
---
source### impl Clone for ExportSchemaResponse
source#### fn clone(&self) -> ExportSchemaResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ExportSchemaResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ExportSchemaResponse
source#### fn default() -> ExportSchemaResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ExportSchemaResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ExportSchemaResponse> for ExportSchemaResponse
source#### fn eq(&self, other: &ExportSchemaResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ExportSchemaResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ExportSchemaResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ExportSchemaResponse
### impl Send for ExportSchemaResponse
### impl Sync for ExportSchemaResponse
### impl Unpin for ExportSchemaResponse
### impl UnwindSafe for ExportSchemaResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::GetCodeBindingSourceRequest
===
```
pub struct GetCodeBindingSourceRequest {
pub language: String,
pub registry_name: String,
pub schema_name: String,
pub schema_version: Option<String>,
}
```
Fields
---
`language: String`The language of the code binding.
`registry_name: String`The name of the registry.
`schema_name: String`The name of the schema.
`schema_version: Option<String>`Specifying this limits the results to only this schema version.
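Because the optional version defaults to `None`, a request is typically built with struct-update syntax from `Default`. A minimal sketch, using a local mirror of the generated struct for illustration (the real type lives in the crate and derives the same traits; `"Java8"` and the registry/schema names are hypothetical values):

```rust
// Local mirror of rusoto_schemas::GetCodeBindingSourceRequest for illustration;
// the generated type derives Clone, Debug, Default, and PartialEq as documented above.
#[derive(Clone, Debug, Default, PartialEq)]
struct GetCodeBindingSourceRequest {
    language: String,
    registry_name: String,
    schema_name: String,
    schema_version: Option<String>,
}

fn main() {
    // Required fields are set explicitly; leaving schema_version as None
    // omits the version restriction described in the field docs.
    let req = GetCodeBindingSourceRequest {
        language: "Java8".to_string(),                 // hypothetical binding language
        registry_name: "discovered-schemas".to_string(), // hypothetical registry
        schema_name: "my.schema".to_string(),            // hypothetical schema
        ..Default::default()
    };
    assert_eq!(req.schema_version, None);
    println!("{:?}", req);
}
```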
Trait Implementations
---
source### impl Clone for GetCodeBindingSourceRequest
source#### fn clone(&self) -> GetCodeBindingSourceRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetCodeBindingSourceRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetCodeBindingSourceRequest
source#### fn default() -> GetCodeBindingSourceRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<GetCodeBindingSourceRequest> for GetCodeBindingSourceRequest
source#### fn eq(&self, other: &GetCodeBindingSourceRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetCodeBindingSourceRequest) -> bool
This method tests for `!=`.
source### impl Serialize for GetCodeBindingSourceRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for GetCodeBindingSourceRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for GetCodeBindingSourceRequest
### impl Send for GetCodeBindingSourceRequest
### impl Sync for GetCodeBindingSourceRequest
### impl Unpin for GetCodeBindingSourceRequest
### impl UnwindSafe for GetCodeBindingSourceRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::GetCodeBindingSourceResponse
===
```
pub struct GetCodeBindingSourceResponse {
pub body: Option<Bytes>,
}
```
Fields
---
`body: Option<Bytes>`
Trait Implementations
---
source### impl Clone for GetCodeBindingSourceResponse
source#### fn clone(&self) -> GetCodeBindingSourceResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetCodeBindingSourceResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetCodeBindingSourceResponse
source#### fn default() -> GetCodeBindingSourceResponse
Returns the “default value” for a type. Read more
source### impl PartialEq<GetCodeBindingSourceResponse> for GetCodeBindingSourceResponse
source#### fn eq(&self, other: &GetCodeBindingSourceResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetCodeBindingSourceResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetCodeBindingSourceResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for GetCodeBindingSourceResponse
### impl Send for GetCodeBindingSourceResponse
### impl Sync for GetCodeBindingSourceResponse
### impl Unpin for GetCodeBindingSourceResponse
### impl UnwindSafe for GetCodeBindingSourceResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::GetDiscoveredSchemaRequest
===
```
pub struct GetDiscoveredSchemaRequest {
pub events: Vec<String>,
pub type_: String,
}
```
Fields
---
`events: Vec<String>`An array of strings where each string is a JSON event. These are the events that were used to generate the schema. The array includes a single type of event and has a maximum size of 10 events.
`type_: String`The type of event.
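Both fields are required: `events` carries up to 10 JSON events of a single type, and `type_` names the schema format to generate. A minimal sketch, using a local mirror of the generated struct for illustration (the event payloads and the `"OpenApi3"` type value are hypothetical):

```rust
// Local mirror of rusoto_schemas::GetDiscoveredSchemaRequest for illustration;
// the real type lives in the crate and derives the same traits.
#[derive(Clone, Debug, Default, PartialEq)]
struct GetDiscoveredSchemaRequest {
    events: Vec<String>,
    type_: String,
}

fn main() {
    // Each entry is a complete JSON event; all entries share one detail-type,
    // and the array may hold at most 10 events, per the field docs above.
    let events = vec![
        r#"{"version":"0","detail-type":"OrderPlaced","detail":{"orderId":"123"}}"#.to_string(),
        r#"{"version":"0","detail-type":"OrderPlaced","detail":{"orderId":"456"}}"#.to_string(),
    ];
    assert!(events.len() <= 10);
    let req = GetDiscoveredSchemaRequest {
        events,
        type_: "OpenApi3".to_string(), // hypothetical schema type value
    };
    println!("{:?}", req);
}
```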
Trait Implementations
---
source### impl Clone for GetDiscoveredSchemaRequest
source#### fn clone(&self) -> GetDiscoveredSchemaRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetDiscoveredSchemaRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetDiscoveredSchemaRequest
source#### fn default() -> GetDiscoveredSchemaRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<GetDiscoveredSchemaRequest> for GetDiscoveredSchemaRequest
source#### fn eq(&self, other: &GetDiscoveredSchemaRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetDiscoveredSchemaRequest) -> bool
This method tests for `!=`.
source### impl Serialize for GetDiscoveredSchemaRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for GetDiscoveredSchemaRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for GetDiscoveredSchemaRequest
### impl Send for GetDiscoveredSchemaRequest
### impl Sync for GetDiscoveredSchemaRequest
### impl Unpin for GetDiscoveredSchemaRequest
### impl UnwindSafe for GetDiscoveredSchemaRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::GetDiscoveredSchemaResponse
===
```
pub struct GetDiscoveredSchemaResponse {
pub content: Option<String>,
}
```
Fields
---
`content: Option<String>`The source of the schema definition.
Trait Implementations
---
source### impl Clone for GetDiscoveredSchemaResponse
source#### fn clone(&self) -> GetDiscoveredSchemaResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetDiscoveredSchemaResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetDiscoveredSchemaResponse
source#### fn default() -> GetDiscoveredSchemaResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for GetDiscoveredSchemaResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<GetDiscoveredSchemaResponse> for GetDiscoveredSchemaResponse
source#### fn eq(&self, other: &GetDiscoveredSchemaResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetDiscoveredSchemaResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetDiscoveredSchemaResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for GetDiscoveredSchemaResponse
### impl Send for GetDiscoveredSchemaResponse
### impl Sync for GetDiscoveredSchemaResponse
### impl Unpin for GetDiscoveredSchemaResponse
### impl UnwindSafe for GetDiscoveredSchemaResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::GetResourcePolicyRequest
===
```
pub struct GetResourcePolicyRequest {
pub registry_name: Option<String>,
}
```
Fields
---
`registry_name: Option<String>`The name of the registry.
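Since the only field is optional, `Default::default()` already produces a usable request. A minimal sketch, using a local mirror of the generated struct for illustration (the registry name is hypothetical):

```rust
// Local mirror of rusoto_schemas::GetResourcePolicyRequest for illustration;
// the real type lives in the crate and derives the same traits.
#[derive(Clone, Debug, Default, PartialEq)]
struct GetResourcePolicyRequest {
    registry_name: Option<String>,
}

fn main() {
    // Default::default() leaves registry_name unset (None).
    let unnamed = GetResourcePolicyRequest::default();
    assert_eq!(unnamed.registry_name, None);

    // Or target a specific registry by name.
    let named = GetResourcePolicyRequest {
        registry_name: Some("my-registry".to_string()), // hypothetical registry name
    };
    println!("{:?} {:?}", unnamed, named);
}
```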
Trait Implementations
---
source### impl Clone for GetResourcePolicyRequest
source#### fn clone(&self) -> GetResourcePolicyRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetResourcePolicyRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetResourcePolicyRequest
source#### fn default() -> GetResourcePolicyRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<GetResourcePolicyRequest> for GetResourcePolicyRequest
source#### fn eq(&self, other: &GetResourcePolicyRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourcePolicyRequest) -> bool
This method tests for `!=`.
source### impl Serialize for GetResourcePolicyRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for GetResourcePolicyRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourcePolicyRequest
### impl Send for GetResourcePolicyRequest
### impl Sync for GetResourcePolicyRequest
### impl Unpin for GetResourcePolicyRequest
### impl UnwindSafe for GetResourcePolicyRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::GetResourcePolicyResponse
===
```
pub struct GetResourcePolicyResponse {
pub policy: Option<String>,
pub revision_id: Option<String>,
}
```
Fields
---
`policy: Option<String>`The resource-based policy.
`revision_id: Option<String>`The revision ID.
Trait Implementations
---
source### impl Clone for GetResourcePolicyResponse
source#### fn clone(&self) -> GetResourcePolicyResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GetResourcePolicyResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GetResourcePolicyResponse
source#### fn default() -> GetResourcePolicyResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for GetResourcePolicyResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<GetResourcePolicyResponse> for GetResourcePolicyResponse
source#### fn eq(&self, other: &GetResourcePolicyResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GetResourcePolicyResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetResourcePolicyResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourcePolicyResponse
### impl Send for GetResourcePolicyResponse
### impl Sync for GetResourcePolicyResponse
### impl Unpin for GetResourcePolicyResponse
### impl UnwindSafe for GetResourcePolicyResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::ListDiscoverersRequest
===
```
pub struct ListDiscoverersRequest {
pub discoverer_id_prefix: Option<String>,
pub limit: Option<i64>,
pub next_token: Option<String>,
pub source_arn_prefix: Option<String>,
}
```
Fields
---
`discoverer_id_prefix: Option<String>`Specifying this limits the results to only those discoverer IDs that start with the specified prefix.
`limit: Option<i64>`
`next_token: Option<String>`The token that specifies the next page of results to return. To request the first page, leave NextToken empty. The token will expire in 24 hours, and cannot be shared with other accounts.
`source_arn_prefix: Option<String>`Specifying this limits the results to only those ARNs that start with the specified prefix.
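The `next_token` field drives the usual pagination loop: start with `None`, then feed each response's token back into the next request until the service returns no token. A minimal sketch of that loop, using local mirrors of the request/response pair and a hypothetical in-memory `fake_list` function in place of the real service call:

```rust
// Local mirrors of rusoto_schemas::ListDiscoverersRequest/Response for illustration.
#[derive(Clone, Debug, Default)]
struct ListDiscoverersRequest {
    discoverer_id_prefix: Option<String>,
    limit: Option<i64>,
    next_token: Option<String>,
    source_arn_prefix: Option<String>,
}

#[derive(Clone, Debug, Default)]
struct ListDiscoverersResponse {
    discoverers: Option<Vec<String>>, // DiscovererSummary elided to String here
    next_token: Option<String>,
}

// Hypothetical stand-in for the service call: first page, then a final page.
fn fake_list(req: &ListDiscoverersRequest) -> ListDiscoverersResponse {
    match req.next_token.as_deref() {
        None => ListDiscoverersResponse {
            discoverers: Some(vec!["d1".into()]),
            next_token: Some("p2".into()),
        },
        Some("p2") => ListDiscoverersResponse {
            discoverers: Some(vec!["d2".into()]),
            next_token: None,
        },
        _ => ListDiscoverersResponse::default(),
    }
}

fn main() {
    // Pagination loop: leave next_token empty for the first page, then
    // thread each returned token into the next request until it is None.
    let mut req = ListDiscoverersRequest { limit: Some(50), ..Default::default() };
    let mut all = Vec::new();
    loop {
        let resp = fake_list(&req);
        all.extend(resp.discoverers.unwrap_or_default());
        match resp.next_token {
            Some(tok) => req.next_token = Some(tok),
            None => break,
        }
    }
    assert_eq!(all, vec!["d1".to_string(), "d2".to_string()]);
    println!("{:?}", all);
}
```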
Trait Implementations
---
source### impl Clone for ListDiscoverersRequest
source#### fn clone(&self) -> ListDiscoverersRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListDiscoverersRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListDiscoverersRequest
source#### fn default() -> ListDiscoverersRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<ListDiscoverersRequest> for ListDiscoverersRequest
source#### fn eq(&self, other: &ListDiscoverersRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListDiscoverersRequest) -> bool
This method tests for `!=`.
source### impl Serialize for ListDiscoverersRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListDiscoverersRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListDiscoverersRequest
### impl Send for ListDiscoverersRequest
### impl Sync for ListDiscoverersRequest
### impl Unpin for ListDiscoverersRequest
### impl UnwindSafe for ListDiscoverersRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::ListDiscoverersResponse
===
```
pub struct ListDiscoverersResponse {
pub discoverers: Option<Vec<DiscovererSummary>>,
pub next_token: Option<String>,
}
```
Fields
---
`discoverers: Option<Vec<DiscovererSummary>>`An array of DiscovererSummary information.
`next_token: Option<String>`The token that specifies the next page of results to return. To request the first page, leave NextToken empty. The token will expire in 24 hours, and cannot be shared with other accounts.
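The `next_token` field drives pagination: leave it empty for the first page and feed each response's token back into the next request until the service stops returning one. A minimal sketch of that loop, using local stand-in types that mirror the shapes above (the `fetch_page` function simulates the service call and is purely hypothetical):

```rust
// Local stand-in mirroring the ListDiscoverersResponse shape; with the real
// crate you would use rusoto_schemas::ListDiscoverersResponse instead.
#[derive(Default)]
struct ListDiscoverersResponse {
    discoverers: Option<Vec<String>>, // stand-in for Vec<DiscovererSummary>
    next_token: Option<String>,
}

// Simulated service call: two pages, then no further token.
fn fetch_page(token: Option<&str>) -> ListDiscoverersResponse {
    match token {
        None => ListDiscoverersResponse {
            discoverers: Some(vec!["disc-1".into()]),
            next_token: Some("page2".into()),
        },
        Some(_) => ListDiscoverersResponse {
            discoverers: Some(vec!["disc-2".into()]),
            next_token: None, // an absent token ends pagination
        },
    }
}

// Collect every discoverer by following next_token until it is None.
fn collect_all() -> Vec<String> {
    let mut all = Vec::new();
    let mut token: Option<String> = None; // first page: NextToken left empty
    loop {
        let page = fetch_page(token.as_deref());
        all.extend(page.discoverers.unwrap_or_default());
        match page.next_token {
            Some(t) => token = Some(t),
            None => break,
        }
    }
    all
}

fn main() {
    let all = collect_all();
    assert_eq!(all, vec!["disc-1".to_string(), "disc-2".to_string()]);
    println!("collected {} discoverers", all.len());
}
```

Because the token expires after 24 hours and is account-scoped, a real loop should restart from an empty token on expiry rather than retrying a stale one.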
Trait Implementations
---
source### impl Clone for ListDiscoverersResponse
source#### fn clone(&self) -> ListDiscoverersResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListDiscoverersResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListDiscoverersResponse
source#### fn default() -> ListDiscoverersResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListDiscoverersResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListDiscoverersResponse> for ListDiscoverersResponse
source#### fn eq(&self, other: &ListDiscoverersResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListDiscoverersResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListDiscoverersResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListDiscoverersResponse
### impl Send for ListDiscoverersResponse
### impl Sync for ListDiscoverersResponse
### impl Unpin for ListDiscoverersResponse
### impl UnwindSafe for ListDiscoverersResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::ListRegistriesRequest
===
```
pub struct ListRegistriesRequest {
pub limit: Option<i64>,
pub next_token: Option<String>,
pub registry_name_prefix: Option<String>,
pub scope: Option<String>,
}
```
Fields
---
`limit: Option<i64>`
`next_token: Option<String>`The token that specifies the next page of results to return. To request the first page, leave NextToken empty. The token will expire in 24 hours, and cannot be shared with other accounts.
`registry_name_prefix: Option<String>`Specifying this limits the results to only those registry names that start with the specified prefix.
`scope: Option<String>`Can be set to Local or AWS to limit responses to your custom registries, or the ones provided by AWS.
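Since the request type derives `Default`, a common pattern is to set only the fields you need and fill the rest with `..Default::default()`. A hedged sketch, using a local mirror of the struct so the snippet compiles on its own (with the real crate you would use `rusoto_schemas::ListRegistriesRequest`, which derives `Default` the same way):

```rust
// Local mirror of the ListRegistriesRequest shape shown above.
#[derive(Default, Debug)]
struct ListRegistriesRequest {
    limit: Option<i64>,
    next_token: Option<String>,
    registry_name_prefix: Option<String>,
    scope: Option<String>,
}

// Build a request that lists only custom registries whose names start
// with `prefix`. Scope is "Local" for custom registries or "AWS" for
// the ones provided by AWS.
fn local_registries_request(prefix: &str) -> ListRegistriesRequest {
    ListRegistriesRequest {
        scope: Some("Local".to_string()),
        registry_name_prefix: Some(prefix.to_string()),
        ..Default::default() // limit and next_token stay None (first page)
    }
}

fn main() {
    let req = local_registries_request("my-app");
    assert_eq!(req.scope.as_deref(), Some("Local"));
    assert!(req.next_token.is_none()); // empty token requests the first page
    println!("{:?}", req);
}
```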
Trait Implementations
---
source### impl Clone for ListRegistriesRequest
source#### fn clone(&self) -> ListRegistriesRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListRegistriesRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListRegistriesRequest
source#### fn default() -> ListRegistriesRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<ListRegistriesRequest> for ListRegistriesRequest
source#### fn eq(&self, other: &ListRegistriesRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListRegistriesRequest) -> bool
This method tests for `!=`.
source### impl Serialize for ListRegistriesRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListRegistriesRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListRegistriesRequest
### impl Send for ListRegistriesRequest
### impl Sync for ListRegistriesRequest
### impl Unpin for ListRegistriesRequest
### impl UnwindSafe for ListRegistriesRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_schemas::ListRegistriesResponse
===
```
pub struct ListRegistriesResponse {
pub next_token: Option<String>,
pub registries: Option<Vec<RegistrySummary>>,
}
```
Fields
---
`next_token: Option<String>`The token that specifies the next page of results to return. To request the first page, leave NextToken empty. The token will expire in 24 hours, and cannot be shared with other accounts.
`registries: Option<Vec<RegistrySummary>>`An array of registry summaries.
Trait Implementations
---
source### impl Clone for ListRegistriesResponse
source#### fn clone(&self) -> ListRegistriesResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListRegistriesResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListRegistriesResponse
source#### fn default() -> ListRegistriesResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListRegistriesResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListRegistriesResponse> for ListRegistriesResponse
source#### fn eq(&self, other: &ListRegistriesResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListRegistriesResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListRegistriesResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListRegistriesResponse
### impl Send for ListRegistriesResponse
### impl Sync for ListRegistriesResponse
### impl Unpin for ListRegistriesResponse
### impl UnwindSafe for ListRegistriesResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::ListSchemaVersionsRequest
===
```
pub struct ListSchemaVersionsRequest {
pub limit: Option<i64>,
pub next_token: Option<String>,
pub registry_name: String,
pub schema_name: String,
}
```
Fields
---
`limit: Option<i64>`
`next_token: Option<String>`The token that specifies the next page of results to return. To request the first page, leave NextToken empty. The token will expire in 24 hours, and cannot be shared with other accounts.
`registry_name: String`The name of the registry.
`schema_name: String`The name of the schema.
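Note that this request mixes required fields (`registry_name` and `schema_name` are plain `String`s) with optional pagination fields (`Option` types). A small sketch, again with a local mirror struct so the example is self-contained rather than the real crate type:

```rust
// Local mirror of the ListSchemaVersionsRequest shape shown above.
#[derive(Default, Debug)]
struct ListSchemaVersionsRequest {
    limit: Option<i64>,
    next_token: Option<String>,
    registry_name: String,
    schema_name: String,
}

// The required String fields must always be populated; the Option fields
// can be left at their defaults for an unpaginated first request.
fn versions_request(registry: &str, schema: &str) -> ListSchemaVersionsRequest {
    ListSchemaVersionsRequest {
        registry_name: registry.to_string(), // required
        schema_name: schema.to_string(),     // required
        ..Default::default()                 // limit, next_token left unset
    }
}

fn main() {
    let req = versions_request("my-registry", "my-schema");
    assert_eq!(req.registry_name, "my-registry");
    assert!(req.limit.is_none());
    println!("{:?}", req);
}
```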
Trait Implementations
---
source### impl Clone for ListSchemaVersionsRequest
source#### fn clone(&self) -> ListSchemaVersionsRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListSchemaVersionsRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListSchemaVersionsRequest
source#### fn default() -> ListSchemaVersionsRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<ListSchemaVersionsRequest> for ListSchemaVersionsRequest
source#### fn eq(&self, other: &ListSchemaVersionsRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListSchemaVersionsRequest) -> bool
This method tests for `!=`.
source### impl Serialize for ListSchemaVersionsRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListSchemaVersionsRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListSchemaVersionsRequest
### impl Send for ListSchemaVersionsRequest
### impl Sync for ListSchemaVersionsRequest
### impl Unpin for ListSchemaVersionsRequest
### impl UnwindSafe for ListSchemaVersionsRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_schemas::ListSchemaVersionsResponse
===
```
pub struct ListSchemaVersionsResponse {
pub next_token: Option<String>,
pub schema_versions: Option<Vec<SchemaVersionSummary>>,
}
```
Fields
---
`next_token: Option<String>`The token that specifies the next page of results to return. To request the first page, leave NextToken empty. The token will expire in 24 hours, and cannot be shared with other accounts.
`schema_versions: Option<Vec<SchemaVersionSummary>>`An array of schema version summaries.
Trait Implementations
---
source### impl Clone for ListSchemaVersionsResponse
source#### fn clone(&self) -> ListSchemaVersionsResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListSchemaVersionsResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListSchemaVersionsResponse
source#### fn default() -> ListSchemaVersionsResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListSchemaVersionsResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListSchemaVersionsResponse> for ListSchemaVersionsResponse
source#### fn eq(&self, other: &ListSchemaVersionsResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListSchemaVersionsResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListSchemaVersionsResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListSchemaVersionsResponse
### impl Send for ListSchemaVersionsResponse
### impl Sync for ListSchemaVersionsResponse
### impl Unpin for ListSchemaVersionsResponse
### impl UnwindSafe for ListSchemaVersionsResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::ListSchemasRequest
===
```
pub struct ListSchemasRequest {
pub limit: Option<i64>,
pub next_token: Option<String>,
pub registry_name: String,
pub schema_name_prefix: Option<String>,
}
```
Fields
---
`limit: Option<i64>`
`next_token: Option<String>`The token that specifies the next page of results to return. To request the first page, leave NextToken empty. The token will expire in 24 hours, and cannot be shared with other accounts.
`registry_name: String`The name of the registry.
`schema_name_prefix: Option<String>`Specifying this limits the results to only those schema names that start with the specified prefix.
Trait Implementations
---
source### impl Clone for ListSchemasRequest
source#### fn clone(&self) -> ListSchemasRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListSchemasRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListSchemasRequest
source#### fn default() -> ListSchemasRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<ListSchemasRequest> for ListSchemasRequest
source#### fn eq(&self, other: &ListSchemasRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListSchemasRequest) -> bool
This method tests for `!=`.
source### impl Serialize for ListSchemasRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListSchemasRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListSchemasRequest
### impl Send for ListSchemasRequest
### impl Sync for ListSchemasRequest
### impl Unpin for ListSchemasRequest
### impl UnwindSafe for ListSchemasRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_schemas::ListSchemasResponse
===
```
pub struct ListSchemasResponse {
pub next_token: Option<String>,
pub schemas: Option<Vec<SchemaSummary>>,
}
```
Fields
---
`next_token: Option<String>`The token that specifies the next page of results to return. To request the first page, leave NextToken empty. The token will expire in 24 hours, and cannot be shared with other accounts.
`schemas: Option<Vec<SchemaSummary>>`An array of schema summaries.
Trait Implementations
---
source### impl Clone for ListSchemasResponse
source#### fn clone(&self) -> ListSchemasResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListSchemasResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListSchemasResponse
source#### fn default() -> ListSchemasResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListSchemasResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListSchemasResponse> for ListSchemasResponse
source#### fn eq(&self, other: &ListSchemasResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListSchemasResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListSchemasResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListSchemasResponse
### impl Send for ListSchemasResponse
### impl Sync for ListSchemasResponse
### impl Unpin for ListSchemasResponse
### impl UnwindSafe for ListSchemasResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::ListTagsForResourceRequest
===
```
pub struct ListTagsForResourceRequest {
pub resource_arn: String,
}
```
Fields
---
`resource_arn: String`The ARN of the resource.
Trait Implementations
---
source### impl Clone for ListTagsForResourceRequest
source#### fn clone(&self) -> ListTagsForResourceRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListTagsForResourceRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListTagsForResourceRequest
source#### fn default() -> ListTagsForResourceRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<ListTagsForResourceRequest> for ListTagsForResourceRequest
source#### fn eq(&self, other: &ListTagsForResourceRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListTagsForResourceRequest) -> bool
This method tests for `!=`.
source### impl Serialize for ListTagsForResourceRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for ListTagsForResourceRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for ListTagsForResourceRequest
### impl Send for ListTagsForResourceRequest
### impl Sync for ListTagsForResourceRequest
### impl Unpin for ListTagsForResourceRequest
### impl UnwindSafe for ListTagsForResourceRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::ListTagsForResourceResponse
===
```
pub struct ListTagsForResourceResponse {
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`tags: Option<HashMap<String, String>>`
Trait Implementations
---
source### impl Clone for ListTagsForResourceResponse
source#### fn clone(&self) -> ListTagsForResourceResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListTagsForResourceResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListTagsForResourceResponse
source#### fn default() -> ListTagsForResourceResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for ListTagsForResourceResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<ListTagsForResourceResponse> for ListTagsForResourceResponse
source#### fn eq(&self, other: &ListTagsForResourceResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListTagsForResourceResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListTagsForResourceResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for ListTagsForResourceResponse
### impl Send for ListTagsForResourceResponse
### impl Sync for ListTagsForResourceResponse
### impl Unpin for ListTagsForResourceResponse
### impl UnwindSafe for ListTagsForResourceResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::PutCodeBindingRequest
===
```
pub struct PutCodeBindingRequest {
pub language: String,
pub registry_name: String,
pub schema_name: String,
pub schema_version: Option<String>,
}
```
Fields
---
`language: String`The language of the code binding.
`registry_name: String`The name of the registry.
`schema_name: String`The name of the schema.
`schema_version: Option<String>`Specifying this limits the results to only this schema version.
Trait Implementations
---
source### impl Clone for PutCodeBindingRequest
source#### fn clone(&self) -> PutCodeBindingRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PutCodeBindingRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PutCodeBindingRequest
source#### fn default() -> PutCodeBindingRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<PutCodeBindingRequest> for PutCodeBindingRequest
source#### fn eq(&self, other: &PutCodeBindingRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PutCodeBindingRequest) -> bool
This method tests for `!=`.
source### impl Serialize for PutCodeBindingRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for PutCodeBindingRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for PutCodeBindingRequest
### impl Send for PutCodeBindingRequest
### impl Sync for PutCodeBindingRequest
### impl Unpin for PutCodeBindingRequest
### impl UnwindSafe for PutCodeBindingRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::PutCodeBindingResponse
===
```
pub struct PutCodeBindingResponse {
pub creation_date: Option<f64>,
pub last_modified: Option<f64>,
pub schema_version: Option<String>,
pub status: Option<String>,
}
```
Fields
---
`creation_date: Option<f64>`The time and date that the code binding was created.
`last_modified: Option<f64>`The date and time that code bindings were modified.
`schema_version: Option<String>`The version number of the schema.
`status: Option<String>`The current status of code binding generation.
Trait Implementations
---
source### impl Clone for PutCodeBindingResponse
source#### fn clone(&self) -> PutCodeBindingResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PutCodeBindingResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PutCodeBindingResponse
source#### fn default() -> PutCodeBindingResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for PutCodeBindingResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<PutCodeBindingResponse> for PutCodeBindingResponse
source#### fn eq(&self, other: &PutCodeBindingResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PutCodeBindingResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for PutCodeBindingResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for PutCodeBindingResponse
### impl Send for PutCodeBindingResponse
### impl Sync for PutCodeBindingResponse
### impl Unpin for PutCodeBindingResponse
### impl UnwindSafe for PutCodeBindingResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::PutResourcePolicyRequest
===
```
pub struct PutResourcePolicyRequest {
pub policy: String,
pub registry_name: Option<String>,
pub revision_id: Option<String>,
}
```
The name of the policy.
Fields
---
`policy: String`The resource-based policy.
`registry_name: Option<String>`The name of the registry.
`revision_id: Option<String>`The revision ID of the policy.
Trait Implementations
---
source### impl Clone for PutResourcePolicyRequest
source#### fn clone(&self) -> PutResourcePolicyRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PutResourcePolicyRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PutResourcePolicyRequest
source#### fn default() -> PutResourcePolicyRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<PutResourcePolicyRequest> for PutResourcePolicyRequest
source#### fn eq(&self, other: &PutResourcePolicyRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PutResourcePolicyRequest) -> bool
This method tests for `!=`.
source### impl Serialize for PutResourcePolicyRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for PutResourcePolicyRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for PutResourcePolicyRequest
### impl Send for PutResourcePolicyRequest
### impl Sync for PutResourcePolicyRequest
### impl Unpin for PutResourcePolicyRequest
### impl UnwindSafe for PutResourcePolicyRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::PutResourcePolicyResponse
===
```
pub struct PutResourcePolicyResponse {
pub policy: Option<String>,
pub revision_id: Option<String>,
}
```
Fields
---
`policy: Option<String>`The resource-based policy.
`revision_id: Option<String>`The revision ID of the policy.
Trait Implementations
---
source### impl Clone for PutResourcePolicyResponse
source#### fn clone(&self) -> PutResourcePolicyResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PutResourcePolicyResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PutResourcePolicyResponse
source#### fn default() -> PutResourcePolicyResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for PutResourcePolicyResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<PutResourcePolicyResponse> for PutResourcePolicyResponse
source#### fn eq(&self, other: &PutResourcePolicyResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PutResourcePolicyResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for PutResourcePolicyResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for PutResourcePolicyResponse
### impl Send for PutResourcePolicyResponse
### impl Sync for PutResourcePolicyResponse
### impl Unpin for PutResourcePolicyResponse
### impl UnwindSafe for PutResourcePolicyResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::RegistrySummary
===
```
pub struct RegistrySummary {
pub registry_arn: Option<String>,
pub registry_name: Option<String>,
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`registry_arn: Option<String>`The ARN of the registry.
`registry_name: Option<String>`The name of the registry.
`tags: Option<HashMap<String, String>>`Tags associated with the registry.
Trait Implementations
---
source### impl Clone for RegistrySummary
source#### fn clone(&self) -> RegistrySummary
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RegistrySummary
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RegistrySummary
source#### fn default() -> RegistrySummary
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for RegistrySummary
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<RegistrySummary> for RegistrySummary
source#### fn eq(&self, other: &RegistrySummary) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RegistrySummary) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RegistrySummary
Auto Trait Implementations
---
### impl RefUnwindSafe for RegistrySummary
### impl Send for RegistrySummary
### impl Sync for RegistrySummary
### impl Unpin for RegistrySummary
### impl UnwindSafe for RegistrySummary
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::SchemaSummary
===
```
pub struct SchemaSummary {
pub last_modified: Option<f64>,
pub schema_arn: Option<String>,
pub schema_name: Option<String>,
pub tags: Option<HashMap<String, String>>,
pub version_count: Option<i64>,
}
```
A summary of schema details.
Fields
---
`last_modified: Option<f64>`The date and time that schema was modified.
`schema_arn: Option<String>`The ARN of the schema.
`schema_name: Option<String>`The name of the schema.
`tags: Option<HashMap<String, String>>`Tags associated with the schema.
`version_count: Option<i64>`The number of versions available for the schema.
Trait Implementations
---
source### impl Clone for SchemaSummary
source#### fn clone(&self) -> SchemaSummary
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for SchemaSummary
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for SchemaSummary
source#### fn default() -> SchemaSummary
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for SchemaSummary
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<SchemaSummary> for SchemaSummary
source#### fn eq(&self, other: &SchemaSummary) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &SchemaSummary) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for SchemaSummary
Auto Trait Implementations
---
### impl RefUnwindSafe for SchemaSummary
### impl Send for SchemaSummary
### impl Sync for SchemaSummary
### impl Unpin for SchemaSummary
### impl UnwindSafe for SchemaSummary
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::SchemaVersionSummary
===
```
pub struct SchemaVersionSummary {
pub schema_arn: Option<String>,
pub schema_name: Option<String>,
pub schema_version: Option<String>,
pub type_: Option<String>,
}
```
Fields
---
`schema_arn: Option<String>`The ARN of the schema version.
`schema_name: Option<String>`The name of the schema.
`schema_version: Option<String>`The version number of the schema.
`type_: Option<String>`The type of schema.
Trait Implementations
---
source### impl Clone for SchemaVersionSummary
source#### fn clone(&self) -> SchemaVersionSummary
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for SchemaVersionSummary
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for SchemaVersionSummary
source#### fn default() -> SchemaVersionSummary
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for SchemaVersionSummary
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<SchemaVersionSummary> for SchemaVersionSummary
source#### fn eq(&self, other: &SchemaVersionSummary) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &SchemaVersionSummary) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for SchemaVersionSummary
Auto Trait Implementations
---
### impl RefUnwindSafe for SchemaVersionSummary
### impl Send for SchemaVersionSummary
### impl Sync for SchemaVersionSummary
### impl Unpin for SchemaVersionSummary
### impl UnwindSafe for SchemaVersionSummary
Struct rusoto_schemas::SearchSchemaSummary
===
```
pub struct SearchSchemaSummary {
pub registry_name: Option<String>,
pub schema_arn: Option<String>,
pub schema_name: Option<String>,
pub schema_versions: Option<Vec<SearchSchemaVersionSummary>>,
}
```
Fields
---
`registry_name: Option<String>`The name of the registry.
`schema_arn: Option<String>`The ARN of the schema.
`schema_name: Option<String>`The name of the schema.
`schema_versions: Option<Vec<SearchSchemaVersionSummary>>`An array of schema version summaries.
Trait Implementations
---
source### impl Clone for SearchSchemaSummary
source#### fn clone(&self) -> SearchSchemaSummary
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for SearchSchemaSummary
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for SearchSchemaSummary
source#### fn default() -> SearchSchemaSummary
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for SearchSchemaSummary
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<SearchSchemaSummary> for SearchSchemaSummary
source#### fn eq(&self, other: &SearchSchemaSummary) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &SearchSchemaSummary) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for SearchSchemaSummary
Auto Trait Implementations
---
### impl RefUnwindSafe for SearchSchemaSummary
### impl Send for SearchSchemaSummary
### impl Sync for SearchSchemaSummary
### impl Unpin for SearchSchemaSummary
### impl UnwindSafe for SearchSchemaSummary
Struct rusoto_schemas::SearchSchemaVersionSummary
===
```
pub struct SearchSchemaVersionSummary {
pub created_date: Option<f64>,
pub schema_version: Option<String>,
pub type_: Option<String>,
}
```
Fields
---
`created_date: Option<f64>`The date the schema version was created.
`schema_version: Option<String>`The version number of the schema.
`type_: Option<String>`The type of schema.
Trait Implementations
---
source### impl Clone for SearchSchemaVersionSummary
source#### fn clone(&self) -> SearchSchemaVersionSummary
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for SearchSchemaVersionSummary
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for SearchSchemaVersionSummary
source#### fn default() -> SearchSchemaVersionSummary
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for SearchSchemaVersionSummary
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<SearchSchemaVersionSummary> for SearchSchemaVersionSummary
source#### fn eq(&self, other: &SearchSchemaVersionSummary) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &SearchSchemaVersionSummary) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for SearchSchemaVersionSummary
Auto Trait Implementations
---
### impl RefUnwindSafe for SearchSchemaVersionSummary
### impl Send for SearchSchemaVersionSummary
### impl Sync for SearchSchemaVersionSummary
### impl Unpin for SearchSchemaVersionSummary
### impl UnwindSafe for SearchSchemaVersionSummary
Struct rusoto_schemas::SearchSchemasRequest
===
```
pub struct SearchSchemasRequest {
pub keywords: String,
pub limit: Option<i64>,
pub next_token: Option<String>,
pub registry_name: String,
}
```
Fields
---
`keywords: String`Specifying this limits the results to only schemas that include the provided keywords.
`limit: Option<i64>`
`next_token: Option<String>`The token that specifies the next page of results to return. To request the first page, leave NextToken empty. The token will expire in 24 hours, and cannot be shared with other accounts.
`registry_name: String`The name of the registry.
Trait Implementations
---
source### impl Clone for SearchSchemasRequest
source#### fn clone(&self) -> SearchSchemasRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for SearchSchemasRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for SearchSchemasRequest
source#### fn default() -> SearchSchemasRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<SearchSchemasRequest> for SearchSchemasRequest
source#### fn eq(&self, other: &SearchSchemasRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &SearchSchemasRequest) -> bool
This method tests for `!=`.
source### impl Serialize for SearchSchemasRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for SearchSchemasRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for SearchSchemasRequest
### impl Send for SearchSchemasRequest
### impl Sync for SearchSchemasRequest
### impl Unpin for SearchSchemasRequest
### impl UnwindSafe for SearchSchemasRequest
Struct rusoto_schemas::SearchSchemasResponse
===
```
pub struct SearchSchemasResponse {
pub next_token: Option<String>,
pub schemas: Option<Vec<SearchSchemaSummary>>,
}
```
Fields
---
`next_token: Option<String>`The token that specifies the next page of results to return. To request the first page, leave NextToken empty. The token will expire in 24 hours, and cannot be shared with other accounts.
`schemas: Option<Vec<SearchSchemaSummary>>`An array of SearchSchemaSummary information.
Trait Implementations
---
source### impl Clone for SearchSchemasResponse
source#### fn clone(&self) -> SearchSchemasResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for SearchSchemasResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for SearchSchemasResponse
source#### fn default() -> SearchSchemasResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for SearchSchemasResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<SearchSchemasResponse> for SearchSchemasResponse
source#### fn eq(&self, other: &SearchSchemasResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &SearchSchemasResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for SearchSchemasResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for SearchSchemasResponse
### impl Send for SearchSchemasResponse
### impl Sync for SearchSchemasResponse
### impl Unpin for SearchSchemasResponse
### impl UnwindSafe for SearchSchemasResponse
Struct rusoto_schemas::StartDiscovererRequest
===
```
pub struct StartDiscovererRequest {
pub discoverer_id: String,
}
```
Fields
---
`discoverer_id: String`The ID of the discoverer.
Trait Implementations
---
source### impl Clone for StartDiscovererRequest
source#### fn clone(&self) -> StartDiscovererRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for StartDiscovererRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for StartDiscovererRequest
source#### fn default() -> StartDiscovererRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<StartDiscovererRequest> for StartDiscovererRequest
source#### fn eq(&self, other: &StartDiscovererRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &StartDiscovererRequest) -> bool
This method tests for `!=`.
source### impl Serialize for StartDiscovererRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for StartDiscovererRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for StartDiscovererRequest
### impl Send for StartDiscovererRequest
### impl Sync for StartDiscovererRequest
### impl Unpin for StartDiscovererRequest
### impl UnwindSafe for StartDiscovererRequest
Struct rusoto_schemas::StartDiscovererResponse
===
```
pub struct StartDiscovererResponse {
pub discoverer_id: Option<String>,
pub state: Option<String>,
}
```
Fields
---
`discoverer_id: Option<String>` The ID of the discoverer.
`state: Option<String>` The state of the discoverer.
Trait Implementations
---
### impl Clone for StartDiscovererResponse
#### fn clone(&self) -> StartDiscovererResponse
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for StartDiscovererResponse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for StartDiscovererResponse
#### fn default() -> StartDiscovererResponse
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for StartDiscovererResponse
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.
### impl PartialEq<StartDiscovererResponse> for StartDiscovererResponse
#### fn eq(&self, other: &StartDiscovererResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &StartDiscovererResponse) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for StartDiscovererResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for StartDiscovererResponse
### impl Send for StartDiscovererResponse
### impl Sync for StartDiscovererResponse
### impl Unpin for StartDiscovererResponse
### impl UnwindSafe for StartDiscovererResponse
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Struct rusoto_schemas::StopDiscovererRequest
===
```
pub struct StopDiscovererRequest {
pub discoverer_id: String,
}
```
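Constructing a request is mechanical thanks to the derived `Default` and `PartialEq` impls listed below. A minimal sketch, using a local mirror of the definition above so it compiles without the `rusoto_schemas` crate (the discoverer ID is hypothetical):

```rust
// Local mirror of the definition above; the real type lives in rusoto_schemas.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct StopDiscovererRequest {
    pub discoverer_id: String,
}

fn main() {
    let req = StopDiscovererRequest {
        discoverer_id: "events-discoverer-1".to_string(), // hypothetical ID
    };
    // Derived PartialEq lets requests be compared directly.
    assert_ne!(req, StopDiscovererRequest::default());
    println!("{:?}", req);
}
```

With the real crate, the populated request would typically be handed to the service client's corresponding `stop_discoverer` call.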
Fields
---
`discoverer_id: String` The ID of the discoverer.
Trait Implementations
---
### impl Clone for StopDiscovererRequest
#### fn clone(&self) -> StopDiscovererRequest
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for StopDiscovererRequest
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for StopDiscovererRequest
#### fn default() -> StopDiscovererRequest
Returns the “default value” for a type.
### impl PartialEq<StopDiscovererRequest> for StopDiscovererRequest
#### fn eq(&self, other: &StopDiscovererRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &StopDiscovererRequest) -> bool
This method tests for `!=`.
### impl Serialize for StopDiscovererRequest
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for StopDiscovererRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for StopDiscovererRequest
### impl Send for StopDiscovererRequest
### impl Sync for StopDiscovererRequest
### impl Unpin for StopDiscovererRequest
### impl UnwindSafe for StopDiscovererRequest
Blanket Implementations
---
The blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `StartDiscovererResponse` above.
Struct rusoto_schemas::StopDiscovererResponse
===
```
pub struct StopDiscovererResponse {
pub discoverer_id: Option<String>,
pub state: Option<String>,
}
```
Fields
---
`discoverer_id: Option<String>` The ID of the discoverer.
`state: Option<String>` The state of the discoverer.
Trait Implementations
---
### impl Clone for StopDiscovererResponse
#### fn clone(&self) -> StopDiscovererResponse
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for StopDiscovererResponse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for StopDiscovererResponse
#### fn default() -> StopDiscovererResponse
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for StopDiscovererResponse
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.
### impl PartialEq<StopDiscovererResponse> for StopDiscovererResponse
#### fn eq(&self, other: &StopDiscovererResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &StopDiscovererResponse) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for StopDiscovererResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for StopDiscovererResponse
### impl Send for StopDiscovererResponse
### impl Sync for StopDiscovererResponse
### impl Unpin for StopDiscovererResponse
### impl UnwindSafe for StopDiscovererResponse
Blanket Implementations
---
The blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `StartDiscovererResponse` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Struct rusoto_schemas::TagResourceRequest
===
```
pub struct TagResourceRequest {
pub resource_arn: String,
pub tags: HashMap<String, String>,
}
```
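The `tags` field is a plain `HashMap`, so tagging a resource is just map construction plus the ARN. A minimal sketch, using a local mirror of the definition above so it compiles without the `rusoto_schemas` crate (the ARN and tag values are hypothetical):

```rust
use std::collections::HashMap;

// Local mirror of the definition above; the real type lives in rusoto_schemas.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct TagResourceRequest {
    pub resource_arn: String,
    pub tags: HashMap<String, String>,
}

fn main() {
    let mut tags = HashMap::new();
    tags.insert("team".to_string(), "platform".to_string()); // hypothetical tag

    let req = TagResourceRequest {
        // Hypothetical ARN for illustration only.
        resource_arn: "arn:aws:schemas:us-east-1:111122223333:registry/demo".to_string(),
        tags,
    };
    assert_eq!(req.tags.get("team").map(String::as_str), Some("platform"));
    println!("{:?}", req);
}
```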
Fields
---
`resource_arn: String` The ARN of the resource.
`tags: HashMap<String, String>` Tags associated with the resource.
Trait Implementations
---
### impl Clone for TagResourceRequest
#### fn clone(&self) -> TagResourceRequest
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for TagResourceRequest
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for TagResourceRequest
#### fn default() -> TagResourceRequest
Returns the “default value” for a type.
### impl PartialEq<TagResourceRequest> for TagResourceRequest
#### fn eq(&self, other: &TagResourceRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &TagResourceRequest) -> bool
This method tests for `!=`.
### impl Serialize for TagResourceRequest
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for TagResourceRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for TagResourceRequest
### impl Send for TagResourceRequest
### impl Sync for TagResourceRequest
### impl Unpin for TagResourceRequest
### impl UnwindSafe for TagResourceRequest
Blanket Implementations
---
The blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `StartDiscovererResponse` above.
Struct rusoto_schemas::UntagResourceRequest
===
```
pub struct UntagResourceRequest {
pub resource_arn: String,
pub tag_keys: Vec<String>,
}
```
Fields
---
`resource_arn: String` The ARN of the resource.
`tag_keys: Vec<String>` Keys of key-value pairs.
Trait Implementations
---
### impl Clone for UntagResourceRequest
#### fn clone(&self) -> UntagResourceRequest
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for UntagResourceRequest
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for UntagResourceRequest
#### fn default() -> UntagResourceRequest
Returns the “default value” for a type.
### impl PartialEq<UntagResourceRequest> for UntagResourceRequest
#### fn eq(&self, other: &UntagResourceRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &UntagResourceRequest) -> bool
This method tests for `!=`.
### impl Serialize for UntagResourceRequest
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for UntagResourceRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for UntagResourceRequest
### impl Send for UntagResourceRequest
### impl Sync for UntagResourceRequest
### impl Unpin for UntagResourceRequest
### impl UnwindSafe for UntagResourceRequest
Blanket Implementations
---
The blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `StartDiscovererResponse` above.
Struct rusoto_schemas::UpdateDiscovererRequest
===
```
pub struct UpdateDiscovererRequest {
pub description: Option<String>,
pub discoverer_id: String,
}
```
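Because `description` is an `Option<String>`, an update request distinguishes "supply a new description" (`Some`) from "no description supplied" (`None`, the `Default`). A minimal sketch, using a local mirror of the definition above so it compiles without the `rusoto_schemas` crate (the discoverer ID is hypothetical):

```rust
// Local mirror of the definition above; the real type lives in rusoto_schemas.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct UpdateDiscovererRequest {
    pub description: Option<String>,
    pub discoverer_id: String,
}

fn main() {
    let req = UpdateDiscovererRequest {
        description: Some("Discovers schemas on the default event bus".to_string()),
        discoverer_id: "events-discoverer-1".to_string(), // hypothetical ID
    };
    assert!(req.description.is_some());

    // Struct-update syntax with Default leaves description as None.
    let bare = UpdateDiscovererRequest {
        discoverer_id: req.discoverer_id.clone(),
        ..Default::default()
    };
    assert_eq!(bare.description, None);
    println!("{:?}", req);
}
```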
Fields
---
`description: Option<String>` The description of the discoverer to update.
`discoverer_id: String` The ID of the discoverer.
Trait Implementations
---
### impl Clone for UpdateDiscovererRequest
#### fn clone(&self) -> UpdateDiscovererRequest
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for UpdateDiscovererRequest
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for UpdateDiscovererRequest
#### fn default() -> UpdateDiscovererRequest
Returns the “default value” for a type.
### impl PartialEq<UpdateDiscovererRequest> for UpdateDiscovererRequest
#### fn eq(&self, other: &UpdateDiscovererRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &UpdateDiscovererRequest) -> bool
This method tests for `!=`.
### impl Serialize for UpdateDiscovererRequest
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for UpdateDiscovererRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateDiscovererRequest
### impl Send for UpdateDiscovererRequest
### impl Sync for UpdateDiscovererRequest
### impl Unpin for UpdateDiscovererRequest
### impl UnwindSafe for UpdateDiscovererRequest
Blanket Implementations
---
The blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `StartDiscovererResponse` above.
Struct rusoto_schemas::UpdateDiscovererResponse
===
```
pub struct UpdateDiscovererResponse {
pub description: Option<String>,
pub discoverer_arn: Option<String>,
pub discoverer_id: Option<String>,
pub source_arn: Option<String>,
pub state: Option<String>,
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`description: Option<String>` The description of the discoverer.
`discoverer_arn: Option<String>` The ARN of the discoverer.
`discoverer_id: Option<String>` The ID of the discoverer.
`source_arn: Option<String>` The ARN of the event bus.
`state: Option<String>` The state of the discoverer.
`tags: Option<HashMap<String, String>>` Tags associated with the resource.
Trait Implementations
---
### impl Clone for UpdateDiscovererResponse
#### fn clone(&self) -> UpdateDiscovererResponse
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for UpdateDiscovererResponse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for UpdateDiscovererResponse
#### fn default() -> UpdateDiscovererResponse
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for UpdateDiscovererResponse
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.
### impl PartialEq<UpdateDiscovererResponse> for UpdateDiscovererResponse
#### fn eq(&self, other: &UpdateDiscovererResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &UpdateDiscovererResponse) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for UpdateDiscovererResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateDiscovererResponse
### impl Send for UpdateDiscovererResponse
### impl Sync for UpdateDiscovererResponse
### impl Unpin for UpdateDiscovererResponse
### impl UnwindSafe for UpdateDiscovererResponse
Blanket Implementations
---
The blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `StartDiscovererResponse` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Struct rusoto_schemas::UpdateRegistryRequest
===
```
pub struct UpdateRegistryRequest {
pub description: Option<String>,
pub registry_name: String,
}
```
Updates the registry.
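As with the other update requests, only `registry_name` is required; leaving `description` as `None` (its `Default`) means no new description is supplied. A minimal sketch, using a local mirror of the definition above so it compiles without the `rusoto_schemas` crate (the registry name is hypothetical):

```rust
// Local mirror of the definition above; the real type lives in rusoto_schemas.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct UpdateRegistryRequest {
    pub description: Option<String>,
    pub registry_name: String,
}

fn main() {
    let req = UpdateRegistryRequest {
        registry_name: "demo-registry".to_string(), // hypothetical name
        ..Default::default() // description stays None
    };
    assert_eq!(req.description, None);
    println!("{:?}", req);
}
```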
Fields
---
`description: Option<String>` The description of the registry to update.
`registry_name: String` The name of the registry.
Trait Implementations
---
### impl Clone for UpdateRegistryRequest
#### fn clone(&self) -> UpdateRegistryRequest
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for UpdateRegistryRequest
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for UpdateRegistryRequest
#### fn default() -> UpdateRegistryRequest
Returns the “default value” for a type.
### impl PartialEq<UpdateRegistryRequest> for UpdateRegistryRequest
#### fn eq(&self, other: &UpdateRegistryRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &UpdateRegistryRequest) -> bool
This method tests for `!=`.
### impl Serialize for UpdateRegistryRequest
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for UpdateRegistryRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateRegistryRequest
### impl Send for UpdateRegistryRequest
### impl Sync for UpdateRegistryRequest
### impl Unpin for UpdateRegistryRequest
### impl UnwindSafe for UpdateRegistryRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::UpdateRegistryResponse
===
```
pub struct UpdateRegistryResponse {
pub description: Option<String>,
pub registry_arn: Option<String>,
pub registry_name: Option<String>,
pub tags: Option<HashMap<String, String>>,
}
```
Fields
---
`description: Option<String>`The description of the registry.
`registry_arn: Option<String>`The ARN of the registry.
`registry_name: Option<String>`The name of the registry.
`tags: Option<HashMap<String, String>>`Tags associated with the registry.
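Every field of the response is optional, so callers must decide how to handle absent values. A sketch, using a local mirror of the generated type for illustration, of reading the ARN with a fallback:

```rust
use std::collections::HashMap;

// Local mirror of the generated type, for illustration only;
// the real definition lives in the rusoto_schemas crate.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct UpdateRegistryResponse {
    pub description: Option<String>,
    pub registry_arn: Option<String>,
    pub registry_name: Option<String>,
    pub tags: Option<HashMap<String, String>>,
}

// All fields are Option, so decide up front how to treat a
// missing value; here we fall back to a placeholder string.
fn registry_arn_or_unknown(resp: &UpdateRegistryResponse) -> &str {
    resp.registry_arn.as_deref().unwrap_or("<unknown>")
}

fn main() {
    let resp = UpdateRegistryResponse {
        registry_name: Some("my-registry".to_string()),
        ..Default::default()
    };
    assert_eq!(registry_arn_or_unknown(&resp), "<unknown>");
}
```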
Trait Implementations
---
source### impl Clone for UpdateRegistryResponse
source#### fn clone(&self) -> UpdateRegistryResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UpdateRegistryResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UpdateRegistryResponse
source#### fn default() -> UpdateRegistryResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for UpdateRegistryResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<UpdateRegistryResponse> for UpdateRegistryResponse
source#### fn eq(&self, other: &UpdateRegistryResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateRegistryResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateRegistryResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateRegistryResponse
### impl Send for UpdateRegistryResponse
### impl Sync for UpdateRegistryResponse
### impl Unpin for UpdateRegistryResponse
### impl UnwindSafe for UpdateRegistryResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct rusoto_schemas::UpdateSchemaRequest
===
```
pub struct UpdateSchemaRequest {
pub client_token_id: Option<String>,
pub content: Option<String>,
pub description: Option<String>,
pub registry_name: String,
pub schema_name: String,
pub type_: Option<String>,
}
```
Fields
---
`client_token_id: Option<String>`The ID of the client token.
`content: Option<String>`The source of the schema definition.
`description: Option<String>`The description of the schema.
`registry_name: String`The name of the registry.
`schema_name: String`The name of the schema.
`type_: Option<String>`The schema type for the events schema.
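With six fields, struct-update syntax keeps call sites short: set the required names plus the content, and default the rest. A minimal sketch, using a local mirror of the generated type; the schema name and the `OpenApi3` type string are assumptions for illustration:

```rust
// Local mirror of the generated type, for illustration only;
// the real definition lives in the rusoto_schemas crate.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct UpdateSchemaRequest {
    pub client_token_id: Option<String>,
    pub content: Option<String>,
    pub description: Option<String>,
    pub registry_name: String,
    pub schema_name: String,
    pub type_: Option<String>,
}

fn main() {
    // Only the fields we care about are spelled out; the
    // remaining ones come from Default via `..` syntax.
    let req = UpdateSchemaRequest {
        registry_name: "my-registry".to_string(),
        schema_name: "order-created".to_string(),
        content: Some("{\"openapi\": \"3.0.0\"}".to_string()),
        type_: Some("OpenApi3".to_string()),
        ..Default::default()
    };
    assert!(req.client_token_id.is_none());
    println!("{:?}", req);
}
```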
Trait Implementations
---
source### impl Clone for UpdateSchemaRequest
source#### fn clone(&self) -> UpdateSchemaRequest
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UpdateSchemaRequest
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UpdateSchemaRequest
source#### fn default() -> UpdateSchemaRequest
Returns the “default value” for a type. Read more
source### impl PartialEq<UpdateSchemaRequest> for UpdateSchemaRequest
source#### fn eq(&self, other: &UpdateSchemaRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateSchemaRequest) -> bool
This method tests for `!=`.
source### impl Serialize for UpdateSchemaRequest
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl StructuralPartialEq for UpdateSchemaRequest
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateSchemaRequest
### impl Send for UpdateSchemaRequest
### impl Sync for UpdateSchemaRequest
### impl Unpin for UpdateSchemaRequest
### impl UnwindSafe for UpdateSchemaRequest
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_schemas::UpdateSchemaResponse
===
```
pub struct UpdateSchemaResponse {
pub description: Option<String>,
pub last_modified: Option<f64>,
pub schema_arn: Option<String>,
pub schema_name: Option<String>,
pub schema_version: Option<String>,
pub tags: Option<HashMap<String, String>>,
pub type_: Option<String>,
pub version_created_date: Option<f64>,
}
```
Fields
---
`description: Option<String>`The description of the schema.
`last_modified: Option<f64>`The date and time that schema was modified.
`schema_arn: Option<String>`The ARN of the schema.
`schema_name: Option<String>`The name of the schema.
`schema_version: Option<String>`The version number of the schema.
`tags: Option<HashMap<String, String>>`
`type_: Option<String>`The type of the schema.
`version_created_date: Option<f64>`The date the schema version was created.
Trait Implementations
---
source### impl Clone for UpdateSchemaResponse
source#### fn clone(&self) -> UpdateSchemaResponse
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UpdateSchemaResponse
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UpdateSchemaResponse
source#### fn default() -> UpdateSchemaResponse
Returns the “default value” for a type. Read more
source### impl<'de> Deserialize<'de> for UpdateSchemaResponse
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl PartialEq<UpdateSchemaResponse> for UpdateSchemaResponse
source#### fn eq(&self, other: &UpdateSchemaResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateSchemaResponse) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateSchemaResponse
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateSchemaResponse
### impl Send for UpdateSchemaResponse
### impl Sync for UpdateSchemaResponse
### impl Unpin for UpdateSchemaResponse
### impl UnwindSafe for UpdateSchemaResponse
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Enum rusoto_schemas::CreateDiscovererError
===
```
pub enum CreateDiscovererError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by CreateDiscoverer
Variants
---
### `BadRequest(String)`
### `Conflict(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
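The variants split cleanly into caller mistakes and transient server-side failures, which matters when deciding whether to retry. A sketch using a local mirror of the enum (the `is_retryable` helper is a made-up name, not part of the crate):

```rust
// Local mirror of the generated error enum, for illustration only;
// the real definition lives in the rusoto_schemas crate.
#[derive(Debug, PartialEq)]
pub enum CreateDiscovererError {
    BadRequest(String),
    Conflict(String),
    Forbidden(String),
    InternalServerError(String),
    ServiceUnavailable(String),
    Unauthorized(String),
}

// Hypothetical helper: transient server-side failures are worth
// retrying, while caller mistakes (bad request, auth) are not.
fn is_retryable(err: &CreateDiscovererError) -> bool {
    matches!(
        err,
        CreateDiscovererError::InternalServerError(_)
            | CreateDiscovererError::ServiceUnavailable(_)
    )
}

fn main() {
    let err = CreateDiscovererError::ServiceUnavailable("try later".to_string());
    assert!(is_retryable(&err));
    assert!(!is_retryable(&CreateDiscovererError::BadRequest("bad arn".to_string())));
}
```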
Implementations
---
source### impl CreateDiscovererError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateDiscovererError>
Trait Implementations
---
source### impl Debug for CreateDiscovererError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateDiscovererError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateDiscovererError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateDiscovererError> for CreateDiscovererError
source#### fn eq(&self, other: &CreateDiscovererError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateDiscovererError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateDiscovererError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateDiscovererError
### impl Send for CreateDiscovererError
### impl Sync for CreateDiscovererError
### impl Unpin for CreateDiscovererError
### impl UnwindSafe for CreateDiscovererError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::CreateRegistryError
===
```
pub enum CreateRegistryError {
BadRequest(String),
Conflict(String),
Forbidden(String),
InternalServerError(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by CreateRegistry
Variants
---
### `BadRequest(String)`
### `Conflict(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
source### impl CreateRegistryError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateRegistryError>
Trait Implementations
---
source### impl Debug for CreateRegistryError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateRegistryError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateRegistryError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateRegistryError> for CreateRegistryError
source#### fn eq(&self, other: &CreateRegistryError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateRegistryError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateRegistryError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateRegistryError
### impl Send for CreateRegistryError
### impl Sync for CreateRegistryError
### impl Unpin for CreateRegistryError
### impl UnwindSafe for CreateRegistryError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::CreateSchemaError
===
```
pub enum CreateSchemaError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
ServiceUnavailable(String),
}
```
Errors returned by CreateSchema
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `ServiceUnavailable(String)`
Implementations
---
source### impl CreateSchemaError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateSchemaError>
Trait Implementations
---
source### impl Debug for CreateSchemaError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateSchemaError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateSchemaError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateSchemaError> for CreateSchemaError
source#### fn eq(&self, other: &CreateSchemaError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateSchemaError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateSchemaError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateSchemaError
### impl Send for CreateSchemaError
### impl Sync for CreateSchemaError
### impl Unpin for CreateSchemaError
### impl UnwindSafe for CreateSchemaError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::DeleteDiscovererError
===
```
pub enum DeleteDiscovererError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by DeleteDiscoverer
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
source### impl DeleteDiscovererError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteDiscovererError>
Trait Implementations
---
source### impl Debug for DeleteDiscovererError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteDiscovererError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteDiscovererError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteDiscovererError> for DeleteDiscovererError
source#### fn eq(&self, other: &DeleteDiscovererError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDiscovererError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDiscovererError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDiscovererError
### impl Send for DeleteDiscovererError
### impl Sync for DeleteDiscovererError
### impl Unpin for DeleteDiscovererError
### impl UnwindSafe for DeleteDiscovererError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · #### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · #### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · #### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
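The blanket `impl<T, U> TryFrom<U> for T where U: Into<T>` listed above means that any infallible `From` conversion is also available through `try_from`, with `Infallible` as the error type. A minimal std-only sketch (unrelated to rusoto, purely to illustrate the blanket impl):

```rust
fn main() {
    // `u32: From<u8>` holds, so the blanket impl supplies
    // `TryFrom<u8> for u32` with `Error = Infallible`.
    let n = u32::try_from(7u8).unwrap(); // cannot actually fail
    assert_eq!(n, 7);
}
```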
Enum rusoto_schemas::DeleteRegistryError
===
```
pub enum DeleteRegistryError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by DeleteRegistry
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
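In application code these variants are usually handled with a `match`. The sketch below mirrors the enum locally so it compiles without the `rusoto_schemas` crate; `is_retryable` is an illustrative helper, not part of the crate's API:

```rust
// Local mirror of rusoto_schemas::DeleteRegistryError so the sketch is
// self-contained; the real enum has the same six variants.
#[derive(Debug, PartialEq)]
enum DeleteRegistryError {
    BadRequest(String),
    Forbidden(String),
    InternalServerError(String),
    NotFound(String),
    ServiceUnavailable(String),
    Unauthorized(String),
}

/// Decide whether a failed DeleteRegistry call is worth retrying:
/// server-side transient failures are, client errors are not.
fn is_retryable(err: &DeleteRegistryError) -> bool {
    matches!(
        err,
        DeleteRegistryError::InternalServerError(_)
            | DeleteRegistryError::ServiceUnavailable(_)
    )
}

fn main() {
    let transient = DeleteRegistryError::ServiceUnavailable("try later".into());
    assert!(is_retryable(&transient));
    assert!(!is_retryable(&DeleteRegistryError::NotFound("no such registry".into())));
}
```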
Implementations
---
### impl DeleteRegistryError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteRegistryError>

Trait Implementations
---
### impl Debug for DeleteRegistryError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for DeleteRegistryError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for DeleteRegistryError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DeleteRegistryError> for DeleteRegistryError
#### fn eq(&self, other: &DeleteRegistryError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteRegistryError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteRegistryError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteRegistryError
### impl Send for DeleteRegistryError
### impl Sync for DeleteRegistryError
### impl Unpin for DeleteRegistryError
### impl UnwindSafe for DeleteRegistryError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · #### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · #### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · #### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::DeleteResourcePolicyError
===
```
pub enum DeleteResourcePolicyError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by DeleteResourcePolicy
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
### impl DeleteResourcePolicyError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteResourcePolicyError>

Trait Implementations
---
### impl Debug for DeleteResourcePolicyError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for DeleteResourcePolicyError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for DeleteResourcePolicyError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DeleteResourcePolicyError> for DeleteResourcePolicyError
#### fn eq(&self, other: &DeleteResourcePolicyError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteResourcePolicyError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteResourcePolicyError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteResourcePolicyError
### impl Send for DeleteResourcePolicyError
### impl Sync for DeleteResourcePolicyError
### impl Unpin for DeleteResourcePolicyError
### impl UnwindSafe for DeleteResourcePolicyError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · #### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · #### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · #### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::DeleteSchemaError
===
```
pub enum DeleteSchemaError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by DeleteSchema
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
### impl DeleteSchemaError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteSchemaError>

Trait Implementations
---
### impl Debug for DeleteSchemaError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for DeleteSchemaError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for DeleteSchemaError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DeleteSchemaError> for DeleteSchemaError
#### fn eq(&self, other: &DeleteSchemaError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteSchemaError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteSchemaError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteSchemaError
### impl Send for DeleteSchemaError
### impl Sync for DeleteSchemaError
### impl Unpin for DeleteSchemaError
### impl UnwindSafe for DeleteSchemaError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · #### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · #### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · #### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::DeleteSchemaVersionError
===
```
pub enum DeleteSchemaVersionError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by DeleteSchemaVersion
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
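`from_response` constructs one of these variants from a buffered HTTP response. The sketch below is a simplified, hypothetical model of that dispatch, mapping common status codes to variants; the local enum mirror and the `dispatch` helper are illustrative, not rusoto API:

```rust
// Local mirror of rusoto_schemas::DeleteSchemaVersionError so the sketch
// compiles without the crate; the real enum has the same variants.
#[derive(Debug, PartialEq)]
enum DeleteSchemaVersionError {
    BadRequest(String),
    Forbidden(String),
    InternalServerError(String),
    NotFound(String),
    ServiceUnavailable(String),
    Unauthorized(String),
}

/// Hypothetical status-code dispatch, roughly what a `from_response`-style
/// constructor does with the service's error responses.
fn dispatch(status: u16, body: String) -> Option<DeleteSchemaVersionError> {
    use DeleteSchemaVersionError::*;
    match status {
        400 => Some(BadRequest(body)),
        401 => Some(Unauthorized(body)),
        403 => Some(Forbidden(body)),
        404 => Some(NotFound(body)),
        500 => Some(InternalServerError(body)),
        503 => Some(ServiceUnavailable(body)),
        _ => None, // unknown statuses fall through to a generic error path
    }
}

fn main() {
    let err = dispatch(404, "schema version not found".into());
    assert_eq!(
        err,
        Some(DeleteSchemaVersionError::NotFound("schema version not found".into()))
    );
}
```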
Implementations
---
### impl DeleteSchemaVersionError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteSchemaVersionError>

Trait Implementations
---
### impl Debug for DeleteSchemaVersionError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for DeleteSchemaVersionError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for DeleteSchemaVersionError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DeleteSchemaVersionError> for DeleteSchemaVersionError
#### fn eq(&self, other: &DeleteSchemaVersionError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DeleteSchemaVersionError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteSchemaVersionError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteSchemaVersionError
### impl Send for DeleteSchemaVersionError
### impl Sync for DeleteSchemaVersionError
### impl Unpin for DeleteSchemaVersionError
### impl UnwindSafe for DeleteSchemaVersionError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · #### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · #### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · #### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::DescribeCodeBindingError
===
```
pub enum DescribeCodeBindingError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
TooManyRequests(String),
Unauthorized(String),
}
```
Errors returned by DescribeCodeBinding
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `TooManyRequests(String)`
### `Unauthorized(String)`
Implementations
---
### impl DescribeCodeBindingError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeCodeBindingError>

Trait Implementations
---
### impl Debug for DescribeCodeBindingError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for DescribeCodeBindingError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for DescribeCodeBindingError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DescribeCodeBindingError> for DescribeCodeBindingError
#### fn eq(&self, other: &DescribeCodeBindingError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DescribeCodeBindingError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeCodeBindingError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeCodeBindingError
### impl Send for DescribeCodeBindingError
### impl Sync for DescribeCodeBindingError
### impl Unpin for DescribeCodeBindingError
### impl UnwindSafe for DescribeCodeBindingError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · #### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · #### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · #### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::DescribeDiscovererError
===
```
pub enum DescribeDiscovererError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by DescribeDiscoverer
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
### impl DescribeDiscovererError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeDiscovererError>

Trait Implementations
---
### impl Debug for DescribeDiscovererError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for DescribeDiscovererError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for DescribeDiscovererError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DescribeDiscovererError> for DescribeDiscovererError
#### fn eq(&self, other: &DescribeDiscovererError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DescribeDiscovererError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeDiscovererError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDiscovererError
### impl Send for DescribeDiscovererError
### impl Sync for DescribeDiscovererError
### impl Unpin for DescribeDiscovererError
### impl UnwindSafe for DescribeDiscovererError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · #### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · #### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · #### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::DescribeRegistryError
===
```
pub enum DescribeRegistryError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by DescribeRegistry
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
### impl DescribeRegistryError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeRegistryError>

Trait Implementations
---
### impl Debug for DescribeRegistryError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for DescribeRegistryError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for DescribeRegistryError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DescribeRegistryError> for DescribeRegistryError
#### fn eq(&self, other: &DescribeRegistryError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DescribeRegistryError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeRegistryError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeRegistryError
### impl Send for DescribeRegistryError
### impl Sync for DescribeRegistryError
### impl Unpin for DescribeRegistryError
### impl UnwindSafe for DescribeRegistryError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · #### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · #### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · #### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::DescribeSchemaError
===
```
pub enum DescribeSchemaError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by DescribeSchema
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
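Callers typically branch on these variants to decide how to react, e.g. treating server-side failures as retryable. The sketch below mirrors the enum locally so it compiles standalone (in real code you would use `rusoto_schemas::DescribeSchemaError`); `is_retryable` is a hypothetical helper, not part of the crate.

```rust
// Local mirror of the enum above, so this sketch is self-contained;
// the real type lives in rusoto_schemas.
#[derive(Debug, PartialEq)]
pub enum DescribeSchemaError {
    BadRequest(String),
    Forbidden(String),
    InternalServerError(String),
    NotFound(String),
    ServiceUnavailable(String),
    Unauthorized(String),
}

// Hypothetical policy: only transient server-side failures are worth retrying.
fn is_retryable(err: &DescribeSchemaError) -> bool {
    matches!(
        err,
        DescribeSchemaError::InternalServerError(_)
            | DescribeSchemaError::ServiceUnavailable(_)
    )
}

fn main() {
    let err = DescribeSchemaError::NotFound("schema missing".to_string());
    assert!(!is_retryable(&err));
    assert!(is_retryable(&DescribeSchemaError::ServiceUnavailable(
        "try again".to_string()
    )));
}
```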
Implementations
---
### impl DescribeSchemaError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeSchemaError>
Trait Implementations
---
### impl Debug for DescribeSchemaError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for DescribeSchemaError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for DescribeSchemaError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DescribeSchemaError> for DescribeSchemaError
#### fn eq(&self, other: &DescribeSchemaError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &DescribeSchemaError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeSchemaError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeSchemaError
### impl Send for DescribeSchemaError
### impl Sync for DescribeSchemaError
### impl Unpin for DescribeSchemaError
### impl UnwindSafe for DescribeSchemaError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::ExportSchemaError
===
```
pub enum ExportSchemaError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
TooManyRequests(String),
Unauthorized(String),
}
```
Errors returned by ExportSchema
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `TooManyRequests(String)`
### `Unauthorized(String)`
Implementations
---
### impl ExportSchemaError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ExportSchemaError>
Trait Implementations
---
### impl Debug for ExportSchemaError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for ExportSchemaError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for ExportSchemaError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<ExportSchemaError> for ExportSchemaError
#### fn eq(&self, other: &ExportSchemaError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &ExportSchemaError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ExportSchemaError
Auto Trait Implementations
---
### impl RefUnwindSafe for ExportSchemaError
### impl Send for ExportSchemaError
### impl Sync for ExportSchemaError
### impl Unpin for ExportSchemaError
### impl UnwindSafe for ExportSchemaError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::GetCodeBindingSourceError
===
```
pub enum GetCodeBindingSourceError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
TooManyRequests(String),
Unauthorized(String),
}
```
Errors returned by GetCodeBindingSource
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `TooManyRequests(String)`
### `Unauthorized(String)`
Implementations
---
### impl GetCodeBindingSourceError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetCodeBindingSourceError>
Trait Implementations
---
### impl Debug for GetCodeBindingSourceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for GetCodeBindingSourceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for GetCodeBindingSourceError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetCodeBindingSourceError> for GetCodeBindingSourceError
#### fn eq(&self, other: &GetCodeBindingSourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &GetCodeBindingSourceError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetCodeBindingSourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for GetCodeBindingSourceError
### impl Send for GetCodeBindingSourceError
### impl Sync for GetCodeBindingSourceError
### impl Unpin for GetCodeBindingSourceError
### impl UnwindSafe for GetCodeBindingSourceError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::GetDiscoveredSchemaError
===
```
pub enum GetDiscoveredSchemaError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by GetDiscoveredSchema
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
### impl GetDiscoveredSchemaError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetDiscoveredSchemaError>
Trait Implementations
---
### impl Debug for GetDiscoveredSchemaError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for GetDiscoveredSchemaError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for GetDiscoveredSchemaError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetDiscoveredSchemaError> for GetDiscoveredSchemaError
#### fn eq(&self, other: &GetDiscoveredSchemaError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &GetDiscoveredSchemaError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetDiscoveredSchemaError
Auto Trait Implementations
---
### impl RefUnwindSafe for GetDiscoveredSchemaError
### impl Send for GetDiscoveredSchemaError
### impl Sync for GetDiscoveredSchemaError
### impl Unpin for GetDiscoveredSchemaError
### impl UnwindSafe for GetDiscoveredSchemaError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::GetResourcePolicyError
===
```
pub enum GetResourcePolicyError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by GetResourcePolicy
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
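Because the type implements `Display` and `Error` (see the trait implementations below), it can be stringified or boxed as `Box<dyn Error>` like any other Rust error. The sketch mirrors the enum locally with a simple message-echoing `Display` impl so it compiles standalone; the crate's actual `Display` output may differ.

```rust
use std::error::Error;
use std::fmt;

// Local mirror of the enum above so the sketch is self-contained;
// the real type lives in rusoto_schemas.
#[derive(Debug)]
pub enum GetResourcePolicyError {
    BadRequest(String),
    Forbidden(String),
    InternalServerError(String),
    NotFound(String),
    ServiceUnavailable(String),
    Unauthorized(String),
}

// Simplified Display impl that just echoes the variant's message.
impl fmt::Display for GetResourcePolicyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            GetResourcePolicyError::BadRequest(m)
            | GetResourcePolicyError::Forbidden(m)
            | GetResourcePolicyError::InternalServerError(m)
            | GetResourcePolicyError::NotFound(m)
            | GetResourcePolicyError::ServiceUnavailable(m)
            | GetResourcePolicyError::Unauthorized(m) => write!(f, "{}", m),
        }
    }
}

impl Error for GetResourcePolicyError {}

fn main() {
    let err = GetResourcePolicyError::NotFound("policy not found".to_string());
    // Error + Display lets the value be boxed and stringified generically.
    let boxed: Box<dyn Error> = Box::new(err);
    assert_eq!(boxed.to_string(), "policy not found");
    assert!(boxed.source().is_none()); // these leaf errors carry no lower-level source
}
```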
Implementations
---
### impl GetResourcePolicyError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetResourcePolicyError>
Trait Implementations
---
### impl Debug for GetResourcePolicyError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for GetResourcePolicyError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for GetResourcePolicyError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetResourcePolicyError> for GetResourcePolicyError
#### fn eq(&self, other: &GetResourcePolicyError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &GetResourcePolicyError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetResourcePolicyError
Auto Trait Implementations
---
### impl RefUnwindSafe for GetResourcePolicyError
### impl Send for GetResourcePolicyError
### impl Sync for GetResourcePolicyError
### impl Unpin for GetResourcePolicyError
### impl UnwindSafe for GetResourcePolicyError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::ListDiscoverersError
===
```
pub enum ListDiscoverersError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by ListDiscoverers
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
### impl ListDiscoverersError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListDiscoverersError>
Trait Implementations
---
### impl Debug for ListDiscoverersError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for ListDiscoverersError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for ListDiscoverersError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<ListDiscoverersError> for ListDiscoverersError
#### fn eq(&self, other: &ListDiscoverersError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &ListDiscoverersError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListDiscoverersError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListDiscoverersError
### impl Send for ListDiscoverersError
### impl Sync for ListDiscoverersError
### impl Unpin for ListDiscoverersError
### impl UnwindSafe for ListDiscoverersError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::ListRegistriesError
===
```
pub enum ListRegistriesError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by ListRegistries
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
### impl ListRegistriesError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListRegistriesError>
Trait Implementations
---
### impl Debug for ListRegistriesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for ListRegistriesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for ListRegistriesError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<ListRegistriesError> for ListRegistriesError
#### fn eq(&self, other: &ListRegistriesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
#### fn ne(&self, other: &ListRegistriesError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListRegistriesError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListRegistriesError
### impl Send for ListRegistriesError
### impl Sync for ListRegistriesError
### impl Unpin for ListRegistriesError
### impl UnwindSafe for ListRegistriesError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
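The blanket `TryFrom<U> for T where U: Into<T>` impl above means any infallible `Into` conversion is also usable through the fallible `try_into` interface, with `Error = Infallible`. A minimal self-contained sketch using standard-library types (nothing here is from this crate):

```rust
use std::convert::TryInto;

fn main() {
    // u32: From<u8> exists, so the blanket impl provides
    // TryFrom<u8> for u32 with Error = Infallible.
    let small: u8 = 7;
    let wide: u32 = small.try_into().expect("infallible conversion");
    assert_eq!(wide, 7u32);
}
```

Because the error type is `Infallible`, the `expect` can never fire; it only satisfies the fallible signature.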
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::ListSchemaVersionsError
===
```
pub enum ListSchemaVersionsError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by ListSchemaVersions
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
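Callers typically receive these variants wrapped in a `RusotoError` and match on them, for example to separate transient failures from permanent ones. Since this sketch must stand alone, it uses a hand-written stand-in enum with the same shape rather than the real crate type:

```rust
// Stand-in mirroring the documented enum shape (illustrative only;
// the real type lives in rusoto_schemas).
#[derive(Debug, PartialEq)]
enum ListSchemaVersionsError {
    BadRequest(String),
    Forbidden(String),
    InternalServerError(String),
    NotFound(String),
    ServiceUnavailable(String),
    Unauthorized(String),
}

// 5xx-style variants are transient and worth retrying;
// the 4xx-style variants are not.
fn is_retryable(err: &ListSchemaVersionsError) -> bool {
    matches!(
        err,
        ListSchemaVersionsError::InternalServerError(_)
            | ListSchemaVersionsError::ServiceUnavailable(_)
    )
}

fn main() {
    let transient = ListSchemaVersionsError::ServiceUnavailable("try later".into());
    let fatal = ListSchemaVersionsError::NotFound("no such schema".into());
    assert!(is_retryable(&transient));
    assert!(!is_retryable(&fatal));
}
```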
Implementations
---
source### impl ListSchemaVersionsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListSchemaVersionsError>
Trait Implementations
---
source### impl Debug for ListSchemaVersionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListSchemaVersionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListSchemaVersionsError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListSchemaVersionsError> for ListSchemaVersionsError
source#### fn eq(&self, other: &ListSchemaVersionsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListSchemaVersionsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListSchemaVersionsError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListSchemaVersionsError
### impl Send for ListSchemaVersionsError
### impl Sync for ListSchemaVersionsError
### impl Unpin for ListSchemaVersionsError
### impl UnwindSafe for ListSchemaVersionsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::ListSchemasError
===
```
pub enum ListSchemasError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by ListSchemas
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
source### impl ListSchemasError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListSchemasError>
Trait Implementations
---
source### impl Debug for ListSchemasError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListSchemasError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListSchemasError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListSchemasError> for ListSchemasError
source#### fn eq(&self, other: &ListSchemasError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListSchemasError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListSchemasError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListSchemasError
### impl Send for ListSchemasError
### impl Sync for ListSchemasError
### impl Unpin for ListSchemasError
### impl UnwindSafe for ListSchemasError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::ListTagsForResourceError
===
```
pub enum ListTagsForResourceError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
}
```
Errors returned by ListTagsForResource
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
Implementations
---
source### impl ListTagsForResourceError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListTagsForResourceError>
Trait Implementations
---
source### impl Debug for ListTagsForResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListTagsForResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListTagsForResourceError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListTagsForResourceError> for ListTagsForResourceError
source#### fn eq(&self, other: &ListTagsForResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListTagsForResourceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListTagsForResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListTagsForResourceError
### impl Send for ListTagsForResourceError
### impl Sync for ListTagsForResourceError
### impl Unpin for ListTagsForResourceError
### impl UnwindSafe for ListTagsForResourceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::PutCodeBindingError
===
```
pub enum PutCodeBindingError {
BadRequest(String),
Forbidden(String),
Gone(String),
InternalServerError(String),
NotFound(String),
TooManyRequests(String),
Unauthorized(String),
}
```
Errors returned by PutCodeBinding
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `Gone(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `TooManyRequests(String)`
### `Unauthorized(String)`
Implementations
---
source### impl PutCodeBindingError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<PutCodeBindingError>
Trait Implementations
---
source### impl Debug for PutCodeBindingError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for PutCodeBindingError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for PutCodeBindingError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<PutCodeBindingError> for PutCodeBindingError
source#### fn eq(&self, other: &PutCodeBindingError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PutCodeBindingError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for PutCodeBindingError
Auto Trait Implementations
---
### impl RefUnwindSafe for PutCodeBindingError
### impl Send for PutCodeBindingError
### impl Sync for PutCodeBindingError
### impl Unpin for PutCodeBindingError
### impl UnwindSafe for PutCodeBindingError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::PutResourcePolicyError
===
```
pub enum PutResourcePolicyError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
PreconditionFailed(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by PutResourcePolicy
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `PreconditionFailed(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
source### impl PutResourcePolicyError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<PutResourcePolicyError>
Trait Implementations
---
source### impl Debug for PutResourcePolicyError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for PutResourcePolicyError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for PutResourcePolicyError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<PutResourcePolicyError> for PutResourcePolicyError
source#### fn eq(&self, other: &PutResourcePolicyError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PutResourcePolicyError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for PutResourcePolicyError
Auto Trait Implementations
---
### impl RefUnwindSafe for PutResourcePolicyError
### impl Send for PutResourcePolicyError
### impl Sync for PutResourcePolicyError
### impl Unpin for PutResourcePolicyError
### impl UnwindSafe for PutResourcePolicyError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::SearchSchemasError
===
```
pub enum SearchSchemasError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by SearchSchemas
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
source### impl SearchSchemasError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<SearchSchemasError>
Trait Implementations
---
source### impl Debug for SearchSchemasError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for SearchSchemasError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for SearchSchemasError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<SearchSchemasError> for SearchSchemasError
source#### fn eq(&self, other: &SearchSchemasError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &SearchSchemasError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for SearchSchemasError
Auto Trait Implementations
---
### impl RefUnwindSafe for SearchSchemasError
### impl Send for SearchSchemasError
### impl Sync for SearchSchemasError
### impl Unpin for SearchSchemasError
### impl UnwindSafe for SearchSchemasError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::StartDiscovererError
===
```
pub enum StartDiscovererError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by StartDiscoverer
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
source### impl StartDiscovererError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<StartDiscovererError>
Trait Implementations
---
source### impl Debug for StartDiscovererError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for StartDiscovererError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for StartDiscovererError
1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · #### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · #### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<StartDiscovererError> for StartDiscovererError
source#### fn eq(&self, other: &StartDiscovererError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &StartDiscovererError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for StartDiscovererError
Auto Trait Implementations
---
### impl RefUnwindSafe for StartDiscovererError
### impl Send for StartDiscovererError
### impl Sync for StartDiscovererError
### impl Unpin for StartDiscovererError
### impl UnwindSafe for StartDiscovererError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · #### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_schemas::StopDiscovererError
===
```
pub enum StopDiscovererError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by StopDiscoverer
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
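For illustration, here is a standalone sketch of how a caller might branch on these variants. The enum is redeclared locally so the snippet compiles without the `rusoto_schemas` crate, and the retry policy (`is_retryable`) is an invented example, not part of the library:

```rust
// Standalone sketch: the variant set mirrors rusoto_schemas::StopDiscovererError,
// but this local enum is redeclared here purely for illustration.
#[derive(Debug, PartialEq)]
enum StopDiscovererError {
    BadRequest(String),
    Forbidden(String),
    InternalServerError(String),
    NotFound(String),
    ServiceUnavailable(String),
    Unauthorized(String),
}

// Example policy: treat only transient, server-side conditions as retryable.
fn is_retryable(err: &StopDiscovererError) -> bool {
    matches!(
        err,
        StopDiscovererError::InternalServerError(_)
            | StopDiscovererError::ServiceUnavailable(_)
    )
}

fn main() {
    let transient = StopDiscovererError::ServiceUnavailable("try later".to_string());
    let permanent = StopDiscovererError::NotFound("no such discoverer".to_string());
    assert!(is_retryable(&transient));
    assert!(!is_retryable(&permanent));
    println!("retry policy checks passed");
}
```

In real code the error would arrive wrapped as `RusotoError<StopDiscovererError>` from the service call rather than being constructed directly.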
Implementations
---
source### impl StopDiscovererError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<StopDiscovererError>
Trait Implementations
---
source### impl Debug for StopDiscovererError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for StopDiscovererError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for StopDiscovererError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<StopDiscovererError> for StopDiscovererError
source#### fn eq(&self, other: &StopDiscovererError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &StopDiscovererError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for StopDiscovererError
Auto Trait Implementations
---
### impl RefUnwindSafe for StopDiscovererError
### impl Send for StopDiscovererError
### impl Sync for StopDiscovererError
### impl Unpin for StopDiscovererError
### impl UnwindSafe for StopDiscovererError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_schemas::TagResourceError
===
```
pub enum TagResourceError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
}
```
Errors returned by TagResource
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
Implementations
---
source### impl TagResourceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<TagResourceError>
Trait Implementations
---
source### impl Debug for TagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for TagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for TagResourceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<TagResourceError> for TagResourceError
source#### fn eq(&self, other: &TagResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TagResourceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for TagResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for TagResourceError
### impl Send for TagResourceError
### impl Sync for TagResourceError
### impl Unpin for TagResourceError
### impl UnwindSafe for TagResourceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_schemas::UntagResourceError
===
```
pub enum UntagResourceError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
}
```
Errors returned by UntagResource
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
Implementations
---
source### impl UntagResourceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UntagResourceError>
Trait Implementations
---
source### impl Debug for UntagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UntagResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UntagResourceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UntagResourceError> for UntagResourceError
source#### fn eq(&self, other: &UntagResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UntagResourceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UntagResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for UntagResourceError
### impl Send for UntagResourceError
### impl Sync for UntagResourceError
### impl Unpin for UntagResourceError
### impl UnwindSafe for UntagResourceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_schemas::UpdateDiscovererError
===
```
pub enum UpdateDiscovererError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by UpdateDiscoverer
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
source### impl UpdateDiscovererError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateDiscovererError>
Trait Implementations
---
source### impl Debug for UpdateDiscovererError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateDiscovererError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateDiscovererError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateDiscovererError> for UpdateDiscovererError
source#### fn eq(&self, other: &UpdateDiscovererError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateDiscovererError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateDiscovererError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateDiscovererError
### impl Send for UpdateDiscovererError
### impl Sync for UpdateDiscovererError
### impl Unpin for UpdateDiscovererError
### impl UnwindSafe for UpdateDiscovererError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_schemas::UpdateRegistryError
===
```
pub enum UpdateRegistryError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
Unauthorized(String),
}
```
Errors returned by UpdateRegistry
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
### `Unauthorized(String)`
Implementations
---
source### impl UpdateRegistryError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateRegistryError>
Trait Implementations
---
source### impl Debug for UpdateRegistryError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateRegistryError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateRegistryError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateRegistryError> for UpdateRegistryError
source#### fn eq(&self, other: &UpdateRegistryError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateRegistryError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateRegistryError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateRegistryError
### impl Send for UpdateRegistryError
### impl Sync for UpdateRegistryError
### impl Unpin for UpdateRegistryError
### impl UnwindSafe for UpdateRegistryError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_schemas::UpdateSchemaError
===
```
pub enum UpdateSchemaError {
BadRequest(String),
Forbidden(String),
InternalServerError(String),
NotFound(String),
ServiceUnavailable(String),
}
```
Errors returned by UpdateSchema
Variants
---
### `BadRequest(String)`
### `Forbidden(String)`
### `InternalServerError(String)`
### `NotFound(String)`
### `ServiceUnavailable(String)`
Implementations
---
source### impl UpdateSchemaError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UpdateSchemaError>
Trait Implementations
---
source### impl Debug for UpdateSchemaError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for UpdateSchemaError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for UpdateSchemaError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<UpdateSchemaError> for UpdateSchemaError
source#### fn eq(&self, other: &UpdateSchemaError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpdateSchemaError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpdateSchemaError
Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateSchemaError
### impl Send for UpdateSchemaError
### impl Sync for UpdateSchemaError
### impl Unpin for UpdateSchemaError
### impl UnwindSafe for UpdateSchemaError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
transform | cran | R | Package ‘transform.hazards’
June 6, 2023
Type Package
Title Transforms Cumulative Hazards to Parameter Specified by ODE
System
Version 0.1.1
Description Targets parameters that solve Ordinary Differential
Equations (ODEs) driven by a vector of cumulative hazard functions.
The package provides a method for estimating these parameters using
an estimator defined by a corresponding Stochastic Differential Equation
(SDE) system driven by cumulative hazard estimates. By providing cumulative
hazard estimates as input, the package gives estimates of the parameter as
output, along with pointwise (co)variances derived from an asymptotic
expression. Examples of parameters that can be targeted in this way include
the survival function, the restricted mean survival function, cumulative
incidence functions, among others; see Ryalen, Stensrud, and Røysland (2018)
<doi:10.1093/biomet/asy035>, and further applications in
Stensrud, Røysland, and Ryalen (2019) <doi:10.1111/biom.13102>
and Ryalen et al. (2021) <doi:10.1093/biostatistics/kxab009>.
License GPL (>= 3)
Suggests timereg, knitr, rmarkdown
RoxygenNote 7.2.1
Encoding UTF-8
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre]
(<https://orcid.org/0000-0002-3236-6782>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-06-06 12:10:02 UTC
R topics documented:
pluginEstimate
pluginEstimate SDE plugin estimator solver
Description
Calculates recursive estimator for given hazard estimates, integrand function and gradients.
Usage
pluginEstimate(n, hazMatrix, F_fun, JacobianList, X0, V0, isLebesgue = NULL)
Arguments
n Total number of individuals
hazMatrix Matrix consisting of hazards (rows) and their increments (columns) along the
same time scale
F_fun Integrand function F = (F_1,F_2,...) for the differential equation system
JacobianList The Jacobian matrices of F_1, F_2, ... organized in a list
X0 Matrix containing initial values for the parameter
V0 Matrix containing initial values for the variance
isLebesgue (Optional, to improve efficiency) Provide the index of A (e.g. 2) that is a regular
dt integral, i.e. not a cumulative hazard.
Value
A list containing the parameter estimate X and its covariance estimate V/n
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., <NAME>., R<NAME>.: Transforming cumulative hazards, arXiv, to appear in
Biometrika 2018.
Examples
###############################################################
######################### Survival ##########################
###############################################################
library(timereg)
n <- 100
dfr <- data.frame(to = rexp(n,1),from=0,event=1)
fit <- survfit(Surv(from,to,event==1)~1,data=dfr)
times <- fit$time
dN <- fit$n.event
Y <- fit$n.risk
Y[1] <- n
dA <- matrix(dN/Y,nrow=1,ncol=length(dN))
# Function specification
F_fun_Survival <- function(x)-matrix(x,1,1)
JacobianListSurvival <- list(function(x)-matrix(1,1,1))
X0_Survival <- matrix(1,1,1)
V0_Survival <- matrix(0,1,1)
paramEst_survival <- pluginEstimate(
n,dA,F_fun_Survival,JacobianListSurvival,X0_Survival,V0_Survival
)
KM <- cumprod(1 - dA)
Greenwood <- KM^2 * cumsum(dA^2)
plot(
times,paramEst_survival$X,type="s",main="SDE plugin survival estimates",
ylab="",xlab="time"
)
lines(times,paramEst_survival$X + 1.96*sqrt(paramEst_survival$covariance[1,1,]),type="s")
lines(times,paramEst_survival$X - 1.96*sqrt(paramEst_survival$covariance[1,1,]),type="s")
lines(seq(0,10,length.out=100),exp(-seq(0,10,length.out=100)),col=2)
legend("topright",c("SDE plugin estimates","Exact"),lty=1,col=c(1,2),bty="n")
plot(
times,paramEst_survival$covariance,type="s",
main="SDE plugin variance vs Greenwood variance",ylab="",xlab="time"
)
lines(times,Greenwood,type="s",col=4,lty=1)
legend("topright",c("SDE plugin estimates","Greenwood"),lty=1,col=c(1,4),bty="n")
#################################################################################
############ Competing risks and Cumulative incidence(two states) ###############
#################################################################################
n <- 200
x1 <- rexp(n,1)
x2 <- rexp(n,1.3)
to.states <- ifelse(x1<x2,0,1)
dfr <- data.frame(from=0,to=c(x1[to.states==0],x2[to.states==1]),to.state=to.states)
dfr <- dfr[order(dfr$to),]
nrisk <- c(n,n:1)
dA1 <- c(0,1*(dfr$to.state==0))/nrisk
dA2 <- c(0,1*(dfr$to.state==1))/nrisk
hazMatrix <- rbind(dA1,dA2)
F_fun_cuminc <- function(X)rbind(c(X[2],0),c(-X[2],-X[2]))
JacobianList_cuminc <- list( function(X)matrix(c(0,0,1,-1),nrow=2),
function(X)matrix(c(0,0,0,-1),nrow=2) )
X0_cuminc <- matrix(c(0,1),nrow=2,ncol=1)
V0_cuminc <- matrix(0,nrow=2,ncol=2)
paramEst_cuminc <- pluginEstimate(n,hazMatrix,F_fun_cuminc,JacobianList_cuminc,X0_cuminc,V0_cuminc)
times <- c(0,dfr$to)
plot(
times,paramEst_cuminc$X[1,],type="s",ylab="",xlab="time",
main="SDE plugin cumulative incidence estimate",ylim=c(0,0.7)
)
lines(times,paramEst_cuminc$X[1,] + 1.96*sqrt(paramEst_cuminc$covariance[1,1,]),type="s")
lines(times,paramEst_cuminc$X[1,] - 1.96*sqrt(paramEst_cuminc$covariance[1,1,]),type="s")
lines(seq(0,10,length.out = 1000),10/1000*cumsum(exp(-seq(0,10,length.out = 1000)*(1 + 1.3))),col=2)
legend("topright",c("SDE plugin estimates","Exact"),lty=1,col=c(1,2),bty="n")
##########################################################################################
#################### Relative survival(two different populations) ######################
##########################################################################################
n <- 300
t1 <- sort(rexp(n,1))
t2 <- sort(rexp(n,1.3))
times <- sort(c(0,t1,t2))
nrisk <- 300:1
dA1 <- 1*(t1 %in% times)/nrisk
dA2 <- 1*(t2 %in% times)/nrisk
tmatch1 <- match(t1,times)
tmatch2 <- match(t2,times)
hazMatrix <- matrix(0,nrow=2,ncol=length(times))
hazMatrix[1,tmatch1] <- dA1
hazMatrix[2,tmatch2] <- dA2
F_fun_RelSurv <- function(X)matrix(c(-X,X),ncol=2)
JacobianList_RelSurv <- list(function(X)matrix(-1,nrow=1,ncol=1),
function(X)matrix(1,nrow=1,ncol=1))
X0_RelSurv <- matrix(1,nrow=1,ncol=1)
V0_RelSurv <- matrix(0,nrow=1,ncol=1)
paramEst_relsurv <- pluginEstimate(
n,hazMatrix,F_fun_RelSurv,JacobianList_RelSurv,X0_RelSurv,V0_RelSurv
)
plot(
times,paramEst_relsurv$X[1,],type="s",ylab="",xlab="time",
main="SDE plugin relative survival estimate",ylim=c(-1,5.8),xlim=c(0,4)
)
lines(times,paramEst_relsurv$X[1,] + 1.96*sqrt(paramEst_relsurv$covariance[1,,]),type="s")
lines(times,paramEst_relsurv$X[1,] - 1.96*sqrt(paramEst_relsurv$covariance[1,,]),type="s")
lines(seq(0,10,length.out = 100),exp(seq(0,10,length.out = 100)*(1.3-1)),col=2)
legend("topleft",c("SDE plugin estimates","Exact"),lty=1,col=c(1,2),bty="n") |
poolex | hex | Erlang |
Poolex
===

[](https://hex.pm/packages/poolex)
[](https://hexdocs.pm/poolex/)
[](https://github.com/general-CbIC/poolex/blob/main/LICENSE)
[](https://hex.pm/packages/poolex)
Poolex is a library for managing pools of workers. Inspired by [poolboy](https://github.com/devinus/poolboy).
[Features](#features)
---
With `poolex` you can:
* Launch multiple pools of workers and then access the free ones from anywhere in the application.
* Configure the pool to run additional temporary workers if the load increases.
* Use your own implementations to define access logic for worker and caller processes.
Why `poolex` instead of `poolboy`?
- `poolex` is written in Elixir. This library is much more convenient to use in Elixir projects.
- `poolboy` is a great library, but not actively maintained :crying_cat_face:
[Requirements](#requirements)
---
| Requirement | Version |
| --- | --- |
| Erlang/OTP | >= 22 |
| Elixir | >= 1.7 |
[Table of Contents](#table-of-contents)
---
* [Installation](#installation)
* [Getting Started](https://hexdocs.pm/poolex/getting-started.html)
+ [Starting pool of workers](https://hexdocs.pm/poolex/getting-started.html#starting-pool-of-workers)
+ [Poolex configuration options](https://hexdocs.pm/poolex/getting-started.html#starting-pool-of-workers)
+ [Working with the pool](https://hexdocs.pm/poolex/getting-started.html#working-with-the-pool)
* [Migration from `:poolboy`](https://hexdocs.pm/poolex/migration-from-poolboy.html)
* [Example of use](https://hexdocs.pm/poolex/example-of-use.html)
+ [Defining the worker](https://hexdocs.pm/poolex/example-of-use.html#defining-the-worker)
+ [Configuring Poolex](https://hexdocs.pm/poolex/example-of-use.html#configuring-poolex)
+ [Using Poolex](https://hexdocs.pm/poolex/example-of-use.html#using-poolex)
* [Workers and callers implementations](https://hexdocs.pm/poolex/workers-and-callers-implementations.html)
+ [Callers](https://hexdocs.pm/poolex/workers-and-callers-implementations.html#callers)
+ [Workers](https://hexdocs.pm/poolex/workers-and-callers-implementations.html#workers)
+ [Writing custom implementations](https://hexdocs.pm/poolex/workers-and-callers-implementations.html#writing-custom-implementations)
* [Contributions](#contributions)
[Installation](#installation)
---
Add `:poolex` to your list of dependencies in `mix.exs`:
```
def deps do
[
{:poolex, "~> 0.7.0"}
]
end
```
[Usage](#usage)
---
In the most typical use of Poolex, you only need to start a pool of workers as a child of your application.
```
children = [
{Poolex,
pool_id: :worker_pool,
worker_module: SomeWorker,
workers_count: 5}
]
Supervisor.start_link(children, strategy: :one_for_one)
```
Then you can execute any code on the workers with `run/3`:
```
iex> Poolex.run(:worker_pool, &is_pid/1, checkout_timeout: 1_000)
{:ok, true}
```
A detailed description of the available configuration or examples of use can be found in [documentation](https://hexdocs.pm/poolex/getting-started.html).
[Contributions](#contributions)
---
If you feel something can be improved or have any questions about specific behaviors or pieces of implementation, please feel free to file an issue. Proposed changes should be taken to issues before any PRs to save time on code that might not be merged upstream.
If you are ready to change the project, please read the [Contributing guide](contributing.html) first.
[Contributing to Poolex](contributing.html)
[Next Page →
Workers and callers implementations](workers-and-callers-implementations.html)
Poolex
===
[Usage](#module-usage)
---
In the most typical use of Poolex, you only need to start a pool of workers as a child of your application.
```
children = [
{Poolex,
pool_id: :worker_pool,
worker_module: SomeWorker,
workers_count: 5}
]
Supervisor.start_link(children, strategy: :one_for_one)
```
Then you can execute any code on the workers with [`run/3`](#run/3):
```
Poolex.run(:worker_pool, &is_pid/1, checkout_timeout: 1_000)
{:ok, true}
```
For more information, see [Getting Started](https://hexdocs.pm/poolex/getting-started.html).
[Summary](#summary)
===
[Types](#types)
---
[pool_id()](#t:pool_id/0)
Any atom naming your pool, e.g. `:my_pool`.
[poolex_option()](#t:poolex_option/0)
| Option | Description | Example | Default value |
| --- | --- | --- | --- |
| `pool_id` | Identifier by which you will access the pool | `:my_pool` | **option is required** |
| `worker_module` | Name of module that implements our worker | `MyApp.Worker` | **option is required** |
| `workers_count` | How many workers should be running in the pool | `5` | **option is required** |
| `max_overflow` | How many workers can be created over the limit | `2` | `0` |
| `worker_args` | List of arguments passed to the start function | `[:gg, "wp"]` | `[]` |
| `worker_start_fun` | Name of the function that starts the worker | `:run` | `:start_link` |
| `busy_workers_impl` | Module that describes how to work with busy workers | `SomeBusyWorkersImpl` | [`Poolex.Workers.Impl.List`](Poolex.Workers.Impl.List.html) |
| `idle_workers_impl` | Module that describes how to work with idle workers | `SomeIdleWorkersImpl` | [`Poolex.Workers.Impl.List`](Poolex.Workers.Impl.List.html) |
| `waiting_callers_impl` | Module that describes how to work with callers queue | `WaitingCallersImpl` | [`Poolex.Callers.Impl.ErlangQueue`](Poolex.Callers.Impl.ErlangQueue.html) |
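Taken together, a child spec using these options might look like the sketch below (`:my_pool` and `MyApp.Worker` are placeholder names; the non-required values shown are the defaults from the table above):

```elixir
children = [
  {Poolex,
   pool_id: :my_pool,                    # required
   worker_module: MyApp.Worker,          # required (placeholder module)
   workers_count: 5,                     # required
   max_overflow: 2,                      # allow 2 temporary workers over the limit
   worker_args: [:gg, "wp"],             # passed to the start function
   worker_start_fun: :start_link,
   busy_workers_impl: Poolex.Workers.Impl.List,
   idle_workers_impl: Poolex.Workers.Impl.List,
   waiting_callers_impl: Poolex.Callers.Impl.ErlangQueue}
]
```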
[run_option()](#t:run_option/0)
| Option | Description | Example | Default value |
| --- | --- | --- | --- |
| checkout_timeout | How long we can wait for a worker on the call site | `60_000` | `5000` |
[worker()](#t:worker/0)
Process id of `worker`.
[Functions](#functions)
---
[child_spec(init_arg)](#child_spec/1)
Returns a specification to start this module under a supervisor.
[get_debug_info(pool_id)](#get_debug_info/1)
Returns detailed information about started pool.
[get_state(pool_id)](#get_state/1)
Returns current state of started pool.
[run(pool_id, fun, options \\ [])](#run/3)
The main function for working with the pool.
[start(opts)](#start/1)
Starts a Poolex process without links (outside of a supervision tree).
[start_link(opts)](#start_link/1)
Starts a Poolex process linked to the current process.
[Types](#types)
===
[Functions](#functions)
===
Poolex.Caller
===
Caller structure.
**Callers** are processes that have requested to get a worker.
[Summary](#summary)
===
[Types](#types)
---
[t()](#t:t/0)
[Types](#types)
===
Poolex.Callers.Behaviour behaviour
===
Behaviour for callers collection implementations.
`caller` is a process that uses the [`Poolex.run/3`](Poolex.html#run/3) function and waits for the execution result.
**Note that the caller's typespec matches `GenServer.from()`**
[Summary](#summary)
===
[Types](#types)
---
[state()](#t:state/0)
[Callbacks](#callbacks)
---
[add(state, t)](#c:add/2)
Adds caller to `state` and returns new state.
[empty?(state)](#c:empty?/1)
Returns `true` if the `state` is empty, `false` otherwise.
[init()](#c:init/0)
Returns `state` (any data structure) which will be passed as the first argument to all other functions.
[pop(state)](#c:pop/1)
Removes one caller from `state` and returns it as `{caller, state}`. Returns `:empty` if the state is empty.
[remove_by_pid(state, caller_pid)](#c:remove_by_pid/2)
Removes caller by pid from `state` and returns new state.
[remove_by_reference(state, reference)](#c:remove_by_reference/2)
Removes caller by reference from `state` and returns new state.
[to_list(state)](#c:to_list/1)
Returns list of callers.
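As an illustration of these callbacks (this module is a sketch, not part of Poolex), a FIFO collection over Erlang's `:queue` could look like the following. A caller matches `GenServer.from()`, i.e. a `{pid, ref}` tuple:

```elixir
# Illustrative FIFO callers collection; function shapes follow the
# behaviour callbacks above.
defmodule MyFifoCallers do
  def init, do: :queue.new()

  def add(state, caller), do: :queue.in(caller, state)

  def empty?(state), do: :queue.is_empty(state)

  def pop(state) do
    case :queue.out(state) do
      {{:value, caller}, rest} -> {caller, rest}
      {:empty, _} -> :empty
    end
  end

  # A caller is a {pid, ref} tuple, so removal by pid filters on the first element.
  def remove_by_pid(state, caller_pid) do
    :queue.filter(fn {pid, _ref} -> pid != caller_pid end, state)
  end

  def to_list(state), do: :queue.to_list(state)
end
```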
[Types](#types)
===
[Callbacks](#callbacks)
===
Poolex.Callers.Impl.ErlangQueue
===
Callers queue (FIFO) implementation based on Erlang's `:queue`.
Poolex.Private.DebugInfo
===
Information with the current state of the pool.
Can be used for debugging.
[Summary](#summary)
===
[Types](#types)
---
[t()](#t:t/0)
[Types](#types)
===
Poolex.Private.State
===
Internal structure containing the state of the pool.
Can be used for debugging.
[Summary](#summary)
===
[Types](#types)
---
[t()](#t:t/0)
[Types](#types)
===
Poolex.Workers.Behaviour behaviour
===
Behaviour for worker collection implementations.
[Summary](#summary)
===
[Types](#types)
---
[state()](#t:state/0)
[Callbacks](#callbacks)
---
[add(state, worker)](#c:add/2)
Adds worker's pid to `state` and returns new state.
[count(state)](#c:count/1)
Returns the number of workers in the state.
[empty?(state)](#c:empty?/1)
Returns `true` if the `state` is empty, `false` otherwise.
[init()](#c:init/0)
Returns `state` (any data structure) which will be passed as the first argument to all other functions.
[init(list)](#c:init/1)
Same as `init/0` but returns `state` initialized with passed list of workers.
[member?(state, worker)](#c:member?/2)
Returns `true` if given worker contained in the `state`, `false` otherwise.
[pop(state)](#c:pop/1)
Removes one worker from `state` and returns it as `{worker, state}`. Returns `:empty` if the state is empty.
[remove(state, worker)](#c:remove/2)
Removes given worker from `state` and returns new state.
[to_list(state)](#c:to_list/1)
Returns list of workers pids.
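To make the callback list concrete, here is an illustrative LIFO implementation backed by a plain list, roughly in the spirit of `Poolex.Workers.Impl.List` (this exact module is not from the docs):

```elixir
# Illustrative LIFO workers collection; function shapes follow the
# behaviour callbacks above. State is a plain list of worker pids.
defmodule MyLifoWorkers do
  def init, do: []
  def init(workers) when is_list(workers), do: workers

  # LIFO: newly added workers go to the front and are popped first.
  def add(state, worker), do: [worker | state]

  def count(state), do: length(state)
  def empty?(state), do: state == []
  def member?(state, worker), do: worker in state

  def pop([]), do: :empty
  def pop([worker | rest]), do: {worker, rest}

  def remove(state, worker), do: List.delete(state, worker)
  def to_list(state), do: state
end
```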
[Types](#types)
===
[Callbacks](#callbacks)
===
Poolex.Workers.Impl.ErlangQueue
===
Simple workers queue (FIFO) implementation based on Erlang's `:queue`.
Poolex.Workers.Impl.List
===
Simple workers stack (LIFO) implementation based on List.
Contributing to Poolex
===
Poolex is written in [Elixir](https://elixir-lang.org/).
For branching management, this project uses [git-flow](https://github.com/petervanderdoes/gitflow-avh). The `main` branch is reserved for releases: the development process occurs on the `develop` and `feature` branches. Please never commit to `main`.
You can use [asdf](https://asdf-vm.com/) to set up the required Elixir and OTP. Current versions are listed in the `.tool-versions` file.
[Setup](#setup)
---
###
[Local repository](#local-repository)
1. Fork the repository.
2. Clone your fork to a local repository:
```
git clone https://github.com/your-login/poolex.git
cd poolex
```
3. Checkout `develop`:
```
git checkout develop
```
###
[Development environment (using asdf)](#development-environment-using-asdf)
1. Install asdf following the [Getting Started guide](https://asdf-vm.com/guide/getting-started.html)
2. Add plugins for Elixir and Erlang
```
asdf plugin add elixir https://github.com/asdf-vm/asdf-elixir.git
asdf plugin add erlang https://github.com/asdf-vm/asdf-erlang.git
```
3. Install tools:
```
cd poolex
asdf install
```
###
[Development environment (without asdf)](#development-environment-without-asdf)
Please see [installation instructions](https://elixir-lang.org/install.html).
###
[Git-flow](#git-flow)
If you want to use the `git-flow` CLI, please check [installation instructions](https://github.com/petervanderdoes/gitflow-avh/wiki/Installation).
###
[Building the project](#building-the-project)
1. Fetch the project dependencies:
```
cd poolex
mix deps.get
```
2. Run the static analyzers:
```
mix check
```
[Workflow](#workflow)
---
To make a change, please use this workflow:
1. Checkout `develop` and apply the last upstream changes (use rebase, not merge!):
```
git checkout develop
git fetch --all --prune
git rebase upstream/develop
```
2. For a tiny patch, create a new branch with an explicit name:
```
git checkout -b <my_branch>
```
Alternatively, if you are working on a feature that would need more work, you can create a feature branch with `git-flow`:
```
git flow feature start <my_feature>
```
*Note: always open an issue and ask before starting a big feature, to avoid spending time on work that might not be merged.*
3. When your feature is ready, feel free to use [interactive rebase](https://help.github.com/articles/about-git-rebase/) so your history looks clean and easy to follow. Then, apply the last upstream changes on `develop` to prepare integration:
```
git checkout develop
git fetch --all --prune
git rebase upstream/develop
```
4. If there were commits on `develop` since the beginning of your feature branch, integrate them by **rebasing** if your branch has few commits, or merging if you had a long-lived branch:
```
git checkout <my_feature_branch>
git rebase develop
```
*Note: the only case you should merge is when you are working on a big feature. If it is the case, we should have discussed this before as stated above.*
5. Run the tests and static analyzers to ensure there is no regression and all works as expected:
```
mix check
```
6. If it’s all good, open a pull request to merge your branch into the `develop` branch on the main repository.
[Coding style](#coding-style)
---
Please format your code with [`mix format`](https://hexdocs.pm/mix/Mix.Tasks.Format.html) or your editor and follow
[this style guide](https://github.com/christopheradams/elixir_style_guide).
All contributed code must be documented and functions must have typespecs. In general, take your inspiration from the existing code.
[API Reference](api-reference.html)
[Next Page →
Poolex](readme.html) |
@types/word-extractor | npm | JavaScript | [Installation](#installation)
===
> `npm install --save @types/word-extractor`
[Summary](#summary)
===
This package contains type definitions for word-extractor (<https://github.com/morungos/node-word-extractor>).
[Details](#details)
===
Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/word-extractor>.
[index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/word-extractor/index.d.ts)
---
```
declare class WordExtractor {
extract(documentPath: string | Uint8Array): Promise<WordExtractor.Document>;
}
export = WordExtractor;
declare namespace WordExtractor {
class Document {
getBody(): string;
getFootnotes(): string;
getHeaders(options?: { includeFooters?: boolean | undefined }): string;
getFooters(): string;
getAnnotations(): string;
getTextboxes(
options?: { includeHeadersAndFooters?: boolean | undefined; includeBody?: boolean | undefined },
): string;
getEndnotes(): string;
}
}
```
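A minimal usage sketch based on these declarations (the file name is illustrative, and the snippet assumes the `word-extractor` package itself is installed):

```typescript
import WordExtractor = require("word-extractor");

const extractor = new WordExtractor();

// extract() accepts a file path or a Uint8Array and resolves to a Document.
extractor.extract("somefile.doc").then((doc) => {
  console.log(doc.getBody());
  console.log(doc.getHeaders({ includeFooters: false }));
  console.log(doc.getTextboxes({ includeBody: true }));
});
```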
### [Additional Details](#additional-details)
* Last updated: Wed, 18 Oct 2023 18:04:04 GMT
* Dependencies: none
[Credits](#credits)
===
These definitions were written by [<NAME>](https://github.com/saboya).
Readme
---
### Keywords
none |
keyholder | cran | R | Package ‘keyholder’
March 11, 2023
Title Store Data About Rows
Version 0.1.7
Description Tools for keeping track of information, named
``keys'', about rows of data frame like objects. This is done by
creating special attribute ``keys'' which is updated after every change
in rows (subsetting, ordering, etc.). This package is designed to
work tightly with 'dplyr' package.
License MIT + file LICENSE
URL https://echasnovski.github.io/keyholder/,
https://github.com/echasnovski/keyholder/
BugReports https://github.com/echasnovski/keyholder/issues/
Depends R (>= 3.4.0)
Imports dplyr (>= 0.7.0), rlang (>= 0.1), tibble, utils
Suggests covr, knitr, rmarkdown, testthat
VignetteBuilder knitr
Encoding UTF-8
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-1617-4019>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-03-11 18:00:02 UTC
R topics documented:
keyholder-package
key-by-scoped
keyed-df
keyed-df-one-tbl
keyed-df-two-tbl
keyholder-id
keyholder-scoped
keyholder-supported-funs
keys-get
keys-manipulate
keys-set
remove-keys-scoped
rename-keys-scoped
restore-keys-scoped
keyholder-package keyholder: Store Data About Rows
Description
keyholder offers a set of tools for storing information about rows of data frame like objects. The
common use cases are:
• Track rows of data frame without changing it.
• Store columns for future restoring in data frame.
• Hide columns for convenient use of dplyr’s *_if scoped variants of verbs.
Details
To learn more about keyholder:
• Browse vignettes with browseVignettes(package = "keyholder").
• Look how to set keys.
• Look at the list of supported functions.
Author(s)
Maintainer: <NAME> <<EMAIL>> (ORCID)
See Also
Useful links:
• https://echasnovski.github.io/keyholder/
• https://github.com/echasnovski/keyholder/
• Report bugs at https://github.com/echasnovski/keyholder/issues/
key-by-scoped Key by selection of variables
Description
These functions perform keying by a selection of variables using the corresponding scoped variant of select. The appropriate data frame is selected with the scoped function first, and then it is assigned as keys.
Usage
key_by_all(.tbl, .funs = list(), ..., .add = FALSE, .exclude = FALSE)
key_by_if(.tbl, .predicate, .funs = list(), ..., .add = FALSE,
.exclude = FALSE)
key_by_at(.tbl, .vars, .funs = list(), ..., .add = FALSE,
.exclude = FALSE)
Arguments
.tbl Reference data frame.
.funs Parameter for scoped functions.
... Parameter for scoped functions.
.add Whether to add keys to (possibly) existing ones. If FALSE keys will be overridden.
.exclude Whether to exclude key variables from .tbl.
.predicate Parameter for scoped functions.
.vars Parameter for scoped functions.
See Also
Not scoped key_by()
Examples
mtcars %>% key_by_all(.funs = toupper)
mtcars %>% key_by_if(rlang::is_integerish, toupper)
mtcars %>% key_by_at(c("vs", "am"), toupper)
keyed-df Keyed object
Description
Utility functions for keyed objects which are implemented with class keyed_df. Keyed object
should be a data frame which inherits from keyed_df and contains a data frame of keys in attribute
’keys’.
Usage
is_keyed_df(.tbl)
is.keyed_df(.tbl)
## S3 method for class 'keyed_df'
print(x, ...)
## S3 method for class 'keyed_df'
x[i, j, ...]
Arguments
.tbl Object to check.
x Object to print or extract elements.
... Further arguments passed to or from other methods.
i, j Arguments for [.
Examples
is_keyed_df(mtcars)
mtcars %>% key_by(vs) %>% is_keyed_df
# Not valid keyed_df
df <- mtcars
class(df) <- c("keyed_df", "data.frame")
is_keyed_df(df)
keyed-df-one-tbl One-table verbs from dplyr for keyed_df
Description
Defined methods for dplyr generic single table functions. Most of them preserve the 'keyed_df' class and 'keys' attribute (excluding summarise with its scoped variants, distinct, and do, which remove them). These methods also modify rows in keys according to the row modifications in the reference data frame (if any).
Usage
## S3 method for class 'keyed_df'
select(.data, ...)
## S3 method for class 'keyed_df'
rename(.data, ...)
## S3 method for class 'keyed_df'
mutate(.data, ...)
## S3 method for class 'keyed_df'
transmute(.data, ...)
## S3 method for class 'keyed_df'
summarise(.data, ...)
## S3 method for class 'keyed_df'
group_by(.data, ...)
## S3 method for class 'keyed_df'
ungroup(x, ...)
## S3 method for class 'keyed_df'
rowwise(data, ...)
## S3 method for class 'keyed_df'
distinct(.data, ..., .keep_all = FALSE)
## S3 method for class 'keyed_df'
do(.data, ...)
## S3 method for class 'keyed_df'
arrange(.data, ..., .by_group = FALSE)
## S3 method for class 'keyed_df'
filter(.data, ...)
## S3 method for class 'keyed_df'
slice(.data, ...)
Arguments
.data, data, x A keyed object.
... Appropriate arguments for functions.
.keep_all Parameter for dplyr::distinct.
.by_group Parameter for dplyr::arrange.
Details
dplyr::transmute() is supported implicitly with dplyr::mutate() support.
dplyr::rowwise() is not supposed to be generic in dplyr. Use rowwise.keyed_df directly.
All scoped variants of present functions are also supported.
See Also
Two-table verbs
Examples
mtcars %>% key_by(vs, am) %>% dplyr::mutate(gear = 1)
keyed-df-two-tbl Two-table verbs from dplyr for keyed_df
Description
Defined methods for dplyr generic join functions. All of them preserve the 'keyed_df' class and 'keys' attribute of the first argument. These methods also modify rows in keys according to the row modifications in the first argument (if any).
Usage
## S3 method for class 'keyed_df'
inner_join(x, y, by = NULL, copy = FALSE,
suffix = c(".x", ".y"), ...)
## S3 method for class 'keyed_df'
left_join(x, y, by = NULL, copy = FALSE,
suffix = c(".x", ".y"), ...)
## S3 method for class 'keyed_df'
right_join(x, y, by = NULL, copy = FALSE,
suffix = c(".x", ".y"), ...)
## S3 method for class 'keyed_df'
full_join(x, y, by = NULL, copy = FALSE,
suffix = c(".x", ".y"), ...)
## S3 method for class 'keyed_df'
semi_join(x, y, by = NULL, copy = FALSE, ...)
## S3 method for class 'keyed_df'
anti_join(x, y, by = NULL, copy = FALSE, ...)
Arguments
x, y, by, copy, suffix, ...
Parameters for join functions.
See Also
One-table verbs
Examples
dplyr::band_members %>% key_by(band) %>%
dplyr::semi_join(dplyr::band_instruments, by = "name") %>%
keys()
keyholder-id Add id column and key
Description
Functions for creating id column and key.
Usage
use_id(.tbl)
compute_id_name(x)
add_id(.tbl)
key_by_id(.tbl, .add = FALSE, .exclude = FALSE)
Arguments
.tbl Reference data frame.
x Character vector of names.
.add, .exclude Parameters for key_by().
Details
use_id() assigns as keys a tibble with the column '.id' and row numbers of .tbl as values.
compute_id_name() computes a name that differs from every element in x by the following algorithm: if '.id' is not present in x, it is returned; if it is taken, '.id1' is checked; if that is taken, '.id11' is checked, and so on.
add_id() creates a column with a unique name (computed with compute_id_name()) and row numbers as values (grouping is ignored), then puts it as the first column.
key_by_id() is similar to add_id(): it creates a column with a unique name and row numbers as values (grouping is ignored) and calls key_by() to use this column as a key. If .add is FALSE, the unique name is computed based on .tbl column names; if TRUE, it is based on both .tbl and its keys' column names.
Examples
mtcars %>% use_id()
mtcars %>% add_id()
mtcars %>% key_by_id(.exclude = TRUE)
keyholder-scoped Operate on a selection of keys
Description
keyholder offers scoped variants of the following functions:
• key_by(). See key_by_all().
• remove_keys(). See remove_keys_all().
• restore_keys(). See restore_keys_all().
• rename_keys(). See rename_keys_all().
Arguments
.funs Parameter for scoped functions.
.vars Parameter for scoped functions.
.predicate Parameter for scoped functions.
... Parameter for scoped functions.
See Also
Not scoped manipulation functions
Not scoped key_by()
keyholder-supported-funs
Supported functions
Description
keyholder supports the following functions:
• Base subsetting with [.
• dplyr one table verbs.
• dplyr two table verbs.
keys-get Get keys
Description
Functions for getting information about keys.
Usage
keys(.tbl)
raw_keys(.tbl)
has_keys(.tbl)
Arguments
.tbl Reference data frame.
Value
keys() always returns a tibble of keys. In case of no keys it returns a tibble with the same number of rows as .tbl and zero columns. raw_keys() is just a wrapper for attr(.tbl, "keys"). To know whether .tbl has keys, use has_keys().
See Also
Set keys, Manipulate keys
Examples
keys(mtcars)
raw_keys(mtcars)
has_keys(mtcars)
df <- key_by(mtcars, vs, am)
keys(df)
has_keys(df)
keys-manipulate Manipulate keys
Description
Functions to manipulate keys.
Usage
remove_keys(.tbl, ..., .unkey = FALSE)
restore_keys(.tbl, ..., .remove = FALSE, .unkey = FALSE)
pull_key(.tbl, var)
rename_keys(.tbl, ...)
Arguments
.tbl Reference data frame.
... Variables to be used for operations defined in similar fashion as in dplyr::select().
.unkey Whether to unkey() .tbl in case there are no keys left.
.remove Whether to remove keys after restoring.
var Parameter for dplyr::pull().
Details
remove_keys() removes keys defined with ....
restore_keys() transfers keys defined with ... into .tbl and removes them from keys if .remove == TRUE. If .tbl is grouped the following happens:
• If restored keys don't contain grouping variables then groups don't change;
• If restored keys contain grouping variables then the result is regrouped based on the restored values. In other words, restoring keys overrides the rule of not modifying grouping variables. This follows the ideology of keys: they contain information about rows, and by restoring them you want that information to be available.
pull_key() extracts one specified column from keys with dplyr::pull().
rename_keys() renames columns in keys using dplyr::rename().
See Also
Get keys, Set keys
Scoped functions
Examples
df <- mtcars %>% dplyr::as_tibble() %>%
key_by(vs, am, .exclude = TRUE)
df %>% remove_keys(vs)
df %>% remove_keys(dplyr::everything())
df %>% remove_keys(dplyr::everything(), .unkey = TRUE)
df %>% restore_keys(vs)
df %>% restore_keys(vs, .remove = TRUE)
df %>% restore_keys(dplyr::everything(), .remove = TRUE)
df %>% restore_keys(dplyr::everything(), .remove = TRUE, .unkey = TRUE)
# Restoring on grouped data frame
df_grouped <- df %>% dplyr::mutate(vs = 1) %>% dplyr::group_by(vs)
df_grouped %>% restore_keys(dplyr::everything())
# Pulling
df %>% pull_key(vs)
# Renaming
df %>% rename_keys(Vs = vs)
keys-set Set keys
Description
A key is a vector whose goal is to provide information about rows of a reference data frame. Its length should always equal the number of rows in the data frame. Keys are stored as a tibble in the attribute "keys", so one data frame can have multiple keys. A data frame with keys is implemented as the class keyed_df.
Usage
keys(.tbl) <- value
assign_keys(.tbl, value)
key_by(.tbl, ..., .add = FALSE, .exclude = FALSE)
unkey(.tbl)
Arguments
.tbl Reference data frame.
value Values of keys (converted to tibble).
... Variables to be used as keys defined in similar fashion as in dplyr::select().
.add Whether to add keys to (possibly) existing ones. If FALSE keys will be overridden.
.exclude Whether to exclude key variables from .tbl.
Details
key_by() ignores grouping when creating keys. Also, if .add == TRUE and the names of some added keys match the names of existing keys, the new ones override the old ones.
The value for keys<- should not be NULL, because it is converted to a tibble with zero rows. To remove keys use unkey(), remove_keys() or restore_keys(). assign_keys() is a wrapper for keys<- that is more suitable for piping.
See Also
Get keys, Manipulate keys
Scoped key_by()
Examples
df <- dplyr::as_tibble(mtcars)
# Value is converted to tibble
keys(df) <- 1:nrow(df)
# This will throw an error
## Not run:
keys(df) <- 1:10
## End(Not run)
# Use 'vs' and 'am' as keys
df %>% key_by(vs, am)
df %>% key_by(vs, am, .exclude = TRUE)
df %>% key_by(vs) %>% key_by(am, .add = TRUE, .exclude = TRUE)
# Override keys
df %>% key_by(vs, am) %>% dplyr::mutate(vs = 1) %>%
key_by(gear, vs, .add = TRUE)
# Use select helpers
df %>% key_by(dplyr::one_of(c("vs", "am")))
df %>% key_by(dplyr::everything())
remove-keys-scoped Remove selection of keys
Description
These functions remove a selection of keys using the corresponding scoped variant of select. The .funs argument is removed because of its redundancy.
Usage
remove_keys_all(.tbl, ..., .unkey = FALSE)
remove_keys_if(.tbl, .predicate, ..., .unkey = FALSE)
remove_keys_at(.tbl, .vars, ..., .unkey = FALSE)
Arguments
.tbl Reference data frame.
... Parameter for scoped functions.
.unkey Whether to unkey() .tbl in case there are no keys left.
.predicate Parameter for scoped functions.
.vars Parameter for scoped functions.
Examples
df <- mtcars %>% dplyr::as_tibble() %>% key_by(vs, am, disp)
df %>% remove_keys_all()
df %>% remove_keys_all(.unkey = TRUE)
df %>% remove_keys_if(rlang::is_integerish)
df %>% remove_keys_at(c("vs", "am"))
rename-keys-scoped Rename selection of keys
Description
These functions rename a selection of keys using the corresponding scoped variant of rename.
Usage
rename_keys_all(.tbl, .funs = list(), ...)
rename_keys_if(.tbl, .predicate, .funs = list(), ...)
rename_keys_at(.tbl, .vars, .funs = list(), ...)
Arguments
.tbl Reference data frame.
.funs Parameter for scoped functions.
... Parameter for scoped functions.
.predicate Parameter for scoped functions.
.vars Parameter for scoped functions.
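The manual gives no examples for these variants; below is a short usage sketch in the style of the other topics (it assumes the keyholder and dplyr packages are attached, and the .funs renaming usage mirrors the restore-keys examples):

```r
library(keyholder)
library(dplyr)

df <- mtcars %>% as_tibble() %>% key_by(vs, am, disp)
# Rename every key
df %>% rename_keys_all(.funs = toupper)
# Rename only keys satisfying a predicate
df %>% rename_keys_if(rlang::is_integerish, .funs = toupper)
# Rename the specified keys
df %>% rename_keys_at(c("vs", "am"), .funs = toupper)
```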
restore-keys-scoped Restore selection of keys
Description
These functions restore a selection of keys using the corresponding scoped variant of select. The .funs argument can be used to rename some keys (without touching the actual keys) before restoring.
Usage
restore_keys_all(.tbl, .funs = list(), ..., .remove = FALSE,
.unkey = FALSE)
restore_keys_if(.tbl, .predicate, .funs = list(), ..., .remove = FALSE,
.unkey = FALSE)
restore_keys_at(.tbl, .vars, .funs = list(), ..., .remove = FALSE,
.unkey = FALSE)
Arguments
.tbl Reference data frame.
.funs Parameter for scoped functions.
... Parameter for scoped functions.
.remove Whether to remove keys after restoring.
.unkey Whether to unkey() .tbl in case there are no keys left.
.predicate Parameter for scoped functions.
.vars Parameter for scoped functions.
Examples
df <- mtcars %>% dplyr::as_tibble() %>% key_by(vs, am, disp)
# Just restore all keys
df %>% restore_keys_all()
# Restore all keys with renaming and without touching actual keys
df %>% restore_keys_all(.funs = toupper)
# Restore with renaming and removing
df %>%
restore_keys_all(.funs = toupper, .remove = TRUE)
# Restore with renaming, removing and unkeying
df %>%
restore_keys_all(.funs = toupper, .remove = TRUE, .unkey = TRUE)
# Restore with renaming keys satisfying the predicate
df %>%
restore_keys_if(rlang::is_integerish, .funs = toupper)
# Restore with renaming specified keys
df %>%
restore_keys_at(c("vs", "disp"), .funs = toupper) |
Rook | cran | R | Package ‘Rook’
November 7, 2022
Title HTTP Web Server for R
Version 1.2
Date 2022-11-05
Description An HTTP web server for R with a documented API to interface between R and the server. The documentation contains the Rook specification and details for building and running Rook applications. To get started, be sure to read the 'Rook' help file first.
Encoding UTF-8
Depends R (>= 2.13.0)
Imports utils, tools, methods, brew
License GPL-2
LazyLoad yes
URL https://github.com/evanbiederstedt/rook
BugReports https://github.com/evanbiederstedt/rook/issues
Author <NAME> [aut], <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
NeedsCompilation yes
Repository CRAN
Date/Publication 2022-11-07 08:50:19 UTC
R topics documented:
Rook-package
App-class
Brewery-class
Builder-class
File-class
is_rookable
Middleware-class
Mime-class
Multipart-class
Redirect-class
Request-class
Response-class
Rhttpd-class
RhttpdApp-class
RhttpdErrorStream-class
RhttpdInputStream-class
Server
Static-class
suspend_console
URLMap-class
Utils-class
Rook-package Rook: A web server interface and package for R
Description
This help page defines the Rook specification. It borrows heavily from Ruby's Rack project: https://github.com/rack/rack.
After reading this document, read the Rhttpd help file as it will get you familiar with installing and
running Rook applications. Then explore the example applications located in:
system.file('exampleApps',package='Rook').
Rook applications
A Rook application is an R reference class object that implements a ’call’ method or an R closure
that takes exactly one argument, an environment, and returns a list with three named elements:
'status', 'headers', and 'body'.
Hello World
Here is a basic Rook application as a closure that implements ’hello world’:
function(env){
body = paste('<h1>Hello World! This is Rook',env$rook.version,'.</h1>')
list(
status = 200L,
headers = list(
'Content-Type' = 'text/html'
),
body = body
)
}
And the equivalent reference class example:
setRefClass(
'HelloWorld',
methods = list(
call = function(env){
list(
status = 200L,
headers = list(
'Content-Type' = 'text/html'
),
body = paste('<h1>Hello World! This is Rook',env$rook.version,'.</h1>')
)
}
)
)
The Environment
The environment argument is a true R environment object which the application is free to modify.
It is required to contain the following variables:
REQUEST_METHOD The HTTP request method, such as "GET" or "POST". This cannot ever
be an empty string, and so is always required.
SCRIPT_NAME The initial portion of the request URL’s "path" that corresponds to the application object, so that the application knows its virtual "location". This may be an empty string, if the application corresponds to the "root" of the server.
PATH_INFO The remainder of the request URL’s "path", designating the virtual "location" of the request’s target within the application. This may be an empty string, if the request URL targets the application root and does not have a trailing slash. This value may be percent-encoded when originating from a URL.
QUERY_STRING The portion of the request URL that follows the ?, if any. May be empty, but
is always required!
SERVER_NAME, SERVER_PORT When combined with SCRIPT_NAME and PATH_INFO,
these variables can be used to complete the URL. Note however that HTTP_HOST, if present,
should be used in preference to SERVER_NAME for reconstructing the request URL. SERVER_NAME
and SERVER_PORT can never be empty strings, and so are always required.
HTTP_ Variables Variables corresponding to the client-supplied HTTP request headers (i.e., variables whose names begin with HTTP_). The presence or absence of these variables should correspond with the presence or absence of the appropriate HTTP header in the request.
In addition, the environment must include the following Rook-specific variables:
rook.version This version of Rook.
rook.url_scheme ’http’ or ’https’, depending on the request URL.
rook.input See “The Input Stream” section.
rook.errors See “The Error Stream” section.
The Input Stream
The rook.input variable must contain an object created from a reference class that implements
read_lines(), read(), and rewind():
read_lines(l=-1L): takes one argument, the number of lines to read. Includes partial ending
line.
read(l=-1L): takes one argument, the number of bytes to read. Returns a raw vector.
rewind(): Rewinds the input stream back to the beginning.
The Error Stream
The rook.errors variable must contain an object created from a reference class that implements
flush() and cat():
flush(): called with no arguments and makes the error stream immediately appear.
cat(...,sep=" ",fill=FALSE,labels=NULL): called with the same arguments as R’s "cat" without the file and append arguments.
The Response
Rook applications return a list with three named elements: 'status', 'headers', and 'body'.
’status’: An HTTP status value as an integer; must be greater than or equal to 100.
’headers’: A named list that contains only character values corresponding to valid HTTP headers.
’body’: Either a character or raw vector. If the character vector is named with the name 'file', then the value of the vector is interpreted as the location of a file.
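For example, a minimal sketch of a file-serving response using the 'file' naming convention (the path and content type here are only illustrative):

```r
# The name 'file' tells the server to read the file at the given path
# and use its contents as the response body.
list(
  status  = 200L,
  headers = list('Content-Type' = 'text/plain'),
  body    = c(file = '/etc/hosts')
)
```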
Author(s)
<NAME> <<EMAIL>>
App-class Class App
Description
Abstract class from which Middleware and Builder inherit. Provides the app field.
App can also be used to instantiate reference classed applications wrapped around a function. See
Middleware for an example.
Fields
app: A Rook application.
Methods
new(app=NULL): Creates a new App object. app is any Rook aware R object.
See Also
is_rookable, Builder, and Middleware.
Brewery-class Class Brewery
Description
A Middleware class for mapping URLs to a directory of files that are subsequently passed to brew.
When a file is brewed, the two variables req (an object of class Request) and res (an object of
class Response) are available for use.
Methods
new(url,root,...): url is a character string or regexp on which to match, root is the name of
the directory where brew files reside. Named arguments can be passed in via ... and will be
available within the scope of each brewed file.
See Also
Rhttpd, Builder, Redirect, and brew.
Examples
#
# This application runs any file found in tempdir() through brew.
#
s <- Rhttpd$new()
## Not run:
s$start(quiet=TRUE)
## End(Not run)
cat("<h1>Random Number: <%=rnorm(1)%></h1>",
file=file.path(tempdir(),"index.html"))
s$add(name="random",
app=Builder$new(
Brewery$new(url="/",root=tempdir()),
Redirect$new("/index.html")
)
)
## Not run:
s$browse('random') # Opens a browser window to the app.
## End(Not run)
file.remove(file.path(tempdir(),"index.html"))
s$remove(all=TRUE)
rm(s)
Builder-class Class Builder
Description
A convenience object for combining various Middleware with a default application to create a more
complex Rook application.
Methods
new(...): Arguments can be any Middleware object while the last argument in the list must be
a valid Rook application. That is, it will handle the incoming request without deferring to
another application.
See Also
Rhttpd, Static, Brewery, and Redirect.
Examples
# The following is the Hmisc example. Explore the folder
# system.file('exampleApps/Hmisc',package='Rook') for more information.
s <- Rhttpd$new()
## Not run:
library(Hmisc)
dir.create(file.path(tempdir(),'plots'),showWarnings=FALSE)
s$add( name="Hmisc",
app=Builder$new(
Static$new(
urls = c('/css','/images','/javascript'),
root = system.file('exampleApps/Hmisc',package='Rook')
),
Static$new(urls='/plots',root=tempdir()),
Brewery$new(
url='/brew',
root= system.file('exampleApps/Hmisc',package='Rook'),
imagepath=file.path(tempdir(),'plots'),
imageurl='../plots/'
),
Redirect$new('/brew/useR2007.rhtml')
)
)
s$start(quiet=TRUE)
s$browse('Hmisc') # Opens a browser window to the application.
s$remove(all=TRUE)
s$stop()
## End(Not run)
File-class Class File
Description
A Rook application that serves static files from a root directory, according to the path info of the
Rook request.
Methods
new(root): root is the name of the directory from where to serve files.
See Also
Rhttpd.
Examples
# This example serves all your files in /etc (on UNIX and Mac only).
#
# Note that when you open the application, you will see the word
# 'Forbidden'. "File" doesn't serve directories, so you must amend the
# url in the location bar with the file you want to view. Try adding /passwd.
s <- Rhttpd$new()
## Not run:
s$start(quiet=TRUE)
## End(Not run)
s$add(name="etc",app=File$new('/etc'))
## Not run:
s$browse('etc') # Opens a browser window to the app.
## End(Not run)
s$remove(all=TRUE)
rm(s)
is_rookable Test for Rookable applications
Description
A convenience function for testing whether or not objects are either a function or reference class as
defined by the Rook specification for applications.
Usage
is_rookable(app)
Arguments
app Any R object.
Value
Logical determining whether or not argument is Rookable. Not vectorized.
See Also
Rook.
Middleware-class Class Middleware
Description
An abstract class for building Rook Middleware applications. Middleware applications either handle the incoming web request or hand off the request to the Rook app defined in the field of the same name.
Methods
set_app(app): app is a Rook application that will handle the request if this Middleware app does
not.
See Also
The following classes implement Middleware: Brewery and Static.
Examples
# Middleware applications are typically instantiated in the argument list of
# Builder$new(), but here is stand-alone example.
#
# Once your browser loads the app, you will see something like this in
# your location bar: http://127.0.0.1:28649/custom/middle. Add '/foo'
# onto the end of that and reload.
setRefClass(
'FooBar',
contains = 'Middleware',
methods = list(
initialize = function(...){
# app to defer to.
callSuper(app=App$new(function(env){
res <- Response$new()
res$write("<h1>I'm the deferred app.</h1>")
res$finish()
}))
},
call = function(env){
req <- Request$new(env)
res <- Response$new()
if (length(grep('foo',req$path_info()))){
res$write("<h1>I'm the middleware app.</h1>")
return(res$finish())
} else {
app$call(env)
}
}
)
)
s <- Rhttpd$new()
## Not run:
s$start(quiet=TRUE)
## End(Not run)
s$add(name="middle",app=getRefClass('FooBar')$new())
## Not run:
s$browse('middle') # Opens a browser window to the app.
## End(Not run)
s$remove(all=TRUE)
rm(s)
Mime-class Class Mime and object Mime
Description
A convenience object for determining the MIME type of a file name.
Methods
file_extname(fname=NULL): Returns the file extension for the given file.
mime_type(ext=NULL, fallback=’application/octet-stream’): Returns the MIME type given
the file extension. Be sure to include the dot character in ext. If no match is found, then the
fallback MIME type is returned.
Examples
Mime$file_extname('foo.png')
Mime$mime_type('.png')
Multipart-class Class Multipart and object Multipart
Description
A convenience object for parsing multipart/form-data POST payloads.
Methods
parse(env): Returns parsed POST payload as a named list. env is an environment created by
Rhttpd and conforms to the Rook specification.
See Also
Rhttpd, Request, and Response.
Examples
s <- Rhttpd$new()
## Not run:
s$start(quiet=TRUE)
## End(Not run)
s$add(name="multi",
app=function(env){
req <- Request$new(env)
res <- Response$new()
res$write('<form enctype="multipart/form-data" method=POST>')
res$write('Upload a file: <input type=file name=fileUpload>')
res$write('<input type=submit></form><br>')
post <- Multipart$parse(env)
if (length(post)){
poststr <- paste(capture.output(str(post),file=NULL),collapse='\n')
res$write(c('<pre>',poststr,'</pre>'))
}
res$finish()
}
)
## Not run:
s$browse('multi') # Opens a browser window to the app.
## End(Not run)
s$remove(all=TRUE)
rm(s)
Redirect-class Class Redirect
Description
A Rook application whose only role is to return an HTTP redirect header to the given url.
Methods
new(url): Returns a Rook object. url is a character string whose value is a full or relative url to
which the browser is redirected.
See Also
See Brewery for an example.
Request-class Class Request
Description
A convenience class for working with a Rook environment. Be sure to see the example at the end of
this help file.
Methods
parseable_data(): Returns a boolean value determining if the POST payload is parseable.
url(): Returns url as a character string containing the scheme, host, port, and possibly the GET
query string if supplied.
request_method(): Returns the HTTP method as a character string, e.g. ’GET’, ’POST’, etc.
GET(): Returns a named list containing the variables parsed from the query string.
post(): Returns TRUE if the current request method is ’POST’, FALSE otherwise.
new(env): Instantiates a new Request object for the given Rook environment.
media_type(): Returns the media type for the current request as a character string.
query_string(): Returns the unparsed query string.
fullpath(): Returns the same string as url() but without the scheme, host, and port.
referer() or referrer(): Returns the referring url.
cookies(): Returns any cookies in the request as a named list.
content_charset(): Returns the content charset as a character string.
head(): Returns TRUE if the HTTP method is ’HEAD’, FALSE otherwise.
accept_encoding(): Returns the accept encoding header as a character string.
content_length(): Returns content length header value as a string.
form_data(): Returns TRUE if there’s form data, e.g. POST data with the request, FALSE otherwise.
xhr(): Returns the x-requested-with header value as a character string.
params(): Returns the combination of POST() and GET() in one named list.
media_type_params(): Returns any media type parameters from the content type as a named list.
user_agent(): Returns the user-agent header value as a character string.
put(): Returns TRUE if the current request is a ’PUT’.
get(): Returns TRUE if the current request is a ’GET’.
path(): Returns a character string like fullpath() but without the query string.
body(): Returns the ’rook.input’ object from the environment. See RhttpdInputStream for more
information.
port(): Returns the server port as an integer.
host_with_port(): Returns the host and port as a character string separated by ’:’.
scheme(): Returns the scheme, e.g. ’http’ or ’https’, as a character string.
ip(): Returns the remote IP address as a character string.
options(): Returns TRUE if the current request is ’OPTIONS’.
to_url(url, ...): Concatenates the script name with the url argument along with any named
parameters passed via ... .
host(): Returns the server host as a character string.
POST(): Returns a named list containing the variables parsed from the POST payload.
trace(): Returns TRUE if the current request is ’TRACE’.
script_name(s=NULL): Returns the script name of the application, e.g. ’/custom/multi’. Also, if
s is not NULL, sets the script name to s.
content_type(): Returns the content-type header value as a character string.
delete(): Returns TRUE if the current request is ’DELETE’.
path_info(s=NULL): Returns the portion of the url after the script name as a character string. If s
is not NULL, sets the path info to s.
See Also
Rhttpd and Response.
Examples
#
# The following example prints out the result of each method.
#
ls_str <- function(s) paste(capture.output(str(s),file=NULL),collapse='\n')
s <- Rhttpd$new()
## Not run:
s$start(quiet=TRUE)
## End(Not run)
s$add(name="request",
app=function(env){
req <- Request$new(env)
res <- Response$new()
res$set_cookie('imacookie','42')
action <- req$to_url('/foo',bar=1,baz='three')
res$write('<form enctype="multipart/form-data" method=POST action="')
res$write(action)
res$write('">')
res$write('Upload a file: <input type=file name=fileUpload>')
res$write('<input type=submit></form><br><pre>')
res$write(c('parseable_data: ',req$parseable_data(),'\n'))
res$write(c('url: ',req$url(),'\n'))
res$write(c('request_method: ',req$request_method(),'\n'))
res$write(c('GET: ',ls_str(req$GET()),'\n'))
res$write(c('post: ',req$post(),'\n'))
res$write(c('media_type: ',req$media_type(),'\n'))
res$write(c('query_string: ',req$query_string(),'\n'))
res$write(c('fullpath: ',req$fullpath(),'\n'))
res$write(c('referer: ',req$referer(),'\n'))
res$write(c('cookies: ',ls_str(req$cookies()),'\n'))
res$write(c('content_charset: ',req$content_charset(),'\n'))
res$write(c('head: ',req$head(),'\n'))
res$write(c('accept_encoding: ',req$accept_encoding(),'\n'))
res$write(c('content_length: ',req$content_length(),'\n'))
res$write(c('form_data: ',req$form_data(),'\n'))
res$write(c('xhr: ',req$xhr(),'\n'))
res$write(c('params: ',ls_str(req$params()),'\n'))
res$write(c('media_type_params:\n',ls_str(req$media_type_params()),'\n'))
res$write(c('user_agent: ',req$user_agent(),'\n'))
res$write(c('put: ',req$put(),'\n'))
res$write(c('get: ',req$get(),'\n'))
res$write(c('path: ',req$path(),'\n'))
res$write(c('body: ',ls_str(req$body()),'\n'))
res$write(c('port: ',req$port(),'\n'))
res$write(c('host_with_port: ',req$host_with_port(),'\n'))
res$write(c('scheme: ',req$scheme(),'\n'))
res$write(c('ip: ',req$ip(),'\n'))
res$write(c('options: ',req$options(),'\n'))
res$write(c('to_url: ',req$to_url('foo',bar=1,baz='two'),'\n'))
res$write(c('host: ',req$host(),'\n'))
res$write(c('POST: ',ls_str(req$POST()),'\n'))
res$write(c('trace: ',req$trace(),'\n'))
res$write(c('script_name: ',req$script_name(),'\n'))
res$write(c('content_type: ',req$content_type(),'\n'))
res$write(c('delete: ',req$delete(),'\n'))
res$write(c('path_info: ',req$path_info(),'\n'))
res$write(c('\nRack env: ',ls_str(as.list(env)),'\n'))
res$finish()
}
)
## Not run:
s$browse('request') # Opens a browser window to the app.
## End(Not run)
s$remove(all=TRUE)
rm(s)
Response-class Class Response
Description
A convenience class for creating Rook responses.
Methods
header(key, value): Sets an HTTP header for the response. Both key and value must be character strings. If value is missing, then the header value is returned.
redirect(target, status=302): Sets up an HTTP redirect to the target url.
write(str): Takes a character vector and appends it to the response body.
new(body=’’, status=200, headers=list()): Create a new Response object. body is a character vector, status is an HTTP status value. headers is a named list.
set_cookie(key, value): Sets an HTTP cookie for the response. Both key and value must be
character strings.
delete_cookie(key, value): Sends appropriate HTTP header to delete the associated cookie on
the client. key and value must be character strings.
finish(): Returns the response according to the Rook specification.
See Also
Rhttpd and Request.
Examples
s <- Rhttpd$new()
## Not run:
s$start(quiet=TRUE)
## End(Not run)
s$add(name="response",
app=function(env){
req <- Request$new(env)
res <- Response$new()
res$write('hello')
res$finish()
}
)
## Not run:
s$browse('response') # Opens a browser window to the app.
## End(Not run)
s$remove(all=TRUE)
rm(s)
Rhttpd-class Class Rhttpd
Description
Rhttpd is a convenience class for installing and running Rook applications. It hides the details of
starting and stopping the server and adding and removing Rook applications from the server.
Users start by creating one Rhttpd object, then adding applications to it, and then starting the server (see the section “Examples” for a typical session). There are no restrictions on creating more than one server object, but know that it only manages the applications that are added to it and not others.
Applications can be added and removed regardless of whether or not the server is running. Stopping
the server does not remove any applications. Adding an application with the same name as one
already installed simply overwrites the one installed. If the server is started with no applications
installed, it will install the application named RookTestApp located in:
system.file('exampleApps/RookTestApp.R',package='Rook').
Also, see browseURL to learn how to get R to automatically launch your favorite web browser.
NOTE: This version of Rook can only listen on the loopback device.
Methods
open(x) or browse(x): Calls browseURL on the installed Rook application designated by x. x is
either an integer or a character string. See the output of print().
print() or show(): Lists the installed Rook applications.
remove(app,all=FALSE): Removes the application known to the server. app can be an RhttpdApp
object previously added, the name of the application as a character string, or an index as a
numeric or integer value. See the output of print().
full_url(i): Returns the absolute url to the application for the given index.
start(listen=’127.0.0.1’, port=getOption(’help.ports’), quiet=FALSE): Starts the server on the loopback device and port. listen is always a character string. Note that if there are no applications added to the object prior to starting, then the RookTestApp located in system.file('exampleApps/RookTestApp.R',package='Rook') is automatically added.
new(): Create a new Rhttpd object.
launch(...): Combines the steps of starting the server, creating an RhttpdApp object, adding it to
the server, and opening the app in the browser. ... argument is passed to RhttpdApp$new().
debug(): Returns the integer value provided by getOption('Rhttpd_debug') or 0 if the option
is NULL.
stop(): Stops the server.
add(app=NULL,name=NULL): Adds a new Rook application to the server. app can be an RhttpdApp
object or any Rook application. name is a character string and is ignored if app is an RhttpdApp
object.
See Also
RhttpdApp
Examples
# Create an Rhttpd object and start the internal web server. Note that
# if there are no applications added, then the default RookTest app in
# system.file('exampleApps/RookTestApp.R',package='Rook') is automatically
# added.
s <- Rhttpd$new()
## Not run:
s$start(quiet=TRUE)
s$browse(1)
## End(Not run)
s$print()
# Be sure to install the Hmisc package before installing and running
# this application. You will want to; it's a pretty good one.
# s$add(
# app=system.file('exampleApps/Hmisc/config.R',package='Rook'),
# name='hmisc')
s$add(
app=system.file('exampleApps/helloworld.R',package='Rook'),
name='hello'
)
s$add(
app=system.file('exampleApps/helloworldref.R',package='Rook'),
name='helloref'
)
s$add(
app=system.file('exampleApps/summary.R',package='Rook'),
name='summary'
)
s$print()
# Stops the server but doesn't uninstall the app
## Not run:
s$stop()
## End(Not run)
s$remove(all=TRUE)
rm(s)
RhttpdApp-class Class RhttpdApp
Description
Creates a Rook application ready to add to an Rhttpd server.
Details
The internal web server allows dispatching to user-defined closures located in tools:::.httpd.handlers.env.
For instance, if a handler named ’foo’ is placed there, then the url path to that handler is /custom/foo.
RhttpdApp along with Rhttpd hide these details by allowing a user to create application objects specifying only their name and the application. Application names are currently limited to 63 characters.
NOTE: When a file is given as the value of the app argument to new(), it is monitored for timestamp
changes. If a change occurs in the modification time as returned by file.info, then the file is
sourced prior to handling subsequent requests.
Methods
new(app, name): Creates an object of class RhttpdApp. Argument app can be any Rook aware
object or it can be a location to a file whose source creates a Rook aware object. That object
must be named either 'app' or the value of name. name is a character vector.
See Also
Rhttpd.
Examples
s <- Rhttpd$new()
s$add(RhttpdApp$new(
name='summary',
app=system.file('exampleApps/summary.R',package='Rook')
))
## Not run:
s$start(quiet=TRUE)
s$browse(1)
## End(Not run)
s$remove(all=TRUE)
# Stops the server but doesn't uninstall the app
## Not run:
s$stop()
## End(Not run)
s$remove(all=TRUE)
rm(s)
RhttpdErrorStream-class
Class RhttpdErrorStream
Description
An internal class used by Rhttpd.
Examples
showClass("RhttpdErrorStream")
RhttpdInputStream-class
Class RhttpdInputStream
Description
An internal class used by Rhttpd.
Examples
showClass("RhttpdInputStream")
Server Rook Server Object
Description
Server is an object exported by Rook that has no value to the user. It is mainly used by web servers
for their convenience. To see an example of how it may be used, see rApache.R in the inst/servers
directory.
Static-class Class Static
Description
A Middleware class for serving static files from a root directory given a set of url paths.
Methods
new(urls, root): Creates a new object. urls is a character vector whose elements must start with
a '/'. root is a length 1 character vector whose value must be a valid directory.
See Also
See Builder for an example.
suspend_console Suspend the R console
Description
Calls Sys.sleep in a never-ending while loop to mimic suspension of the R console.
Usage
suspend_console()
Value
No value is ever returned.
See Also
Rook.
URLMap-class Class URLMap
Description
A Rook application that maps url paths to other Rook applications.
Methods
new(...): Creates a Rook application. All arguments must be Rook applications and named as in
the example.
See Also
Rhttpd.
Examples
s <- Rhttpd$new()
s$add(
name="pingpong",
app=Rook::URLMap$new(
'/ping' = function(env){
req <- Rook::Request$new(env)
res <- Rook::Response$new()
res$write(sprintf('<h1><a href="%s">Pong</a></h1>',req$to_url("/pong")))
res$finish()
},
'/pong' = function(env){
req <- Rook::Request$new(env)
res <- Rook::Response$new()
res$write(sprintf('<h1><a href="%s">Ping</a></h1>',req$to_url("/ping")))
res$finish()
},
'/?' = function(env){
req <- Rook::Request$new(env)
res <- Rook::Response$new()
res$redirect(req$to_url('/pong'))
res$finish()
}
)
)
## Not run:
s$start(quiet=TRUE)
s$browse('pingpong')
## End(Not run)
s$remove('pingpong')
## Not run:
s$stop()
## End(Not run)
rm(s)
Utils-class Class Utils
Description
A convenience object for working with various aspects of web requests and responses.
Methods
bytesize(string=NULL): Returns size in bytes for string, a character vector.
unescape(s=NULL): returns the url decoded value of the character vector s. Also replaces the '+'
character with a space.
status_code(status=NULL): returns the integer value for the given HTTP status, which can either be numeric or a character vector describing the status. Returns as.integer(500) if status is NULL.
escape_html(string=NULL): replaces "&", "<", ">", "'", and '"' with entity equivalents.
raw.match(needle=NULL, haystack=NULL, all=TRUE): returns index position of needle in haystack.
All matched indexes are returned by default. needle is either a raw vector or character string.
haystack is a raw vector.
parse_query(qs=NULL, d=DEFAULT_SEP): Creates a named list from the query string qs. d is the separator value and defaults to '[&;] *'.
rfc2822(ts=NULL): Formats ts in RFC2822 time. ts must be a POSIXt object.
escape(s=NULL): Transforms any non-printable characters found in s to their percent-encoded
equivalents.
build_query(params=NULL): Creates a query string from the named list given in params.
timezero(): Returns a POSIXct object set to UNIX epoch.
set_cookie_header(header, key, value, expires, path, domain, secure, httpOnly): Sets
an HTTP cookie header in the environment header. All arguments except expires are length
1 character vectors, while expires must be a POSIXct object.
delete_cookie_header(header, key, value, expires, path, domain, secure, httpOnly):
Deletes the HTTP cookie header.
See Also
Multipart.
Examples
Utils$bytesize('foo')
Utils$escape('foo bar')
Utils$unescape('foo+bar')
Utils$escape_html('foo <bar>')
Utils$escape('foo <bar>')
Utils$escape('foo\n<bar>')
Utils$status_code('OK')
Utils$status_code('Found')
Utils$status_code('Not Found')
x <- Utils$parse_query('foo=1&bar=baz')
x
Utils$rfc2822(Sys.time())
Utils$timezero()
Utils$build_query(x)
rm(x)
Chartkick
===
Create beautiful JavaScript charts with one line of Ruby. No more fighting with charting libraries!
[See it in action](https://chartkick.com)
**Chartkick 5.0 was recently released** - see [how to upgrade](#upgrading)
:fire: For admin charts and dashboards, check out [Blazer](https://github.com/ankane/blazer/), and for advanced visualizations, check out [Vega](https://github.com/ankane/vega)
:two_hearts: A perfect companion to [Groupdate](https://github.com/ankane/groupdate), [Hightop](https://github.com/ankane/hightop), and [ActiveMedian](https://github.com/ankane/active_median)
[](https://github.com/ankane/chartkick/actions)
Quick Start
---
Add this line to your application’s Gemfile:
```
gem "chartkick"
```
Then follow the instructions for your JavaScript setup:
* [Importmap](#importmap) (Rails 7 default)
* [esbuild, rollup.js, or Webpack](#esbuild-rollupjs-or-webpack)
* [Webpacker](#webpacker) (Rails 6 default)
* [Sprockets](#sprockets)
This sets up Chartkick with [Chart.js](https://www.chartjs.org/). For other charting libraries and frameworks, see [these instructions](#additional-charting-libraries).
### Importmap
In `config/importmap.rb`, add:
```
pin "chartkick", to: "chartkick.js"
pin "Chart.bundle", to: "Chart.bundle.js"
```
And in `app/javascript/application.js`, add:
```
import "chartkick"
import "Chart.bundle"
```
### esbuild, rollup.js, or Webpack
Run:
```
yarn add chartkick chart.js
```
And in `app/javascript/application.js`, add:
```
import "chartkick/chart.js"
```
Note: For rollup.js, this requires `format: "iife"` in `rollup.config.js`.
### Webpacker
Run:
```
yarn add chartkick chart.js
```
And in `app/javascript/packs/application.js`, add:
```
import "chartkick/chart.js"
```
### Sprockets
In `app/assets/javascripts/application.js`, add:
```
//= require chartkick
//= require Chart.bundle
```
Charts
---
Line chart
```
<%= line_chart User.group_by_day(:created_at).count %>
```
Pie chart
```
<%= pie_chart Goal.group(:name).count %>
```
Column chart
```
<%= column_chart Task.group_by_hour_of_day(:created_at, format: "%l %P").count %>
```
Bar chart
```
<%= bar_chart Shirt.group(:size).sum(:price) %>
```
Area chart
```
<%= area_chart Visit.group_by_minute(:created_at).maximum(:load_time) %>
```
Scatter chart
```
<%= scatter_chart City.pluck(:size, :population) %>
```
Geo chart - *Google Charts*
```
<%= geo_chart Medal.group(:country).count %>
```
Timeline - *Google Charts*
```
<%= timeline [
["Washington", "1789-04-29", "1797-03-03"],
["Adams", "1797-03-03", "1801-03-03"],
["Jefferson", "1801-03-03", "1809-03-03"]
] %>
```
Multiple series
```
<%= line_chart [
{name: "Workout", data: {"2021-01-01" => 3, "2021-01-02" => 4}},
{name: "Call parents", data: {"2021-01-01" => 5, "2021-01-02" => 3}}
] %>
```
or
```
<%= line_chart Feat.group(:goal_id).group_by_week(:created_at).count %>
```
Data
---
Data can be a hash, array, or URL.
#### Hash
```
<%= line_chart({"2021-01-01" => 2, "2021-01-02" => 3}) %>
```
#### Array
```
<%= line_chart [["2021-01-01", 2], ["2021-01-02", 3]] %>
```
#### URL
Make your pages load super fast and stop worrying about timeouts. Give each chart its own endpoint.
```
<%= line_chart completed_tasks_charts_path %>
```
And in your controller, pass the data as JSON.
```
class ChartsController < ApplicationController
  def completed_tasks
    render json: Task.group_by_day(:completed_at).count
  end
end
```
For multiple series, add `chart_json` at the end.
```
render json: Task.group(:goal_id).group_by_day(:completed_at).count.chart_json
```
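`chart_json` reshapes the nested hash that a double `group` query returns into the multiple-series format shown earlier. A rough plain-Ruby sketch of that reshaping (the `to_series` helper is hypothetical; the real method is added by Chartkick):

```ruby
require "json"

# Hypothetical stand-in for Chartkick's chart_json: a query grouped by two
# columns yields a hash keyed by [series, x] pairs; reshape it into an
# array of {name:, data:} series hashes.
def to_series(grouped_counts)
  grouped_counts
    .group_by { |(series, _x), _count| series }
    .map do |series, rows|
      {name: series, data: rows.map { |(_s, x), count| [x, count] }}
    end
end

counts = {
  [1, "2021-01-01"] => 2, [1, "2021-01-02"] => 3,
  [2, "2021-01-01"] => 5
}
puts JSON.generate(to_series(counts))
```

The exact JSON produced by the real `chart_json` may differ in detail; the point is the series-per-group shape.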
Options
---
Id, width, and height
```
<%= line_chart data, id: "users-chart", width: "800px", height: "500px" %>
```
Min and max values
```
<%= line_chart data, min: 1000, max: 5000 %>
```
`min` defaults to 0 for charts with non-negative values. Use `nil` to let the charting library decide.
Min and max for x-axis - *Chart.js*
```
<%= line_chart data, xmin: "2021-01-01", xmax: "2022-01-01" %>
```
Colors
```
<%= line_chart data, colors: ["#b00", "#666"] %>
```
Stacked columns or bars
```
<%= column_chart data, stacked: true %>
```
Discrete axis
```
<%= line_chart data, discrete: true %>
```
Label (for single series)
```
<%= line_chart data, label: "Value" %>
```
Axis titles
```
<%= line_chart data, xtitle: "Time", ytitle: "Population" %>
```
Straight lines between points instead of a curve
```
<%= line_chart data, curve: false %>
```
Hide points
```
<%= line_chart data, points: false %>
```
Show or hide legend
```
<%= line_chart data, legend: false %>
```
Specify legend position
```
<%= line_chart data, legend: "bottom" %>
```
Donut chart
```
<%= pie_chart data, donut: true %>
```
Prefix, useful for currency - *Chart.js, Highcharts*
```
<%= line_chart data, prefix: "$" %>
```
Suffix, useful for percentages - *Chart.js, Highcharts*
```
<%= line_chart data, suffix: "%" %>
```
Set a thousands separator - *Chart.js, Highcharts*
```
<%= line_chart data, thousands: "," %>
```
Set a decimal separator - *Chart.js, Highcharts*
```
<%= line_chart data, decimal: "," %>
```
Set significant digits - *Chart.js, Highcharts*
```
<%= line_chart data, precision: 3 %>
```
Set rounding - *Chart.js, Highcharts*
```
<%= line_chart data, round: 2 %>
```
Show insignificant zeros, useful for currency - *Chart.js, Highcharts*
```
<%= line_chart data, round: 2, zeros: true %>
```
Friendly byte sizes - *Chart.js*
```
<%= line_chart data, bytes: true %>
```
Specify the message when data is loading
```
<%= line_chart data, loading: "Loading..." %>
```
Specify the message when data is empty
```
<%= line_chart data, empty: "No data" %>
```
Refresh data from a remote source every `n` seconds
```
<%= line_chart url, refresh: 60 %>
```
You can pass options directly to the charting library with:
```
<%= line_chart data, library: {backgroundColor: "#eee"} %>
```
See the documentation for [Chart.js](https://www.chartjs.org/docs/), [Google Charts](https://developers.google.com/chart/interactive/docs/gallery), and [Highcharts](https://api.highcharts.com/highcharts) for more info.
To customize datasets in Chart.js, use:
```
<%= line_chart data, dataset: {borderWidth: 10} %>
```
You can pass this option to individual series as well.
### Global Options
To set options for all of your charts, create an initializer `config/initializers/chartkick.rb` with:
```
Chartkick.options = {
  height: "400px",
  colors: ["#b00", "#666"]
}
```
Customize the html
```
Chartkick.options[:html] = '<div id="%{id}" style="height: %{height};">%{loading}</div>'
```
You can capture the JavaScript in a content block with:
```
Chartkick.options[:content_for] = :charts_js
```
Then, in your layout, use:
```
<%= yield :charts_js %>
```
For Padrino, use `yield_content` instead of `yield`.
This is great for including all of your JavaScript at the bottom of the page.
### Multiple Series
You can pass a few options with a series:
* `name`
* `data`
* `color`
* `dataset` - *Chart.js only*
* `points` - *Chart.js only*
* `curve` - *Chart.js only*
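Putting these together, a multi-series call might look like this (the series names and the `data_a`/`data_b` variables are made up for illustration):
```
<%= line_chart [
  {name: "Series A", data: data_a, color: "#b00"},
  {name: "Series B", data: data_b, points: false}
] %>
```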
### Code
If you want to use the charting library directly, get the code with:
```
<%= line_chart data, code: true %>
```
The code will be logged to the JavaScript console. JavaScript functions cannot be logged, so it may not be identical.
### Download Charts
*Chart.js only*
Give users the ability to download charts. It all happens in the browser - no server-side code needed.
```
<%= line_chart data, download: true %>
```
Safari will open the image in a new window instead of downloading it.
Set the filename
```
<%= line_chart data, download: {filename: "boom"} %>
```
Set the background color
```
<%= line_chart data, download: {background: "#ffffff"} %>
```
Set the title
```
<%= line_chart data, title: "Awesome chart" %>
```
Additional Charting Libraries
---
* [Google Charts](#google-charts)
* [Highcharts](#highcharts)
### Google Charts
In your layout or views, add:
```
<%= javascript_include_tag "https://www.gstatic.com/charts/loader.js" %>
```
For Importmap (Rails 7 default), in `config/importmap.rb`, add:
```
pin "chartkick", to: "chartkick.js"
```
And in `app/javascript/application.js`, add:
```
import "chartkick"
```
For Webpacker (Rails 6 default), run:
```
yarn add chartkick
```
And in `app/javascript/packs/application.js`, add:
```
import "chartkick"
```
For Sprockets, in `app/assets/javascripts/application.js`, add:
```
//= require chartkick
```
To specify a language or Google Maps API key, use:
```
Chartkick.configure({language: "de", mapsApiKey: "..."})
```
before your charts.
### Highcharts
For Importmap (Rails 7 default), run:
```
bin/importmap pin highcharts --download
```
And in `config/importmap.rb`, add:
```
pin "chartkick", to: "chartkick.js"
```
And in `app/javascript/application.js`, add:
```
import "chartkick"
import Highcharts from "highcharts"
window.Highcharts = Highcharts
```
For Webpacker (Rails 6 default), run:
```
yarn add chartkick highcharts
```
And in `app/javascript/packs/application.js`, add:
```
import "chartkick/highcharts"
```
For Sprockets, download [highcharts.js](https://code.highcharts.com/highcharts.js) into `vendor/assets/javascripts` (or use `yarn add highcharts` in Rails 5.1+), and in `app/assets/javascripts/application.js`, add:
```
//= require chartkick
//= require highcharts
```
### Multiple Libraries
If more than one charting library is loaded, choose between them with:
```
<%= line_chart data, adapter: "google" %> <!-- or highcharts or chartjs -->
```
Sinatra and Padrino
---
Download [chartkick.js](https://raw.githubusercontent.com/ankane/chartkick/master/vendor/assets/javascripts/chartkick.js) and include it manually.
```
<script src="chartkick.js"></script>
```
Then include the charting library.
Chart.js - download [Chart.js](https://unpkg.com/chart.js@4/dist/chart.umd.js) and the [date-fns adapter bundle](https://unpkg.com/chartjs-adapter-date-fns@3/dist/chartjs-adapter-date-fns.bundle.js)
```
<script src="chart.js"></script>
<script src="chartjs-adapter-date-fns.bundle.js"></script>
```
Google Charts
```
<script src="https://www.gstatic.com/charts/loader.js"></script>
```
Highcharts - download [highcharts.js](https://code.highcharts.com/highcharts.js)
```
<script src="highcharts.js"></script>
```
JavaScript API
---
Access a chart with:
```
var chart = Chartkick.charts["chart-id"]
```
Get the underlying chart object with:
```
chart.getChartObject()
```
You can also use:
```
chart.getElement()
chart.getData()
chart.getOptions()
chart.getAdapter()
```
Update the data with:
```
chart.updateData(newData)
```
You can also specify new options:
```
chart.setOptions(newOptions)
// or chart.updateData(newData, newOptions)
```
Refresh the data from a remote source:
```
chart.refreshData()
```
Redraw the chart with:
```
chart.redraw()
```
Destroy the chart with:
```
chart.destroy()
```
Loop over charts with:
```
Chartkick.eachChart(function (chart) {
// do something
})
```
Content Security Policy (CSP)
---
Check out [how to configure CSP](https://github.com/ankane/chartkick/blob/master/guides/Content-Security-Policy.md)
Tutorials
---
* [Charts with Chartkick and Groupdate](https://gorails.com/episodes/charts-with-chartkick-and-groupdate)
* [Creando gráficos en Ruby on Rails con Chartkick y Chart.js](https://www.youtube.com/watch?v=W92AlkwQn3M)
* [Make Easy Graphs and Charts on Rails with Chartkick](https://www.sitepoint.com/make-easy-graphs-and-charts-on-rails-with-chartkick/)
* [Practical Graphs on Rails: Chartkick in Practice](https://www.sitepoint.com/graphs-on-rails-chartkick-in-practice/)
Upgrading
---
### 5.0
If you use Importmap or Sprockets, update the gem and you’re good to go!
If you use esbuild, Webpack, or Webpacker, run:
```
yarn upgrade chartkick --latest
```
If you use Chart.js with esbuild, Webpack, or Webpacker, also run:
```
yarn upgrade chart.js --latest
```
History
---
View the [changelog](https://github.com/ankane/chartkick/blob/master/CHANGELOG.md)
Contributing
---
Everyone is encouraged to help improve this project. Here are a few ways you can help:
* [Report bugs](https://github.com/ankane/chartkick/issues)
* Fix bugs and [submit pull requests](https://github.com/ankane/chartkick/pulls)
* Write, clarify, or fix documentation
* Suggest or add new features
To get started with development:
```
git clone https://github.com/ankane/chartkick.git
cd chartkick
bundle install
bundle exec rake test
```
github.com/iskaj/go-astisub

README
[¶](#section-readme)
---
[](http://goreportcard.com/report/github.com/asticode/go-astisub)
[](https://godoc.org/github.com/asticode/go-astisub)
[](https://github.com/asticode/go-astisub/actions/workflows/test.yml)
[](https://coveralls.io/github/asticode/go-astisub)
This is a Golang library to manipulate subtitles.
It allows you to manipulate `srt`, `stl`, `ttml`, `ssa/ass`, `webvtt` and `teletext` files for now.
Available operations are `parsing`, `writing`, `applying linear correction`, `syncing`, `fragmenting`, `unfragmenting`, `merging` and `optimizing`.
### Installation
To install the library:
```
go get github.com/asticode/go-astisub
```
To install the CLI:
```
go install github.com/asticode/go-astisub/astisub
```
### Using the library in your code
WARNING: the code below doesn't handle errors for readability purposes. However, you SHOULD!
```
// Open subtitles
s1, _ := astisub.OpenFile("/path/to/example.ttml")
s2, _ := astisub.ReadFromSRT(bytes.NewReader([]byte("1\n00:01:00.000 --> 00:02:00.000\nCredits")))

// Add a duration to every subtitle (syncing)
s1.Add(-2*time.Second)

// Fragment the subtitles
s1.Fragment(2*time.Second)

// Merge subtitles
s1.Merge(s2)

// Optimize subtitles
s1.Optimize()

// Unfragment the subtitles
s1.Unfragment()

// Apply linear correction
s1.ApplyLinearCorrection(1*time.Second, 2*time.Second, 5*time.Second, 7*time.Second)

// Write subtitles
s1.Write("/path/to/example.srt")
var buf = &bytes.Buffer{}
s2.WriteToTTML(buf)
```
### Using the CLI
If **astisub** has been installed properly, you can:
* convert any type of subtitle to any other type of subtitle:
```
astisub convert -i example.srt -o example.ttml
```
* apply linear correction to any type of subtitle:
```
astisub apply-linear-correction -i example.srt -a1 1s -d1 2s -a2 5s -d2 7s -o example.out.srt
```
* fragment any type of subtitle:
```
astisub fragment -i example.srt -f 2s -o example.out.srt
```
* merge any type of subtitle into any other type of subtitle:
```
astisub merge -i example.srt -i example.ttml -o example.out.srt
```
* optimize any type of subtitle:
```
astisub optimize -i example.srt -o example.out.srt
```
* unfragment any type of subtitle:
```
astisub unfragment -i example.srt -o example.out.srt
```
* sync any type of subtitle:
```
astisub sync -i example.srt -s "-2s" -o example.out.srt
```
### Features and roadmap
* parsing
* writing
* syncing
* fragmenting/unfragmenting
* merging
* ordering
* optimizing
* linear correction
* .srt
* .ttml
* .vtt
* .stl
* .ssa/.ass
* .teletext
* .smi
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [type Color](#Color)
* + [func (c *Color) SSAString() string](#Color.SSAString)
+ [func (c *Color) TTMLString() string](#Color.TTMLString)
* [type Item](#Item)
* + [func (i Item) String() string](#Item.String)
* [type Justification](#Justification)
* [type Line](#Line)
* + [func (l Line) String() string](#Line.String)
* [type LineItem](#LineItem)
* + [func (li LineItem) STLString() string](#LineItem.STLString)
* [type Metadata](#Metadata)
* [type Options](#Options)
* [type Region](#Region)
* [type SSAOptions](#SSAOptions)
* [type STLOptions](#STLOptions)
* [type STLPosition](#STLPosition)
* [type Style](#Style)
* [type StyleAttributes](#StyleAttributes)
* [type Subtitles](#Subtitles)
* + [func NewSubtitles() *Subtitles](#NewSubtitles)
+ [func Open(o Options) (s *Subtitles, err error)](#Open)
+ [func OpenFile(filename string) (*Subtitles, error)](#OpenFile)
+ [func ReadFromSRT(i io.Reader) (o *Subtitles, err error)](#ReadFromSRT)
+ [func ReadFromSSA(i io.Reader) (o *Subtitles, err error)](#ReadFromSSA)
+ [func ReadFromSSAWithOptions(i io.Reader, opts SSAOptions) (o *Subtitles, err error)](#ReadFromSSAWithOptions)
+ [func ReadFromSTL(i io.Reader, opts STLOptions) (o *Subtitles, err error)](#ReadFromSTL)
+ [func ReadFromTTML(i io.Reader) (o *Subtitles, err error)](#ReadFromTTML)
+ [func ReadFromTeletext(r io.Reader, o TeletextOptions) (s *Subtitles, err error)](#ReadFromTeletext)
+ [func ReadFromWebVTT(i io.Reader) (o *Subtitles, err error)](#ReadFromWebVTT)
* + [func (s *Subtitles) Add(d time.Duration)](#Subtitles.Add)
+ [func (s *Subtitles) ApplyLinearCorrection(actual1, desired1, actual2, desired2 time.Duration)](#Subtitles.ApplyLinearCorrection)
+ [func (s Subtitles) Duration() time.Duration](#Subtitles.Duration)
+ [func (s *Subtitles) ForceDuration(d time.Duration, addDummyItem bool)](#Subtitles.ForceDuration)
+ [func (s *Subtitles) Fragment(f time.Duration)](#Subtitles.Fragment)
+ [func (s Subtitles) IsEmpty() bool](#Subtitles.IsEmpty)
+ [func (s *Subtitles) Merge(i *Subtitles)](#Subtitles.Merge)
+ [func (s *Subtitles) Optimize()](#Subtitles.Optimize)
+ [func (s *Subtitles) Order()](#Subtitles.Order)
+ [func (s *Subtitles) RemoveStyling()](#Subtitles.RemoveStyling)
+ [func (s *Subtitles) Unfragment()](#Subtitles.Unfragment)
+ [func (s Subtitles) Write(dst string) (err error)](#Subtitles.Write)
+ [func (s Subtitles) WriteToSRT(o io.Writer) (err error)](#Subtitles.WriteToSRT)
+ [func (s Subtitles) WriteToSSA(o io.Writer) (err error)](#Subtitles.WriteToSSA)
+ [func (s Subtitles) WriteToSTL(o io.Writer) (err error)](#Subtitles.WriteToSTL)
+ [func (s Subtitles) WriteToTTML(o io.Writer) (err error)](#Subtitles.WriteToTTML)
+ [func (s Subtitles) WriteToWebVTT(o io.Writer) (err error)](#Subtitles.WriteToWebVTT)
* [type TTMLIn](#TTMLIn)
* [type TTMLInDuration](#TTMLInDuration)
* + [func (d *TTMLInDuration) UnmarshalText(i []byte) (err error)](#TTMLInDuration.UnmarshalText)
* [type TTMLInHeader](#TTMLInHeader)
* [type TTMLInItem](#TTMLInItem)
* [type TTMLInItems](#TTMLInItems)
* + [func (i *TTMLInItems) UnmarshalXML(d *xml.Decoder, start xml.StartElement) (err error)](#TTMLInItems.UnmarshalXML)
* [type TTMLInMetadata](#TTMLInMetadata)
* [type TTMLInRegion](#TTMLInRegion)
* [type TTMLInStyle](#TTMLInStyle)
* [type TTMLInStyleAttributes](#TTMLInStyleAttributes)
* [type TTMLInSubtitle](#TTMLInSubtitle)
* [type TTMLOut](#TTMLOut)
* [type TTMLOutDuration](#TTMLOutDuration)
* + [func (t TTMLOutDuration) MarshalText() ([]byte, error)](#TTMLOutDuration.MarshalText)
* [type TTMLOutHeader](#TTMLOutHeader)
* [type TTMLOutItem](#TTMLOutItem)
* [type TTMLOutMetadata](#TTMLOutMetadata)
* [type TTMLOutRegion](#TTMLOutRegion)
* [type TTMLOutStyle](#TTMLOutStyle)
* [type TTMLOutStyleAttributes](#TTMLOutStyleAttributes)
* [type TTMLOutSubtitle](#TTMLOutSubtitle)
* [type TeletextOptions](#TeletextOptions)
### Constants [¶](#pkg-constants)
```
const (
LanguageChinese = "chinese"
LanguageEnglish = "english"
LanguageFrench = "french"
LanguageJapanese = "japanese"
LanguageNorwegian = "norwegian"
)
```
Languages
### Variables [¶](#pkg-variables)
```
var (
ColorBlack = &[Color](#Color){}
ColorBlue = &[Color](#Color){Blue: 255}
ColorCyan = &[Color](#Color){Blue: 255, Green: 255}
ColorGray = &[Color](#Color){Blue: 128, Green: 128, Red: 128}
ColorGreen = &[Color](#Color){Green: 128}
ColorLime = &[Color](#Color){Green: 255}
ColorMagenta = &[Color](#Color){Blue: 255, Red: 255}
ColorMaroon = &[Color](#Color){Red: 128}
ColorNavy = &[Color](#Color){Blue: 128}
ColorOlive = &[Color](#Color){Green: 128, Red: 128}
ColorPurple = &[Color](#Color){Blue: 128, Red: 128}
ColorRed = &[Color](#Color){Red: 255}
ColorSilver = &[Color](#Color){Blue: 192, Green: 192, Red: 192}
ColorTeal = &[Color](#Color){Blue: 128, Green: 128}
ColorYellow = &[Color](#Color){Green: 255, Red: 255}
ColorWhite = &[Color](#Color){Blue: 255, Green: 255, Red: 255}
)
```
Colors
```
var (
ErrInvalidExtension = [errors](/errors).[New](/errors#New)("astisub: invalid extension")
ErrNoSubtitlesToWrite = [errors](/errors).[New](/errors#New)("astisub: no subtitles to write")
)
```
Errors
```
var (
JustificationUnchanged = [Justification](#Justification)(1)
JustificationLeft = [Justification](#Justification)(2)
JustificationCentered = [Justification](#Justification)(3)
JustificationRight = [Justification](#Justification)(4)
)
```
```
var (
BytesBOM = [][byte](/builtin#byte){239, 187, 191}
)
```
Bytes
```
var (
ErrNoValidTeletextPID = [errors](/errors).[New](/errors#New)("astisub: no valid teletext PID")
)
```
Errors
```
var Now = func() [time](/time).[Time](/time#Time) {
return [time](/time).[Now](/time#Now)()
}
```
Now allows testing functions using it
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [Color](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L135) [¶](#Color)
```
type Color struct {
Alpha, Blue, Green, Red [uint8](/builtin#uint8)
}
```
Color represents a color
####
func (*Color) [SSAString](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L156) [¶](#Color.SSAString)
```
func (c *[Color](#Color)) SSAString() [string](/builtin#string)
```
SSAString expresses the color as an SSA string
####
func (*Color) [TTMLString](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L161) [¶](#Color.TTMLString)
```
func (c *[Color](#Color)) TTMLString() [string](/builtin#string)
```
TTMLString expresses the color as a TTML string
####
type [Item](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L114) [¶](#Item)
```
type Item struct {
Comments [][string](/builtin#string)
Index [int](/builtin#int)
EndAt [time](/time).[Duration](/time#Duration)
InlineStyle *[StyleAttributes](#StyleAttributes)
Lines [][Line](#Line)
Region *[Region](#Region)
StartAt [time](/time).[Duration](/time#Duration)
Style *[Style](#Style)
}
```
Item represents a text to show between 2 time boundaries with formatting
####
func (Item) [String](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L126) [¶](#Item.String)
```
func (i [Item](#Item)) String() [string](/builtin#string)
```
String implements the Stringer interface
####
type [Justification](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L165) [¶](#Justification)
added in v0.12.0
```
type Justification [int](/builtin#int)
```
####
type [Line](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L381) [¶](#Line)
```
type Line struct {
Items [][LineItem](#LineItem)
VoiceName [string](/builtin#string)
}
```
Line represents a set of formatted line items
####
func (Line) [String](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L387) [¶](#Line.String)
```
func (l [Line](#Line)) String() [string](/builtin#string)
```
String implements the Stringer interface
####
type [LineItem](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L396) [¶](#LineItem)
```
type LineItem struct {
InlineStyle *[StyleAttributes](#StyleAttributes)
Style *[Style](#Style)
Text [string](/builtin#string)
}
```
LineItem represents a formatted line item
####
func (LineItem) [STLString](https://github.com/asticode/go-astisub/blob/v0.26.0/stl.go#L723) [¶](#LineItem.STLString)
added in v0.12.0
```
func (li [LineItem](#LineItem)) STLString() [string](/builtin#string)
```
####
type [Metadata](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L328) [¶](#Metadata)
```
type Metadata struct {
Comments [][string](/builtin#string)
Framerate [int](/builtin#int)
Language [string](/builtin#string)
SSACollisions [string](/builtin#string)
SSAOriginalEditing [string](/builtin#string)
SSAOriginalScript [string](/builtin#string)
SSAOriginalTiming [string](/builtin#string)
SSAOriginalTranslation [string](/builtin#string)
SSAPlayDepth *[int](/builtin#int)
SSAPlayResX, SSAPlayResY *[int](/builtin#int)
SSAScriptType [string](/builtin#string)
SSAScriptUpdatedBy [string](/builtin#string)
SSASynchPoint [string](/builtin#string)
SSATimer *[float64](/builtin#float64)
SSAUpdateDetails [string](/builtin#string)
SSAWrapStyle [string](/builtin#string)
STLCountryOfOrigin [string](/builtin#string)
STLCreationDate *[time](/time).[Time](/time#Time)
STLDisplayStandardCode [string](/builtin#string)
STLEditorContactDetails [string](/builtin#string)
STLEditorName [string](/builtin#string)
STLMaximumNumberOfDisplayableCharactersInAnyTextRow *[int](/builtin#int)
STLMaximumNumberOfDisplayableRows *[int](/builtin#int)
STLOriginalEpisodeTitle [string](/builtin#string)
STLPublisher [string](/builtin#string)
STLRevisionDate *[time](/time).[Time](/time#Time)
STLRevisionNumber [int](/builtin#int)
STLSubtitleListReferenceCode [string](/builtin#string)
STLTimecodeStartOfProgramme [time](/time).[Duration](/time#Duration)
STLTranslatedEpisodeTitle [string](/builtin#string)
STLTranslatedProgramTitle [string](/builtin#string)
STLTranslatorContactDetails [string](/builtin#string)
STLTranslatorName [string](/builtin#string)
Title [string](/builtin#string)
TTMLCopyright [string](/builtin#string)
}
```
Metadata represents metadata

TODO Merge attributes
####
type [Options](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L56) [¶](#Options)
```
type Options struct {
Filename [string](/builtin#string)
Teletext [TeletextOptions](#TeletextOptions)
STL [STLOptions](#STLOptions)
}
```
Options represents open or write options
####
type [Region](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L367) [¶](#Region)
```
type Region struct {
ID [string](/builtin#string)
InlineStyle *[StyleAttributes](#StyleAttributes)
Style *[Style](#Style)
}
```
Region represents a subtitle's region
####
type [SSAOptions](https://github.com/asticode/go-astisub/blob/v0.26.0/ssa.go#L1285) [¶](#SSAOptions)
added in v0.20.0
```
type SSAOptions struct {
OnUnknownSectionName func(name [string](/builtin#string))
OnInvalidLine func(line [string](/builtin#string))
}
```
SSAOptions
####
type [STLOptions](https://github.com/asticode/go-astisub/blob/v0.26.0/stl.go#L180) [¶](#STLOptions)
added in v0.15.0
```
type STLOptions struct {
// IgnoreTimecodeStartOfProgramme - set STLTimecodeStartOfProgramme to zero before parsing
IgnoreTimecodeStartOfProgramme [bool](/builtin#bool)
}
```
STLOptions represents STL parsing options
####
type [STLPosition](https://github.com/asticode/go-astisub/blob/v0.26.0/stl.go#L173) [¶](#STLPosition)
added in v0.12.0
```
type STLPosition struct {
VerticalPosition [int](/builtin#int)
MaxRows [int](/builtin#int)
Rows [int](/builtin#int)
}
```
####
type [Style](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L374) [¶](#Style)
```
type Style struct {
ID [string](/builtin#string)
InlineStyle *[StyleAttributes](#StyleAttributes)
Style *[Style](#Style)
}
```
Style represents a subtitle's style
####
type [StyleAttributes](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L175) [¶](#StyleAttributes)
```
type StyleAttributes struct {
SSAAlignment *[int](/builtin#int)
SSAAlphaLevel *[float64](/builtin#float64)
SSAAngle *[float64](/builtin#float64) // degrees
SSABackColour *[Color](#Color)
SSABold *[bool](/builtin#bool)
SSABorderStyle *[int](/builtin#int)
SSAEffect [string](/builtin#string)
SSAEncoding *[int](/builtin#int)
SSAFontName [string](/builtin#string)
SSAFontSize *[float64](/builtin#float64)
SSAItalic *[bool](/builtin#bool)
SSALayer *[int](/builtin#int)
SSAMarginLeft *[int](/builtin#int) // pixels
SSAMarginRight *[int](/builtin#int) // pixels
SSAMarginVertical *[int](/builtin#int) // pixels
SSAMarked *[bool](/builtin#bool)
SSAOutline *[float64](/builtin#float64) // pixels
SSAOutlineColour *[Color](#Color)
SSAPrimaryColour *[Color](#Color)
SSAScaleX *[float64](/builtin#float64) // %
SSAScaleY *[float64](/builtin#float64) // %
SSASecondaryColour *[Color](#Color)
SSAShadow *[float64](/builtin#float64) // pixels
SSASpacing *[float64](/builtin#float64) // pixels
SSAStrikeout *[bool](/builtin#bool)
SSAUnderline *[bool](/builtin#bool)
STLBoxing *[bool](/builtin#bool)
STLItalics *[bool](/builtin#bool)
STLJustification *[Justification](#Justification)
STLPosition *[STLPosition](#STLPosition)
STLUnderline *[bool](/builtin#bool)
TeletextColor *[Color](#Color)
TeletextDoubleHeight *[bool](/builtin#bool)
TeletextDoubleSize *[bool](/builtin#bool)
TeletextDoubleWidth *[bool](/builtin#bool)
TeletextSpacesAfter *[int](/builtin#int)
TeletextSpacesBefore *[int](/builtin#int)
// TODO Use pointers with real types below
TTMLBackgroundColor *[string](/builtin#string) // <https://htmlcolorcodes.com/fr/>
TTMLColor *[string](/builtin#string)
TTMLDirection *[string](/builtin#string)
TTMLDisplay *[string](/builtin#string)
TTMLDisplayAlign *[string](/builtin#string)
TTMLExtent *[string](/builtin#string)
TTMLFontFamily *[string](/builtin#string)
TTMLFontSize *[string](/builtin#string)
TTMLFontStyle *[string](/builtin#string)
TTMLFontWeight *[string](/builtin#string)
TTMLLineHeight *[string](/builtin#string)
TTMLOpacity *[string](/builtin#string)
TTMLOrigin *[string](/builtin#string)
TTMLOverflow *[string](/builtin#string)
TTMLPadding *[string](/builtin#string)
TTMLShowBackground *[string](/builtin#string)
TTMLTextAlign *[string](/builtin#string)
TTMLTextDecoration *[string](/builtin#string)
TTMLTextOutline *[string](/builtin#string)
TTMLUnicodeBidi *[string](/builtin#string)
TTMLVisibility *[string](/builtin#string)
TTMLWrapOption *[string](/builtin#string)
TTMLWritingMode *[string](/builtin#string)
TTMLZIndex *[int](/builtin#int)
WebVTTAlign [string](/builtin#string)
WebVTTItalics [bool](/builtin#bool)
WebVTTLine [string](/builtin#string)
WebVTTLines [int](/builtin#int)
WebVTTPosition [string](/builtin#string)
WebVTTRegionAnchor [string](/builtin#string)
WebVTTScroll [string](/builtin#string)
WebVTTSize [string](/builtin#string)
WebVTTVertical [string](/builtin#string)
WebVTTViewportAnchor [string](/builtin#string)
WebVTTWidth [string](/builtin#string)
}
```
StyleAttributes represents style attributes
####
type [Subtitles](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L98) [¶](#Subtitles)
```
type Subtitles struct {
Items []*[Item](#Item)
Metadata *[Metadata](#Metadata)
Regions map[[string](/builtin#string)]*[Region](#Region)
Styles map[[string](/builtin#string)]*[Style](#Style)
}
```
Subtitles represents an ordered list of items with formatting
####
func [NewSubtitles](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L106) [¶](#NewSubtitles)
```
func NewSubtitles() *[Subtitles](#Subtitles)
```
NewSubtitles creates new subtitles
####
func [Open](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L63) [¶](#Open)
```
func Open(o [Options](#Options)) (s *[Subtitles](#Subtitles), err [error](/builtin#error))
```
Open opens a subtitle reader based on options
####
func [OpenFile](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L93) [¶](#OpenFile)
```
func OpenFile(filename [string](/builtin#string)) (*[Subtitles](#Subtitles), [error](/builtin#error))
```
OpenFile opens a file regardless of other options
####
func [ReadFromSRT](https://github.com/asticode/go-astisub/blob/v0.26.0/srt.go#L33) [¶](#ReadFromSRT)
```
func ReadFromSRT(i [io](/io).[Reader](/io#Reader)) (o *[Subtitles](#Subtitles), err [error](/builtin#error))
```
ReadFromSRT parses an .srt content
####
func [ReadFromSSA](https://github.com/asticode/go-astisub/blob/v0.26.0/ssa.go#L136) [¶](#ReadFromSSA)
```
func ReadFromSSA(i [io](/io).[Reader](/io#Reader)) (o *[Subtitles](#Subtitles), err [error](/builtin#error))
```
ReadFromSSA parses an .ssa content
####
func [ReadFromSSAWithOptions](https://github.com/asticode/go-astisub/blob/v0.26.0/ssa.go#L142) [¶](#ReadFromSSAWithOptions)
added in v0.20.0
```
func ReadFromSSAWithOptions(i [io](/io).[Reader](/io#Reader), opts [SSAOptions](#SSAOptions)) (o *[Subtitles](#Subtitles), err [error](/builtin#error))
```
ReadFromSSAWithOptions parses an .ssa content
####
func [ReadFromSTL](https://github.com/asticode/go-astisub/blob/v0.26.0/stl.go#L186) [¶](#ReadFromSTL)
```
func ReadFromSTL(i [io](/io).[Reader](/io#Reader), opts [STLOptions](#STLOptions)) (o *[Subtitles](#Subtitles), err [error](/builtin#error))
```
ReadFromSTL parses an .stl content
####
func [ReadFromTTML](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L297) [¶](#ReadFromTTML)
```
func ReadFromTTML(i [io](/io).[Reader](/io#Reader)) (o *[Subtitles](#Subtitles), err [error](/builtin#error))
```
ReadFromTTML parses a .ttml content
####
func [ReadFromTeletext](https://github.com/asticode/go-astisub/blob/v0.26.0/teletext.go#L335) [¶](#ReadFromTeletext)
```
func ReadFromTeletext(r [io](/io).[Reader](/io#Reader), o [TeletextOptions](#TeletextOptions)) (s *[Subtitles](#Subtitles), err [error](/builtin#error))
```
ReadFromTeletext parses a teletext content
<http://www.etsi.org/deliver/etsi_en/300400_300499/300472/01.03.01_60/en_300472v010301p.pdf>
<http://www.etsi.org/deliver/etsi_i_ets/300700_300799/300706/01_60/ets_300706e01p.pdf>
TODO Update README
TODO Add tests
####
func [ReadFromWebVTT](https://github.com/asticode/go-astisub/blob/v0.26.0/webvtt.go#L89) [¶](#ReadFromWebVTT)
```
func ReadFromWebVTT(i [io](/io).[Reader](/io#Reader)) (o *[Subtitles](#Subtitles), err [error](/builtin#error))
```
ReadFromWebVTT parses a .vtt content

TODO Tags (u, i, b)
TODO Class
####
func (*Subtitles) [Add](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L403) [¶](#Subtitles.Add)
```
func (s *[Subtitles](#Subtitles)) Add(d [time](/time).[Duration](/time#Duration))
```
Add adds a duration to each time boundary. As in the time package, the duration can be negative.
####
func (*Subtitles) [ApplyLinearCorrection](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L656) [¶](#Subtitles.ApplyLinearCorrection)
added in v0.21.0
```
func (s *[Subtitles](#Subtitles)) ApplyLinearCorrection(actual1, desired1, actual2, desired2 [time](/time).[Duration](/time#Duration))
```
ApplyLinearCorrection applies linear correction
####
func (Subtitles) [Duration](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L417) [¶](#Subtitles.Duration)
```
func (s [Subtitles](#Subtitles)) Duration() [time](/time).[Duration](/time#Duration)
```
Duration returns the subtitles duration
####
func (*Subtitles) [ForceDuration](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L427) [¶](#Subtitles.ForceDuration)
```
func (s *[Subtitles](#Subtitles)) ForceDuration(d [time](/time).[Duration](/time#Duration), addDummyItem [bool](/builtin#bool))
```
ForceDuration updates the subtitles duration.
If requested duration is bigger, then we create a dummy item.
If requested duration is smaller, then we remove useless items and we cut the last item or add a dummy item.
####
func (*Subtitles) [Fragment](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L460) [¶](#Subtitles.Fragment)
```
func (s *[Subtitles](#Subtitles)) Fragment(f [time](/time).[Duration](/time#Duration))
```
Fragment fragments subtitles with a specific fragment duration
####
func (Subtitles) [IsEmpty](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L515) [¶](#Subtitles.IsEmpty)
```
func (s [Subtitles](#Subtitles)) IsEmpty() [bool](/builtin#bool)
```
IsEmpty returns whether the subtitles are empty
####
func (*Subtitles) [Merge](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L520) [¶](#Subtitles.Merge)
```
func (s *[Subtitles](#Subtitles)) Merge(i *[Subtitles](#Subtitles))
```
Merge merges subtitles i into subtitles
####
func (*Subtitles) [Optimize](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L541) [¶](#Subtitles.Optimize)
```
func (s *[Subtitles](#Subtitles)) Optimize()
```
Optimize optimizes subtitles
####
func (*Subtitles) [Order](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L598) [¶](#Subtitles.Order)
```
func (s *[Subtitles](#Subtitles)) Order()
```
Order orders items
####
func (*Subtitles) [RemoveStyling](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L611) [¶](#Subtitles.RemoveStyling)
```
func (s *[Subtitles](#Subtitles)) RemoveStyling()
```
RemoveStyling removes the styling from the subtitles
####
func (*Subtitles) [Unfragment](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L628) [¶](#Subtitles.Unfragment)
```
func (s *[Subtitles](#Subtitles)) Unfragment()
```
Unfragment unfragments subtitles
####
func (Subtitles) [Write](https://github.com/asticode/go-astisub/blob/v0.26.0/subtitles.go#L669) [¶](#Subtitles.Write)
```
func (s [Subtitles](#Subtitles)) Write(dst [string](/builtin#string)) (err [error](/builtin#error))
```
Write writes subtitles to a file
####
func (Subtitles) [WriteToSRT](https://github.com/asticode/go-astisub/blob/v0.26.0/srt.go#L124) [¶](#Subtitles.WriteToSRT)
```
func (s [Subtitles](#Subtitles)) WriteToSRT(o [io](/io).[Writer](/io#Writer)) (err [error](/builtin#error))
```
WriteToSRT writes subtitles in .srt format
####
func (Subtitles) [WriteToSSA](https://github.com/asticode/go-astisub/blob/v0.26.0/ssa.go#L1205) [¶](#Subtitles.WriteToSSA)
```
func (s [Subtitles](#Subtitles)) WriteToSSA(o [io](/io).[Writer](/io#Writer)) (err [error](/builtin#error))
```
WriteToSSA writes subtitles in .ssa format
####
func (Subtitles) [WriteToSTL](https://github.com/asticode/go-astisub/blob/v0.26.0/stl.go#L914) [¶](#Subtitles.WriteToSTL)
```
func (s [Subtitles](#Subtitles)) WriteToSTL(o [io](/io).[Writer](/io#Writer)) (err [error](/builtin#error))
```
WriteToSTL writes subtitles in .stl format
####
func (Subtitles) [WriteToTTML](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L563) [¶](#Subtitles.WriteToTTML)
```
func (s [Subtitles](#Subtitles)) WriteToTTML(o [io](/io).[Writer](/io#Writer)) (err [error](/builtin#error))
```
WriteToTTML writes subtitles in .ttml format
####
func (Subtitles) [WriteToWebVTT](https://github.com/asticode/go-astisub/blob/v0.26.0/webvtt.go#L352) [¶](#Subtitles.WriteToWebVTT)
```
func (s [Subtitles](#Subtitles)) WriteToWebVTT(o [io](/io).[Writer](/io#Writer)) (err [error](/builtin#error))
```
WriteToWebVTT writes subtitles in .vtt format
####
type [TTMLIn](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L45) [¶](#TTMLIn)
```
type TTMLIn struct {
Framerate [int](/builtin#int) `xml:"frameRate,attr"`
Lang [string](/builtin#string) `xml:"lang,attr"`
Metadata [TTMLInMetadata](#TTMLInMetadata) `xml:"head>metadata"`
Regions [][TTMLInRegion](#TTMLInRegion) `xml:"head>layout>region"`
Styles [][TTMLInStyle](#TTMLInStyle) `xml:"head>styling>style"`
Subtitles [][TTMLInSubtitle](#TTMLInSubtitle) `xml:"body>div>p"`
Tickrate [int](/builtin#int) `xml:"tickRate,attr"`
XMLName [xml](/encoding/xml).[Name](/encoding/xml#Name) `xml:"tt"`
}
```
TTMLIn represents an input TTML that must be unmarshaled. It is split from the output TTML because strict namespaces can't be added without breaking backwards compatibility
####
type [TTMLInDuration](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L209) [¶](#TTMLInDuration)
```
type TTMLInDuration struct {
// contains filtered or unexported fields
}
```
TTMLInDuration represents an input TTML duration
####
func (*TTMLInDuration) [UnmarshalText](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L220) [¶](#TTMLInDuration.UnmarshalText)
```
func (d *[TTMLInDuration](#TTMLInDuration)) UnmarshalText(i [][byte](/builtin#byte)) (err [error](/builtin#error))
```
UnmarshalText implements the TextUnmarshaler interface. Possible formats are:
- hh:mm:ss.mmm
- hh:mm:ss:fff (fff being frames)
- [ticks]t ([ticks] being the tick amount)
####
type [TTMLInHeader](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L136) [¶](#TTMLInHeader)
```
type TTMLInHeader struct {
ID [string](/builtin#string) `xml:"id,attr,omitempty"`
Style [string](/builtin#string) `xml:"style,attr,omitempty"`
[TTMLInStyleAttributes](#TTMLInStyleAttributes)
}
```
TTMLInHeader represents an input TTML header
####
type [TTMLInItem](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L201) [¶](#TTMLInItem)
```
type TTMLInItem struct {
Style [string](/builtin#string) `xml:"style,attr,omitempty"`
Text [string](/builtin#string) `xml:",chardata"`
[TTMLInStyleAttributes](#TTMLInStyleAttributes)
XMLName [xml](/encoding/xml).[Name](/encoding/xml#Name)
}
```
TTMLInItem represents an input TTML item
####
type [TTMLInItems](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L166) [¶](#TTMLInItems)
```
type TTMLInItems [][TTMLInItem](#TTMLInItem)
```
TTMLInItems represents input TTML items
####
func (*TTMLInItems) [UnmarshalXML](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L169) [¶](#TTMLInItems.UnmarshalXML)
```
func (i *[TTMLInItems](#TTMLInItems)) UnmarshalXML(d *[xml](/encoding/xml).[Decoder](/encoding/xml#Decoder), start [xml](/encoding/xml).[StartElement](/encoding/xml#StartElement)) (err [error](/builtin#error))
```
UnmarshalXML implements the XML unmarshaler interface
####
type [TTMLInMetadata](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L70) [¶](#TTMLInMetadata)
```
type TTMLInMetadata struct {
Copyright [string](/builtin#string) `xml:"copyright"`
Title [string](/builtin#string) `xml:"title"`
}
```
TTMLInMetadata represents an input TTML Metadata
####
type [TTMLInRegion](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L143) [¶](#TTMLInRegion)
```
type TTMLInRegion struct {
[TTMLInHeader](#TTMLInHeader)
XMLName [xml](/encoding/xml).[Name](/encoding/xml#Name) `xml:"region"`
}
```
TTMLInRegion represents an input TTML region
####
type [TTMLInStyle](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L149) [¶](#TTMLInStyle)
```
type TTMLInStyle struct {
[TTMLInHeader](#TTMLInHeader)
XMLName [xml](/encoding/xml).[Name](/encoding/xml#Name) `xml:"style"`
}
```
TTMLInStyle represents an input TTML style
####
type [TTMLInStyleAttributes](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L76) [¶](#TTMLInStyleAttributes)
```
type TTMLInStyleAttributes struct {
BackgroundColor *[string](/builtin#string) `xml:"backgroundColor,attr,omitempty"`
Color *[string](/builtin#string) `xml:"color,attr,omitempty"`
Direction *[string](/builtin#string) `xml:"direction,attr,omitempty"`
Display *[string](/builtin#string) `xml:"display,attr,omitempty"`
DisplayAlign *[string](/builtin#string) `xml:"displayAlign,attr,omitempty"`
Extent *[string](/builtin#string) `xml:"extent,attr,omitempty"`
FontFamily *[string](/builtin#string) `xml:"fontFamily,attr,omitempty"`
FontSize *[string](/builtin#string) `xml:"fontSize,attr,omitempty"`
FontStyle *[string](/builtin#string) `xml:"fontStyle,attr,omitempty"`
FontWeight *[string](/builtin#string) `xml:"fontWeight,attr,omitempty"`
LineHeight *[string](/builtin#string) `xml:"lineHeight,attr,omitempty"`
Opacity *[string](/builtin#string) `xml:"opacity,attr,omitempty"`
Origin *[string](/builtin#string) `xml:"origin,attr,omitempty"`
Overflow *[string](/builtin#string) `xml:"overflow,attr,omitempty"`
Padding *[string](/builtin#string) `xml:"padding,attr,omitempty"`
ShowBackground *[string](/builtin#string) `xml:"showBackground,attr,omitempty"`
TextAlign *[string](/builtin#string) `xml:"textAlign,attr,omitempty"`
TextDecoration *[string](/builtin#string) `xml:"textDecoration,attr,omitempty"`
TextOutline *[string](/builtin#string) `xml:"textOutline,attr,omitempty"`
UnicodeBidi *[string](/builtin#string) `xml:"unicodeBidi,attr,omitempty"`
Visibility *[string](/builtin#string) `xml:"visibility,attr,omitempty"`
WrapOption *[string](/builtin#string) `xml:"wrapOption,attr,omitempty"`
WritingMode *[string](/builtin#string) `xml:"writingMode,attr,omitempty"`
ZIndex *[int](/builtin#int) `xml:"zIndex,attr,omitempty"`
}
```
TTMLInStyleAttributes represents input TTML style attributes
####
type [TTMLInSubtitle](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L155) [¶](#TTMLInSubtitle)
```
type TTMLInSubtitle struct {
Begin *[TTMLInDuration](#TTMLInDuration) `xml:"begin,attr,omitempty"`
End *[TTMLInDuration](#TTMLInDuration) `xml:"end,attr,omitempty"`
ID [string](/builtin#string) `xml:"id,attr,omitempty"`
Items [string](/builtin#string) `xml:",innerxml"` // We must store inner XML here since there's no tag to describe both any tag and chardata
Region [string](/builtin#string) `xml:"region,attr,omitempty"`
Style [string](/builtin#string) `xml:"style,attr,omitempty"`
[TTMLInStyleAttributes](#TTMLInStyleAttributes)
}
```
TTMLInSubtitle represents an input TTML subtitle
####
type [TTMLOut](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L438) [¶](#TTMLOut)
```
type TTMLOut struct {
Lang [string](/builtin#string) `xml:"xml:lang,attr,omitempty"`
Metadata *[TTMLOutMetadata](#TTMLOutMetadata) `xml:"head>metadata,omitempty"`
Styles [][TTMLOutStyle](#TTMLOutStyle) `xml:"head>styling>style,omitempty"` //!\\ Order is important! Keep Styling above Layout
Regions [][TTMLOutRegion](#TTMLOutRegion) `xml:"head>layout>region,omitempty"`
Subtitles [][TTMLOutSubtitle](#TTMLOutSubtitle) `xml:"body>div>p,omitempty"`
XMLName [xml](/encoding/xml).[Name](/encoding/xml#Name) `xml:"http://www.w3.org/ns/ttml tt"`
XMLNamespaceTTM [string](/builtin#string) `xml:"xmlns:ttm,attr"`
XMLNamespaceTTS [string](/builtin#string) `xml:"xmlns:tts,attr"`
}
```
TTMLOut represents an output TTML that must be marshaled. It is split from the input TTML because strict namespaces are added on output
####
type [TTMLOutDuration](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L555) [¶](#TTMLOutDuration)
```
type TTMLOutDuration [time](/time).[Duration](/time#Duration)
```
TTMLOutDuration represents an output TTML duration
####
func (TTMLOutDuration) [MarshalText](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L558) [¶](#TTMLOutDuration.MarshalText)
```
func (t [TTMLOutDuration](#TTMLOutDuration)) MarshalText() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalText implements the TextMarshaler interface
####
type [TTMLOutHeader](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L517) [¶](#TTMLOutHeader)
```
type TTMLOutHeader struct {
ID [string](/builtin#string) `xml:"xml:id,attr,omitempty"`
Style [string](/builtin#string) `xml:"style,attr,omitempty"`
[TTMLOutStyleAttributes](#TTMLOutStyleAttributes)
}
```
TTMLOutHeader represents an output TTML header
####
type [TTMLOutItem](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L547) [¶](#TTMLOutItem)
```
type TTMLOutItem struct {
Style [string](/builtin#string) `xml:"style,attr,omitempty"`
Text [string](/builtin#string) `xml:",chardata"`
[TTMLOutStyleAttributes](#TTMLOutStyleAttributes)
XMLName [xml](/encoding/xml).[Name](/encoding/xml#Name)
}
```
TTMLOutItem represents an output TTML Item
####
type [TTMLOutMetadata](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L450) [¶](#TTMLOutMetadata)
```
type TTMLOutMetadata struct {
Copyright [string](/builtin#string) `xml:"ttm:copyright,omitempty"`
Title [string](/builtin#string) `xml:"ttm:title,omitempty"`
}
```
TTMLOutMetadata represents an output TTML Metadata
####
type [TTMLOutRegion](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L524) [¶](#TTMLOutRegion)
```
type TTMLOutRegion struct {
[TTMLOutHeader](#TTMLOutHeader)
XMLName [xml](/encoding/xml).[Name](/encoding/xml#Name) `xml:"region"`
}
```
TTMLOutRegion represents an output TTML region
####
type [TTMLOutStyle](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L530) [¶](#TTMLOutStyle)
```
type TTMLOutStyle struct {
[TTMLOutHeader](#TTMLOutHeader)
XMLName [xml](/encoding/xml).[Name](/encoding/xml#Name) `xml:"style"`
}
```
TTMLOutStyle represents an output TTML style
####
type [TTMLOutStyleAttributes](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L456) [¶](#TTMLOutStyleAttributes)
```
type TTMLOutStyleAttributes struct {
BackgroundColor *[string](/builtin#string) `xml:"tts:backgroundColor,attr,omitempty"`
Color *[string](/builtin#string) `xml:"tts:color,attr,omitempty"`
Direction *[string](/builtin#string) `xml:"tts:direction,attr,omitempty"`
Display *[string](/builtin#string) `xml:"tts:display,attr,omitempty"`
DisplayAlign *[string](/builtin#string) `xml:"tts:displayAlign,attr,omitempty"`
Extent *[string](/builtin#string) `xml:"tts:extent,attr,omitempty"`
FontFamily *[string](/builtin#string) `xml:"tts:fontFamily,attr,omitempty"`
FontSize *[string](/builtin#string) `xml:"tts:fontSize,attr,omitempty"`
FontStyle *[string](/builtin#string) `xml:"tts:fontStyle,attr,omitempty"`
FontWeight *[string](/builtin#string) `xml:"tts:fontWeight,attr,omitempty"`
LineHeight *[string](/builtin#string) `xml:"tts:lineHeight,attr,omitempty"`
Opacity *[string](/builtin#string) `xml:"tts:opacity,attr,omitempty"`
Origin *[string](/builtin#string) `xml:"tts:origin,attr,omitempty"`
Overflow *[string](/builtin#string) `xml:"tts:overflow,attr,omitempty"`
Padding *[string](/builtin#string) `xml:"tts:padding,attr,omitempty"`
ShowBackground *[string](/builtin#string) `xml:"tts:showBackground,attr,omitempty"`
TextAlign *[string](/builtin#string) `xml:"tts:textAlign,attr,omitempty"`
TextDecoration *[string](/builtin#string) `xml:"tts:textDecoration,attr,omitempty"`
TextOutline *[string](/builtin#string) `xml:"tts:textOutline,attr,omitempty"`
UnicodeBidi *[string](/builtin#string) `xml:"tts:unicodeBidi,attr,omitempty"`
Visibility *[string](/builtin#string) `xml:"tts:visibility,attr,omitempty"`
WrapOption *[string](/builtin#string) `xml:"tts:wrapOption,attr,omitempty"`
WritingMode *[string](/builtin#string) `xml:"tts:writingMode,attr,omitempty"`
ZIndex *[int](/builtin#int) `xml:"tts:zIndex,attr,omitempty"`
}
```
TTMLOutStyleAttributes represents output TTML style attributes
####
type [TTMLOutSubtitle](https://github.com/asticode/go-astisub/blob/v0.26.0/ttml.go#L536) [¶](#TTMLOutSubtitle)
```
type TTMLOutSubtitle struct {
Begin [TTMLOutDuration](#TTMLOutDuration) `xml:"begin,attr"`
End [TTMLOutDuration](#TTMLOutDuration) `xml:"end,attr"`
ID [string](/builtin#string) `xml:"id,attr,omitempty"`
Items [][TTMLOutItem](#TTMLOutItem)
Region [string](/builtin#string) `xml:"region,attr,omitempty"`
Style [string](/builtin#string) `xml:"style,attr,omitempty"`
[TTMLOutStyleAttributes](#TTMLOutStyleAttributes)
}
```
TTMLOutSubtitle represents an output TTML subtitle
####
type [TeletextOptions](https://github.com/asticode/go-astisub/blob/v0.26.0/teletext.go#L325) [¶](#TeletextOptions)
```
type TeletextOptions struct {
Page [int](/builtin#int)
PID [int](/builtin#int)
}
```
TeletextOptions represents teletext options
@nmarks/react-docgen (npm, JavaScript)

react-docgen
===
`react-docgen` is a CLI and toolbox to help extracting information from [React](http://facebook.github.io/react/) components, and generate documentation from it.
It uses [recast](https://github.com/benjamn/recast) and [babylon](https://github.com/babel/babel/tree/master/packages/babylon) to parse the source into an AST and provides methods to process this AST to extract the desired information. The output / return value is a JSON blob / JavaScript object.
It provides a default implementation for React components defined via
`React.createClass`, [ES2015 class definitions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes) or functions
(stateless components). These component definitions must follow certain guidelines in order to be analyzable (see below for more info).
Install
---
Install the module directly from npm:
```
npm install -g react-docgen
```
CLI
---
Installing the module adds a `react-docgen` executable which allows you to convert a single file, multiple files or an input stream. We are trying to make the executable as versatile as possible so that it can be integrated into many workflows.
```
Usage: react-docgen [path]... [options]
path A component file or directory. If no path is provided it reads from stdin.
Options:
-o FILE, --out FILE store extracted information in FILE
--pretty pretty print JSON
-x, --extension File extensions to consider. Repeat to define multiple extensions. Default: [js,jsx]
-i, --ignore Folders to ignore. Default: [node_modules,__tests__]
--resolver RESOLVER Resolver name (findAllComponentDefinitions, findExportedComponentDefinition) or
path to a module that exports a resolver. [findExportedComponentDefinition]
Extract meta information from React components.
If a directory is passed, it is recursively traversed.
```
By default, `react-docgen` will look for the exported component created through
`React.createClass`, a class definition or a function (stateless component) in each file. You can change that behavior with the `--resolver` option, which either expects the name of a built-in resolver or a path to JavaScript module exporting a resolver function. Have a look below for [more information about resolvers](#resolver).
Have a look at `example/` for an example of how to use the result to generate a markdown version of the documentation.
API
---
The tool can be used programmatically to extract component information and customize the extraction process:
```
var reactDocs = require('react-docgen');
var componentInfo = reactDocs.parse(src);
```
As with the CLI, this will look for the exported component created through `React.createClass` or a class definition in the provided source. The whole process of analyzing the source code is separated into two parts:
* Locating/finding the nodes in the AST which define the component
* Extracting information from those nodes
`parse` accepts more arguments with which this behavior can be customized.
###
parse(source [, resolver [, handlers]])
| Parameter | Type | Description |
| --- | --- | --- |
| source | string | The source text |
| resolver | function | A function of the form `(ast: ASTNode, recast: Object) => (NodePath \| Array<NodePath>)`. It receives the program AST and returns the node path(s) of the component definition(s) to analyze. |
| handlers | Array<function> | An array of functions of the form `(documentation: Documentation, definition: NodePath) => void`. Each function is called with a `Documentation` object and a reference to the component definition as returned by `resolver`. Handlers extract relevant information from the definition and augment `documentation`. |
####
resolver
The resolver's task is to extract those parts from the source code which the handlers can analyze. For example, the `findExportedComponentDefinition` resolver inspects the AST to find
```
var Component = React.createClass(<def>);
module.exports = Component;
// or
class Component extends React.Component {
// ...
}
module.exports = Component;
```
and returns the ObjectExpression to which `<def>` resolves, or the class declaration itself.
`findAllComponentDefinitions` works similarly, but finds *all* `React.createClass` calls and class definitions, not only the one that is exported.
This makes it easy, together with the utility methods created to analyze the AST, to introduce new or custom resolver methods. For example, a resolver could look for plain ObjectExpressions with a `render` method.
####
handlers
Handlers do the actual work and extract the desired information from the result the resolver returned. Like the resolver, they try to delegate as much work as possible to the reusable utility functions.
For example, while the `propTypesHandler` expects the prop types definition to be an ObjectExpression and be available as `propTypes` in the component definition, most of the work is actually performed by the `getPropType` utility function.
Guidelines for default resolvers and handlers
---
* Modules have to export a single component, and only that component is analyzed.
* When using `React.createClass`, the component definition (the value passed to it) must resolve to an object literal.
* When using classes, the class must either `extend React.Component` *or* define a `render()` method.
* `propTypes` must be an object literal or resolve to an object literal in the same file.
* The `return` statement in `getDefaultProps` must contain an object literal.
PropTypes
---
###
Example
For the following component
```
var React = require('react');
/**
* General component description.
*/
var Component = React.createClass({
propTypes: {
/**
* Description of prop "foo".
*/
foo: React.PropTypes.number,
/**
* Description of prop "bar" (a custom validation function).
*/
bar: function(props, propName, componentName) {
// ...
},
baz: React.PropTypes.oneOfType([
React.PropTypes.number,
React.PropTypes.string
]),
},
getDefaultProps: function() {
return {
foo: 42,
bar: 21
};
},
render: function() {
// ...
}
});
module.exports = Component;
```
we are getting this output:
```
{
"props": {
"foo": {
"type": {
"name": "number"
},
"required": false,
"description": "Description of prop \"foo\".",
"defaultValue": {
"value": "42",
"computed": false
}
},
"bar": {
"type": {
"name": "custom"
},
"required": false,
"description": "Description of prop \"bar\" (a custom validation function).",
"defaultValue": {
"value": "21",
"computed": false
}
},
"baz": {
"type": {
"name": "union",
"value": [
{
"name": "number"
},
{
"name": "string"
}
]
},
"required": false,
"description": ""
}
},
"description": "General component description."
}
```
Flow Type support
---
If you are using [flow](http://flowtype.org/) then react-docgen can also extract the flow type annotations. As flow has a way more advanced and fine granular type system, the returned types from react-docgen are different in comparison when using `React.PropTypes`.
> **Note**: react-docgen will not be able to grab the type definition if the type is imported or declared in a different file.
###
Example
For the following component
```
import React, { Component } from 'react';
type Props = {
/** Description of prop "foo". */
primitive: number,
/** Description of prop "bar". */
literalsAndUnion: 'string' | 'otherstring' | number,
arr: Array<any>,
func?: (value: string) => void,
obj?: { subvalue: ?boolean },
};
/**
* General component description.
*/
export default class MyComponent extends Component<void, Props, void> {
props: Props;
render(): ?ReactElement {
// ...
}
}
```
we are getting this output:
```
{
"description":"General component description.",
"props":{
"primitive":{
"flowType":{ "name":"number" },
"required":true,
"description":"Description of prop \"foo\"."
},
"literalsAndUnion":{
"flowType":{
"name":"union",
"raw":"'string' | 'otherstring' | number",
"elements":[
{ "name":"literal", "value":"'string'" },
{ "name":"literal", "value":"'otherstring'" },
{ "name":"number" }
]
},
"required":true,
"description":"Description of prop \"bar\"."
},
"arr":{
"flowType":{
"name":"Array",
"elements":[
{ "name":"any" }
],
"raw":"Array<any>"
},
"required":true
},
"func":{
"flowType":{
"name":"signature",
"type":"function",
"raw":"(value: string) => void",
"signature":{
"arguments":[
{ "name":"value", "type":{ "name":"string" } }
],
"return":{ "name":"void" }
}
},
"required":false
},
"obj":{
"flowType":{
"name":"signature",
"type":"object",
"raw":"{ subvalue: ?boolean }",
"signature":{
"properties":[
{
"key":"subvalue",
"value":{
"name":"boolean",
"nullable":true,
"required":true
}
}
]
}
},
"required":false
}
}
}
```
###
Types
Here is a list of all the available types and its result structure.
| Name | Examples | Result |
| --- | --- | --- |
| Simple | `let x: string;``let x: number;``let x: boolean;``let x: any;``let x: void;``let x: Object;``let x: String;``let x: MyClass;` | `{ "name": "<type>" }` |
| Literals | `let x: 'foo';``let x: 1;``let x: true;` | `{ "name": "literal", "value": "<rawvalue>" }` |
| Typed Classes | `let x: Array<foo>;``let x: Class<foo>;``let x: MyClass<bar>;` | `{ "name": "<type>", "elements": [{ <element-type> }, ...] }` |
| Object Signature | `let x: { foo: string, bar?: mixed };``let x: { [key: string]: string, foo: number };` | `{ "name": "signature", "type": "object", "raw": "<raw-signature>", "signature": { "properties": [{ "key": "<key>", "value": { <value-type>, "required": <true/false> } }, ...] } }` |
| Function Signature | `let x: (x: string) => void;` | `{ "name": "signature", "type": "function", "raw": "<raw-signature>", "signature": { "arguments": [{ "name": "<argument-name>", "type": { <argument-type> } }, ...], "return": { <return-type> } } }` |
| Callable-Object/Function-Object Signature | `let x: { (x: string): void, prop: string };` | `{ "name": "signature", "type": "object", "raw": "<raw-signature>", "signature": { "constructor": { <function-signature> }, "properties": [{ "key": "<key>", "value": { <value-type> } }, ...] } }` |
| Tuple | `let x: [foo, "value", number];` | `{ "name": "tuple", "raw": "<raw-signature>", "elements": [{ <element-type> }, ...] }` |
| Union | `let x: number \| string;` | `{ "name": "union", "raw": "<raw-signature>", "elements": [{ <element-type> }, ...] }` |
| Intersect | `let x: number & string;` | `{ "name": "intersect", "raw": "<raw-signature>", "elements": [{ <element-type> }, ...] }` |
| Nullable modifier | `let x: ?number;` | `{ "name": "number", "nullable": true }` |
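Because the structured types keep a `raw` field, it is cheap for a consumer to turn these objects back into readable type strings. A best-effort sketch (not part of react-docgen) covering a few of the shapes above:

```javascript
// Render a parsed flowType object (see the table above) back into a
// readable type string. Complex types fall back to their `raw` source.
function formatFlowType(t) {
  if (t.nullable) return '?' + formatFlowType({ ...t, nullable: false });
  if (t.raw) return t.raw; // unions, tuples, signatures keep their raw form
  if (t.name === 'literal') return t.value;
  return t.name; // simple types: number, string, any, ...
}

console.log(formatFlowType({ name: 'number', nullable: true })); // ?number
console.log(formatFlowType({ name: 'literal', value: "'foo'" })); // 'foo'
console.log(formatFlowType({
  name: 'union',
  raw: "'string' | 'otherstring' | number",
  elements: [],
})); // 'string' | 'otherstring' | number
```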
Result data structure
---
The structure of the JSON blob / JavaScript object is as follows:
```
{
["description": string,]
["props": {
"<propName>": {
"type": {
"name": "<typeName>",
["value": <typeValue>]
["raw": string]
},
"flowType": <flowType>,
"required": boolean,
"description": string,
["defaultValue": {
"value": string,
"computed": boolean
}]
},
...
},]
["composes": <componentNames>]
}
```
(`[...]` means the property may not exist if such information was not found in the component definition)
* `<propName>`: For each prop that was found, there will be an entry in `props` under the same name.
* `<typeName>`: The name of the type, which usually corresponds to the function name in `React.PropTypes`. However, for types defined with `oneOf` we use `"enum"`, and for `oneOfType` we use `"union"`. If a custom function is provided or the type cannot be resolved to anything in `React.PropTypes`, we use `"custom"`.
* `<typeValue>`: Some types accept parameters which define the type in more detail (such as `arrayOf`, `instanceOf`, `oneOf`, etc). Those are stored in `<typeValue>`. The data type of `<typeValue>` depends on the type definition.
* `<flowType>`: If using flow types, this property contains the parsed flow type, as shown in the table above.
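Putting the structure to work, a consumer can walk `props` and emit a markdown table. A sketch assuming the shape documented above (the `doc` value mirrors the earlier `React.PropTypes` example output):

```javascript
// Render the documented result structure as a markdown props table.
// This is a consumer-side sketch, not part of react-docgen itself.
function propsTable(doc) {
  const rows = Object.entries(doc.props || {}).map(([name, prop]) => {
    const type = prop.flowType ? prop.flowType.name : prop.type.name;
    const def = prop.defaultValue ? prop.defaultValue.value : '';
    return `| ${name} | ${type} | ${prop.required ? 'yes' : 'no'} | ${def} |`;
  });
  return ['| Prop | Type | Required | Default |', '| --- | --- | --- | --- |', ...rows].join('\n');
}

const doc = {
  description: 'General component description.',
  props: {
    foo: { type: { name: 'number' }, required: false, description: '', defaultValue: { value: '42', computed: false } },
    baz: { type: { name: 'union', value: [{ name: 'number' }, { name: 'string' }] }, required: false, description: '' },
  },
};
console.log(propsTable(doc));
```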
Readme
---
### Keywords
* react
* documentation-generation |
@compendia/ipfs-commonjs (npm, JavaScript)

###
The JavaScript implementation of the IPFS protocol
> **Upgrading from <=0.40?** See the [release notes](https://github.com/ipfs/js-ipfs/issues/2656) for the list of API changes and the [migration guide](https://github.com/ipfs/js-ipfs/tree/master/docs/docs/MIGRATION-TO-ASYNC-AWAIT.md).
`ipfs` is the core API, a CLI and a HTTP server that functions as a HTTP to IPFS bridge and an RPC endpoint.
If you want to integrate IPFS into your application without including a CLI or HTTP server, see the [ipfs-core](https://github.com/ipfs/js-ipfs/tree/master/packages/ipfs-core) module.
Lead Maintainer
---
[<NAME>](http://github.com/achingbrain)
Table of Contents
---
* [Getting Started](#getting-started)
+ [Install](#install)
+ [Next Steps](#next-steps)
* [Want to hack on IPFS?](#want-to-hack-on-ipfs)
* [License](#license)
Getting Started
---
We've come a long way, but this project is still in Alpha, lots of development is happening, APIs might change, beware of 🐉..
###
Install
Installing `ipfs` globally will give you the `jsipfs` command which you can use to start a daemon running:
```
$ npm install -g ipfs
$ jsipfs daemon
Initializing IPFS daemon...
js-ipfs version: x.x.x
System version: x64/darwin
Node.js version: x.x.x
Swarm listening on /ip4/127.0
.... more output
```
You can then add a file:
```
$ jsipfs add ./hello-world.txt
added QmXXY5ZxbtuYj6DnfApLiGstzPN7fvSyigrRee3hDWPCaf hello-world.txt
```
###
Next Steps
* Read the [docs](https://github.com/ipfs/js-ipfs/tree/master/docs)
* Look into the [examples](https://github.com/ipfs/js-ipfs/tree/master/examples) to learn how to spawn an IPFS node in Node.js and in the Browser
* Consult the [Core API docs](https://github.com/ipfs/js-ipfs/tree/master/docs/core-api) to see what you can do with an IPFS node
* Visit <https://dweb-primer.ipfs.io> to learn about IPFS and the concepts that underpin it
* Head over to <https://proto.school> to take interactive tutorials that cover core IPFS APIs
* Check out <https://docs.ipfs.io> for tips, how-tos and more
* See <https://blog.ipfs.io> for news and more
* Need help? Please ask 'How do I?' questions on <https://discuss.ipfs.io>

Want to hack on IPFS?
---
The IPFS implementation in JavaScript needs your help! There are a few things you can do right now to help out:
Read the [Code of Conduct](https://github.com/ipfs/community/blob/master/code-of-conduct.md) and [JavaScript Contributing Guidelines](https://github.com/ipfs/community/blob/master/CONTRIBUTING_JS.md).
* **Check out existing issues** The [issue list](https://github.com/ipfs/js-ipfs/issues) has many that are marked as ['help wanted'](https://github.com/ipfs/js-ipfs/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22help+wanted%22) or ['difficulty:easy'](https://github.com/ipfs/js-ipfs/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3Adifficulty%3Aeasy) which make great starting points for development, many of which can be tackled with no prior IPFS knowledge
* **Look at the [IPFS Roadmap](https://github.com/ipfs/roadmap)** These are the high-priority items being worked on right now
* **Perform code reviews** More eyes will help (a) speed the project along, (b) ensure quality, and (c) reduce possible future bugs.
* **Add tests**. There can never be enough tests.
* **Join the [Weekly Core Implementations Call](https://github.com/ipfs/team-mgmt/issues/992)** it's where everyone discusses what's going on with IPFS and what's next
License
---
Readme
---
### Keywords
* IPFS |
github.com/nacos-group/nacos-sdk-go (Go)

README
[¶](#section-readme)
---
### Nacos-sdk-go [中文](https://github.com/nacos-group/nacos-sdk-go/blob/v1.1.4/README_CN.md)
[](https://travis-ci.org/nacos-group/nacos-sdk-go) [](https://goreportcard.com/report/github.com/nacos-group/nacos-sdk-go) 
---
#### Nacos-sdk-go
Nacos-sdk-go is the Go client for Nacos; it supports service discovery and dynamic configuration.
#### Requirements
Supported Go versions: 1.12 and above
Supported Nacos versions: 1.x and above
#### Installation
Use `go get` to install SDK:
```
$ go get -u github.com/nacos-group/nacos-sdk-go
```
#### Quick Examples
* ClientConfig
```
constant.ClientConfig{
TimeoutMs uint64 // timeout for requesting Nacos server, default value is 10000ms
	NamespaceId         string // the namespaceId of Nacos. When the namespace is public, fill in an empty string here.
AppName string // the appName
Endpoint string // the endpoint for get Nacos server addresses
RegionId string // the regionId for kms
AccessKey string // the AccessKey for kms
SecretKey string // the SecretKey for kms
	OpenKMS             bool   // whether to enable KMS, default is false. https://help.aliyun.com/product/28933.html
CacheDir string // the directory for persist nacos service info,default value is current path
	UpdateThreadNum     int    // the number of goroutines used to update nacos service info, default value is 20
NotLoadCacheAtStart bool // not to load persistent nacos service info in CacheDir at start time
UpdateCacheWhenEmpty bool // update cache when get empty service instance from server
Username string // the username for nacos auth
Password string // the password for nacos auth
LogDir string // the directory for log, default is current path
LogLevel string // the level of log, it's must be debug,info,warn,error, default value is info
LogSampling *ClientLogSamplingConfig // the sampling config of log
ContextPath string // the nacos server contextpath
LogRollingConfig *ClientLogRollingConfig // the log rolling config
}
```
* ServerConfig
```
constant.ServerConfig{
ContextPath string // the nacos server context path
IpAddr string // the nacos server address
Port uint64 // the nacos server port
Scheme string // the nacos server scheme
}
```
**Note: you can configure multiple ServerConfigs; the client will rotate requests among the servers.**
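The rotation can be pictured as a simple round-robin over the configured server addresses. The sketch below only illustrates the idea; it is not the SDK's actual balancing code:

```go
package main

import "fmt"

// roundRobin cycles through the configured server addresses: each request
// goes to the next server in the list. Illustrative only.
type roundRobin struct {
	servers []string
	next    int
}

func (r *roundRobin) pick() string {
	s := r.servers[r.next%len(r.servers)]
	r.next++
	return s
}

func main() {
	rr := &roundRobin{servers: []string{"console1.nacos.io:80", "console2.nacos.io:80"}}
	for i := 0; i < 3; i++ {
		fmt.Println(rr.pick())
	}
}
```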
##### Create client
```
// create clientConfig
clientConfig := constant.ClientConfig{
NamespaceId: "e525eafa-f7d7-4029-83d9-008937f9d468", //we can create multiple clients with different namespaceId to support multiple namespace.When namespace is public, fill in the blank string here.
TimeoutMs: 5000,
NotLoadCacheAtStart: true,
LogDir: "/tmp/nacos/log",
CacheDir: "/tmp/nacos/cache",
LogLevel: "debug",
}
// Another way to create clientConfig
clientConfig := *constant.NewClientConfig(
constant.WithNamespaceId("e525eafa-f7d7-4029-83d9-008937f9d468"), //When namespace is public, fill in the blank string here.
constant.WithTimeoutMs(5000),
constant.WithNotLoadCacheAtStart(true),
constant.WithLogDir("/tmp/nacos/log"),
constant.WithCacheDir("/tmp/nacos/cache"),
constant.WithLogLevel("debug"),
)
// At least one ServerConfig
serverConfigs := []constant.ServerConfig{
{
IpAddr: "console1.nacos.io",
ContextPath: "/nacos",
Port: 80,
Scheme: "http",
},
{
IpAddr: "console2.nacos.io",
ContextPath: "/nacos",
Port: 80,
Scheme: "http",
},
}
// Another way to create serverConfigs
serverConfigs := []constant.ServerConfig{
*constant.NewServerConfig(
"console1.nacos.io",
80,
constant.WithScheme("http"),
constant.WithContextPath("/nacos"),
),
*constant.NewServerConfig(
"console2.nacos.io",
80,
constant.WithScheme("http"),
constant.WithContextPath("/nacos"),
),
}
// Create naming client for service discovery
_, _ = clients.CreateNamingClient(map[string]interface{}{
"serverConfigs": serverConfigs,
"clientConfig": clientConfig,
})
// Create config client for dynamic configuration
_, _ = clients.CreateConfigClient(map[string]interface{}{
"serverConfigs": serverConfigs,
"clientConfig": clientConfig,
})
// Another way of create naming client for service discovery (recommend)
namingClient, err := clients.NewNamingClient(
vo.NacosClientParam{
ClientConfig: &clientConfig,
ServerConfigs: serverConfigs,
},
)
// Another way to create a config client for dynamic configuration (recommended)
configClient, err := clients.NewConfigClient(
vo.NacosClientParam{
ClientConfig: &clientConfig,
ServerConfigs: serverConfigs,
},
)
```
##### Create client for ACM
<https://help.aliyun.com/document_detail/130146.html>
```
cc := constant.ClientConfig{
Endpoint: "acm.aliyun.com:8080",
NamespaceId: "e525eafa-f7d7-4029-83d9-008937f9d468",
RegionId: "cn-shanghai",
AccessKey: "<KEY>",
SecretKey: "<KEY>",
OpenKMS: true,
TimeoutMs: 5000,
LogLevel: "debug",
}
// a more graceful way to create a config client
client, err := clients.NewConfigClient(
vo.NacosClientParam{
ClientConfig: &cc,
},
)
```
##### Service Discovery
* Register instance: RegisterInstance
```
success, err := namingClient.RegisterInstance(vo.RegisterInstanceParam{
Ip: "10.0.0.11",
Port: 8848,
ServiceName: "demo.go",
Weight: 10,
Enable: true,
Healthy: true,
Ephemeral: true,
Metadata: map[string]string{"idc":"shanghai"},
ClusterName: "cluster-a", // default value is DEFAULT
GroupName: "group-a", // default value is DEFAULT_GROUP
})
```
* Deregister instance: DeregisterInstance
```
success, err := namingClient.DeregisterInstance(vo.DeregisterInstanceParam{
Ip: "10.0.0.11",
Port: 8848,
ServiceName: "demo.go",
Ephemeral: true,
Cluster: "cluster-a", // default value is DEFAULT
GroupName: "group-a", // default value is DEFAULT_GROUP
})
```
* Get service: GetService
```
services, err := namingClient.GetService(vo.GetServiceParam{
ServiceName: "demo.go",
Clusters: []string{"cluster-a"}, // default value is DEFAULT
GroupName: "group-a", // default value is DEFAULT_GROUP
})
```
* Get all instances: SelectAllInstances
```
// SelectAllInstances returns all instances, including those with healthy=false, enable=false, or weight<=0
instances, err := namingClient.SelectAllInstances(vo.SelectAllInstancesParam{
ServiceName: "demo.go",
GroupName: "group-a", // default value is DEFAULT_GROUP
Clusters: []string{"cluster-a"}, // default value is DEFAULT
})
```
* Get instances: SelectInstances
```
// SelectInstances only returns instances with healthy=${HealthyOnly}, enable=true and weight>0
instances, err := namingClient.SelectInstances(vo.SelectInstancesParam{
ServiceName: "demo.go",
GroupName: "group-a", // default value is DEFAULT_GROUP
Clusters: []string{"cluster-a"}, // default value is DEFAULT
HealthyOnly: true,
})
```
* Get one healthy instance (WRR): SelectOneHealthyInstance
```
// SelectOneHealthyInstance returns one instance chosen by the WRR (weighted round-robin) strategy for load balancing
// The instance must have healthy=true, enable=true and weight>0
instance, err := namingClient.SelectOneHealthyInstance(vo.SelectOneHealthInstanceParam{
ServiceName: "demo.go",
GroupName: "group-a", // default value is DEFAULT_GROUP
Clusters: []string{"cluster-a"}, // default value is DEFAULT
})
```
* Listen for service change events: Subscribe
```
// Subscribe key = serviceName+groupName+cluster
// Note: we can add multiple SubscribeCallbacks with the same key.
err := namingClient.Subscribe(vo.SubscribeParam{
ServiceName: "demo.go",
GroupName: "group-a", // default value is DEFAULT_GROUP
Clusters: []string{"cluster-a"}, // default value is DEFAULT
SubscribeCallback: func(services []model.SubscribeService, err error) {
log.Printf("\n\n callback return services:%s \n\n", utils.ToJsonString(services))
},
})
```
* Cancel listening for service change events: Unsubscribe
```
err := namingClient.Unsubscribe(vo.SubscribeParam{
ServiceName: "demo.go",
GroupName: "group-a", // default value is DEFAULT_GROUP
Clusters: []string{"cluster-a"}, // default value is DEFAULT
SubscribeCallback: func(services []model.SubscribeService, err error) {
log.Printf("\n\n callback return services:%s \n\n", utils.ToJsonString(services))
},
})
```
* Get all service names: GetAllServicesInfo
```
serviceInfos, err := client.GetAllServicesInfo(vo.GetAllServiceInfoParam{
NameSpace: "0e83cc81-9d8c-4bb8-a28a-ff703187543f",
PageNo: 1,
PageSize: 10,
})
```
##### Dynamic configuration
* Publish config: PublishConfig
```
success, err := configClient.PublishConfig(vo.ConfigParam{
DataId: "dataId",
Group: "group",
Content: "hello world!222222"})
```
* Delete config: DeleteConfig
```
success, err = configClient.DeleteConfig(vo.ConfigParam{
DataId: "dataId",
Group: "group"})
```
* Get config info: GetConfig
```
content, err := configClient.GetConfig(vo.ConfigParam{
DataId: "dataId",
Group: "group"})
```
* Listen for config change events: ListenConfig
```
err := configClient.ListenConfig(vo.ConfigParam{
DataId: "dataId",
Group: "group",
OnChange: func(namespace, group, dataId, data string) {
fmt.Println("group:" + group + ", dataId:" + dataId + ", data:" + data)
},
})
```
* Cancel listening for config change events: CancelListenConfig
```
err := configClient.CancelListenConfig(vo.ConfigParam{
DataId: "dataId",
Group: "group",
})
```
* Search config: SearchConfig
```
configPage, err := configClient.SearchConfig(vo.SearchConfigParam{
Search: "blur",
DataId: "",
Group: "",
PageNo: 1,
PageSize: 10,
})
```
#### Example
You can run the examples to learn how to use the Nacos Go client.
* [Config Example](https://github.com/nacos-group/nacos-sdk-go/blob/v1.1.4/example/config)
* [Naming Example](https://github.com/nacos-group/nacos-sdk-go/blob/v1.1.4/example/service)
#### Documentation
You can view the open-api documentation on the [Nacos open-api website](https://nacos.io/en-us/docs/open-api.html).
You can view the full documentation from the [Nacos website](https://nacos.io/en-us/docs/what-is-nacos.html).
#### Contributing
Contributors are welcome to join the nacos-sdk-go project. Please check [CONTRIBUTING.md](https://github.com/nacos-group/nacos-sdk-go/blob/v1.1.4/CONTRIBUTING.md) for how to contribute to this project.
#### Contact
* Join us in the DingDing group (23191211).
* [Gitter](https://gitter.im/alibaba/nacos): Nacos's IM tool for community messaging, collaboration and discovery.
* [Twitter](https://twitter.com/nacos2): Follow along for the latest Nacos news on Twitter.
* [Weibo](https://weibo.com/u/6574374908): Follow along for the latest Nacos news on Weibo (the Chinese equivalent of Twitter).
* [Nacos SegmentFault](https://segmentfault.com/t/nacos): Get the latest notice and prompt help from SegmentFault.
* Email Group:
+ [<EMAIL>](mailto:<EMAIL>): Nacos usage general discussion.
+ [<EMAIL>](mailto:<EMAIL>): Nacos developer discussion (APIs, feature design, etc).
+ [<EMAIL>](mailto:<EMAIL>): Commits notice, very high frequency.
ProbYX | cran | R | Package ‘ProbYX’
October 12, 2022
Type Package
Title Inference for the Stress-Strength Model R = P(Y<X)
Version 1.1-0.1
Date 2015-12-17
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.0-0), rootSolve
Description Confidence intervals and point estimation for R under various parametric model
assumptions; likelihood inference based on classical first-order approximations and
higher-order asymptotic procedures.
License GPL-2
LazyLoad yes
Imports stats, graphics
NeedsCompilation no
Repository CRAN
Date/Publication 2022-06-21 08:57:52 UTC
R topics documented:
ProbYX-package
loglik
MLEs
Prob
ROC.plot
rp
rpstar
wald
ProbYX-package Inference on the stress-strength model R = P(Y<X)
Description
Compute confidence intervals and point estimates for R, under parametric model assumptions for Y
and X. Y and X are two independent continuous random variables from two different populations.
Details
Package: ProbYX
Type: Package
Version: 1.1
Date: 2012-03-20
License: GPL-2
LazyLoad: yes
The package can be used for computing accurate confidence intervals and point estimates for
the stress-strength (reliability) model R = P(Y<X); maximum likelihood estimates, the Wald
statistic, the signed log-likelihood ratio statistic and its modified version can be computed.
The main function is Prob, which evaluates confidence intervals and point estimates under different
approaches and parametric assumptions.
Author(s)
<NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>. (2013). Accurate higher-order likelihood inference on P(Y<X).
Computational Statistics, 28:1035-1059.
<NAME>, <NAME>, <NAME>. (2003). The Stress-Strength Model and its Generalizations. Theory
and Applications. World Scientific, Singapore.
Examples
# data from the first population
Y <- rnorm(15, mean=5, sd=1)
# data from the second population
X <- rnorm(10, mean=7, sd=1.5)
level <- 0.01 # alpha level
# estimate and confidence interval under the assumption of two
# normal variables with different variances.
Prob(Y, X, "norm_DV", "RPstar", level)
# method has to be set equal to "RPstar".
loglik Log-likelihood of the bivariate distribution of (Y,X)
Description
Computation of the log-likelihood function of the bivariate distribution (Y,X). The log-likelihood is
reparametrized with the parameter of interest ψ, corresponding to the quantity R, and the nuisance
parameter λ.
Usage
loglik(ydat, xdat, lambda, psi, distr = "exp")
Arguments
ydat data vector of the sample measurements from Y.
xdat data vector of the sample measurements from X.
lambda nuisance parameter vector, λ. Values can be determined from the
reparameterisation of the original parameters of the bivariate distribution chosen in distr.
psi scalar parameter of interest, ψ, for the probability R. The value can be determined
from the reparameterisation of the original parameters of the bivariate
distribution chosen in distr.
distr character string specifying the type of distribution assumed for Y and X. Possible
choices for distr are "exp" (default) for the one-parameter exponential,
"norm_EV" and "norm_DV" for the Gaussian distribution with, respectively,
equal or unequal variances assumed for the two random variables.
Details
For further information on the random variables Y and X, see help on Prob.
Reparameterisation in order to determine ψ and λ depends on the assumed distribution. Here the
following relationships have been used:
Exponential models: ψ = α/(α + β) and λ = α + β, with Y ~ Exp(α) and X ~ Exp(β);
Gaussian models with equal variances: ψ = Φ((µ2 − µ1)/(σ√2)),
with Y ~ N(µ1, σ²) and X ~ N(µ2, σ²);
Gaussian models with unequal variances: ψ = Φ((µ2 − µ1)/√(σ1² + σ2²)),
with Y ~ N(µ1, σ1²) and X ~ N(µ2, σ2²).
The standard Normal cumulative distribution function is denoted by Φ.
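Under these reparameterisations, ψ is a direct function of the original parameters. The following Python sketch computes ψ for the three models (illustrative only; the package itself is written in R, and the function names below are hypothetical):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def psi_exp(alpha, beta):
    """Exponential model: psi = alpha / (alpha + beta)."""
    return alpha / (alpha + beta)

def psi_norm_ev(mu1, mu2, sigma):
    """Gaussian model, equal variances: psi = Phi((mu2 - mu1) / (sigma * sqrt(2)))."""
    return phi((mu2 - mu1) / (sigma * math.sqrt(2.0)))

def psi_norm_dv(mu1, mu2, sigma1, sigma2):
    """Gaussian model, unequal variances: psi = Phi((mu2 - mu1) / sqrt(sigma1^2 + sigma2^2))."""
    return phi((mu2 - mu1) / math.sqrt(sigma1 ** 2 + sigma2 ** 2))
```

Note that psi_norm_ev mirrors the expression pnorm((mu2-mu1)/(sigma*sqrt(2))) used in the Examples section.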
Value
Value of the log-likelihood function computed in ψ =psi and λ =lambda.
Author(s)
<NAME>
References
<NAME>., <NAME>. (2013). Accurate higher-order likelihood inference on P (Y < X).
Computational Statistics, 28:1035-1059.
See Also
MLEs
Examples
# data from the first population
Y <- rnorm(15, mean=5, sd=1)
# data from the second population
X <- rnorm(10, mean=7, sd=1)
mu1 <- 5
mu2 <- 7
sigma <- 1
# parameter of interest, the R probability
interest <- pnorm((mu2-mu1)/(sigma*sqrt(2)))
# nuisance parameters
nuisance <- c(mu1/(sigma*sqrt(2)), sigma*sqrt(2))
# log-likelihood value
loglik(Y, X, nuisance, interest, "norm_EV")
MLEs Maximum likelihood estimates of the stress-strength model R =
P(Y<X).
Description
Compute maximum likelihood estimates of R, considered as the parameter of interest. Maximum
likelihood estimates of the nuisance parameter are also supplied.
Usage
MLEs(ydat, xdat, distr)
Arguments
ydat data vector of the sample measurements from Y.
xdat data vector of the sample measurements from X.
distr character string specifying the type of distribution assumed for Y and X. Possible
choices for distr are "exp" (default) for the one-parameter exponential,
"norm_EV" and "norm_DV" for the Gaussian distribution with, respectively,
equal or unequal variances assumed for the two random variables.
Details
The two independent random variables Y and X with given distribution distr are measurements
of a certain characteristic on two different populations. For the relationship of the parameter of
interest (R) and nuisance parameters with the original parameters of distr, look at the details in
loglik.
Value
Vector of estimates of the nuisance parameters and the quantity R (parameter of interest),
respectively.
Author(s)
<NAME>
References
<NAME>, <NAME>, <NAME>. (2003). The Stress-Strength Model and its Generalizations. Theory
and Applications. World Scientific, Singapore.
See Also
loglik, Prob
Examples
# data from the first population
Y <- rnorm(15, mean=5, sd=1)
# data from the second population
X <- rnorm(10, mean=7, sd=1.5)
# vector of MLEs for the nuisance parameters and the quantity R
MLEs(Y, X, "norm_DV")
Prob Estimation of the stress-strength model R = P(Y<X)
Description
Compute confidence intervals and point estimates for the probability R, under parametric model
assumptions for Y and X. Y and X are two independent continuous random variables from two
different populations.
Usage
Prob(ydat, xdat, distr = "exp", method = "RPstar", level = 0.05)
Arguments
ydat data vector of the sample measurements from Y.
xdat data vector of the sample measurements from X.
distr character string specifying the type of distribution assumed for Y and X. Possible
choices for distr are "exp" (default) for the one-parameter exponential,
"norm_EV" and "norm_DV" for the Gaussian distribution with, respectively,
equal or unequal variances assumed for the two random variables.
method character string specifying the methodological approach used for inference (confidence
intervals and point estimates) on R. The argument method can be
set equal to "Wald", "RP" or "RPstar" (default), according as inference is based
on the Wald statistic, the signed log-likelihood ratio statistic (directed likelihood,
rp ) or the modified signed log-likelihood ratio statistic (modified directed
likelihood, rp∗ ), respectively.
level the value α that determines the nominal level (1 − α) chosen for the confidence
interval.
Value
PROB Point estimate of R = P (Y < X). This value corresponds to the maximum
likelihood estimate if method "Wald" or "RP" is chosen; otherwise, when
method "RPstar" is selected, the estimate is obtained from the estimating equation
rp∗ = 0.
C.Interval Confidence interval of R at confidence level (1 − α).
Author(s)
<NAME>
References
<NAME>., <NAME>. (2013). Accurate higher-order likelihood inference on R = P (Y < X).
Computational Statistics, 28:1035-1059.
See Also
wald, rp, rpstar
Examples
# data from the first population
Y <- rnorm(15, mean=5, sd=1)
# data from the second population
X <- rnorm(10, mean=7, sd=1.5)
level <- 0.01 # alpha level
# estimate and confidence interval under the assumption of two
# normal variables with different variances.
Prob(Y, X, "norm_DV", "RPstar", level)
# method has to be set equal to "RPstar".
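For intuition, the interval behind method "Wald" has the familiar form ψ̂ ± z(1−α/2) · se(ψ̂). A generic Python sketch of this construction (illustrative only; the package derives the standard error from the observed profile Fisher information, which is not reproduced here):

```python
from statistics import NormalDist

def wald_interval(psi_hat, se, level=0.05):
    """Wald-type (1 - level) confidence interval: psi_hat +/- z_{1-level/2} * se."""
    z = NormalDist().inv_cdf(1.0 - level / 2.0)  # standard normal quantile
    return psi_hat - z * se, psi_hat + z * se
```

For example, wald_interval(0.7, 0.05) gives roughly (0.602, 0.798). Since R is a probability, in practice one might transform (e.g. to the logit scale) before applying the interval so that it stays within (0, 1).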
ROC.plot Estimated ROC curves
Description
Plot of ROC curves estimated under parametric model assumptions on the continuous diagnostic
marker.
Usage
ROC.plot(ydat, xdat, distr = "exp", method = "RPstar", mc = 1)
Arguments
ydat data vector of the diagnostic marker measurements on the sample of non-diseased
individuals (from Y).
xdat data vector of the diagnostic marker measurements on the sample of diseased
individuals (from X).
distr character string specifying the type of distribution assumed for Y and X. Possible
choices for distr are "exp" (default) for the one-parameter exponential,
"norm_EV" and "norm_DV" for the Gaussian distribution with, respectively,
equal or unequal variances assumed for the two random variables.
method character string specifying the methodological approach used for estimating the
probability R, which is here interpreted as the area under the ROC curve (AUC).
The argument method can be set equal to "Wald", "RP" or "RPstar" (default),
according as inference is based on the Wald statistic, the signed log-likelihood
ratio statistic (directed likelihood, rp ) or the modified signed log-likelihood
ratio statistic (modified directed likelihood, rp∗ ), respectively. For estimating the
ROC curve parametrically, methods "Wald" and "RP" are equivalent and supply
maximum likelihood estimation (MLE), whereas, by using method "RPstar",
the estimate of the ROC curve is based on the modified signed log-likelihood ratio
statistic (rp∗ ). See rpstar for details on this statistic.
mc a numeric value indicating single or multiple plots in the same figure. In case mc
is equal to 1 (default), only the method specified in method is applied and the
corresponding estimated ROC curve is plotted. If mc is different from 1, both
MLE and rp∗ -based methods are applied, and two differently estimated ROC
curves are plotted.
Details
If mc is different from 1, method does not need to be specified.
Value
Plot of ROC curves
Note
The two independent random variables Y and X with given distribution distr are measurements of
the diagnostic marker on the diseased and non-diseased subjects, respectively.
In "Wald" method, or equivalently "RP" method, MLEs for parameters of the Y and X distributions
are computed and then used to estimate specificity and sensitivity. These measures are evaluated as
P (Y < t) and P (X > t), respectively.
In the "RPstar" method, parameters of the Y and X distributions are estimated from the rp∗ -based
estimate of the AUC.
Author(s)
<NAME>
References
Cortese G., <NAME>. (2013). Accurate higher-order likelihood inference on P (Y < X).
Computational Statistics, 28:1035-1059.
See Also
Prob
Examples
# data from the non-diseased population
Y <- rnorm(15, mean=5, sd=1)
# data from the diseased population
X <- rnorm(10, mean=7, sd=1.5)
ROC.plot(Y, X, "norm_DV", method = "RP", mc = 2)
rp Signed log-likelihood ratio statistic
Description
Compute the signed log-likelihood ratio statistic (rp ) for a given value of the stress-strength R =
P(Y<X), that is the parameter of interest, under given parametric model assumptions.
Usage
rp(ydat, xdat, psi, distr = "exp")
Arguments
ydat data vector of the sample measurements from Y.
xdat data vector of the sample measurements from X.
psi scalar for the parameter of interest. It is the value of R, treated as a parameter
under the parametric model construction.
distr character string specifying the type of distribution assumed for Y and X. Possible
choices for distr are "exp" (default) for the one-parameter exponential,
"norm_EV" and "norm_DV" for the Gaussian distribution with, respectively,
equal or unequal variances assumed for the two random variables.
Details
The two independent random variables Y and X with given distribution distr are measurements
from two different populations. For the relationship
of the parameter of interest (R) and nuisance parameters with the original parameters of distr, look
at the details in loglik.
Value
Value of the signed log-likelihood ratio statistic rp .
Note
The rp values can also be used for testing statistical hypotheses on the probability R.
Author(s)
<NAME>
References
<NAME>., <NAME>. (2013). Accurate higher-order likelihood inference on P(Y<X).
Computational Statistics, 28:1035-1059.
<NAME>. (2000). Likelihood Methods in Statistics. Oxford University Press, New York.
Brazzale AR., <NAME>., <NAME>. (2007). Applied Asymptotics. Case-Studies in Small Sample
Statistics. Cambridge University Press, Cambridge.
See Also
wald, rpstar, MLEs, Prob
Examples
# data from the first population
Y <- rnorm(15, mean=5, sd=1)
# data from the second population
X <- rnorm(10, mean=7, sd=1.5)
# value of r_p for psi=0.9
rp(Y, X, 0.9, "norm_DV")
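The statistic has the generic form rp(ψ) = sign(ψ̂ − ψ) · √(2[ℓp(ψ̂) − ℓp(ψ)]), where ℓp is the profile log-likelihood and ψ̂ its maximiser. The following Python sketch illustrates this on a simple one-parameter binomial model (deliberately simpler than the bivariate setting above; the function names are hypothetical, not the package's code):

```python
import math

def binom_loglik(p, k, n):
    """Binomial log-likelihood (up to an additive constant) for k successes in n trials."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def signed_root(p, k, n):
    """Signed log-likelihood ratio statistic r(p) = sign(phat - p) * sqrt(2*(l(phat) - l(p)))."""
    phat = k / n  # MLE of the success probability
    lr = 2.0 * (binom_loglik(phat, k, n) - binom_loglik(p, k, n))
    return math.copysign(math.sqrt(max(lr, 0.0)), phat - p)
```

At the MLE the statistic is zero; values of p far from the MLE give large |r|, which is how the statistic is inverted to obtain confidence limits.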
rpstar Modified signed log-likelihood ratio statistic
Description
Compute the modified signed log-likelihood ratio statistic (rp∗ ) for a given value of the stress-strength
R = P(Y<X), that is the parameter of interest, under given parametric model assumptions.
Usage
rpstar(ydat, xdat, psi, distr = "exp")
Arguments
ydat data vector of the sample measurements from Y.
xdat data vector of the sample measurements from X.
psi scalar for the parameter of interest. It is the value of R, treated as a parameter
under the parametric model construction.
distr character string specifying the type of distribution assumed for Y and X. Possible
choices for distr are "exp" (default) for the one-parameter exponential,
"norm_EV" and "norm_DV" for the Gaussian distribution with, respectively,
equal or unequal variances assumed for the two random variables.
Details
The two independent random variables Y and X with given distribution distr are measurements
from two different populations. For the relationship of the parameter of interest (R) and nuisance
parameters with the original parameters of distr, look at the details in loglik.
Value
rp Value of the signed log-likelihood ratio statistic rp .
rp_star Value of the modified signed log-likelihood ratio statistic rp∗ .
Note
The statistic rp∗ is a modified version of rp which provides statistically more accurate estimates. The
rp∗ values can also be used for testing statistical hypotheses on the probability R.
Author(s)
<NAME>
References
<NAME>., <NAME>. (2013). Accurate higher-order likelihood inference on P(Y<X).
Computational Statistics, 28:1035-1059.
Severini TA. (2000). Likelihood Methods in Statistics. Oxford University Press, New York.
Brazzale AR., <NAME>., <NAME>. (2007). Applied Asymptotics. Case-Studies in Small Sample
Statistics. Cambridge University Press, Cambridge.
See Also
wald, rp, MLEs, Prob
Examples
# data from the first population
Y <- rnorm(15, mean=5, sd=1)
# data from the second population
X <- rnorm(10, mean=7, sd=1.5)
# value of r_p^* for psi=0.9
rpstar(Y, X, 0.9, "norm_DV")
# method has to be set equal to "RPstar".
wald Wald statistic
Description
Compute the Wald statistic for a given value of the stress-strength R = P(Y<X), that is the parameter
of interest, under given parametric model assumptions.
Usage
wald(ydat, xdat, psi, distr = "exp")
Arguments
ydat data vector of the sample measurements from Y.
xdat data vector of the sample measurements from X.
psi scalar for the parameter of interest. It is the value of the quantity R, treated as a
parameter under the parametric model construction.
distr character string specifying the type of distribution assumed for Y and X. Possible
choices for distr are "exp" (default) for the one-parameter exponential,
"norm_EV" and "norm_DV" for the Gaussian distribution with, respectively,
equal or unequal variances assumed for the two random variables.
Details
The two independent random variables Y and X with given distribution distr are measurements
from two different populations. For the relationship of the parameter of interest (R) and nuisance
parameters with the original parameters of distr, look at the details in loglik.
Value
Wald Value of the Wald statistic for a given psi
Jphat Observed profile Fisher information
Note
Values of the Wald statistic can also be used for testing statistical hypotheses on the probability R.
Author(s)
<NAME>
References
<NAME>., <NAME>. (2013). Accurate higher-order likelihood inference on P(Y<X).
Computational Statistics, 28:1035-1059.
<NAME>., <NAME>., <NAME>. (2007). Applied Asymptotics. Case-Studies in Small Sample
Statistics. Cambridge University Press, Cambridge.
See Also
rp, rpstar, MLEs, Prob
Examples
# data from the first population
Y <- rnorm(15, mean=5, sd=1)
# data from the second population
X <- rnorm(10, mean=7, sd=1.5)
# value of the Wald statistic for psi=0.9
wald(Y, X, 0.9, "norm_DV")
open-spiel | readthedoc | YAML | OpenSpiel documentation
Welcome to OpenSpiel’s documentation
===
What is OpenSpiel?[¶](#what-is-openspiel)
---
OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. OpenSpiel also includes tools to analyze learning dynamics and other common evaluation metrics. Games are represented as procedural extensive-form games, with some natural extensions.
**Open Spiel supports**
* Single and multi-player games
* Fully observable (via observations) and imperfect information games (via information states and observations)
* Stochasticity (via explicit chance nodes mostly, even though implicit stochasticity is partially supported)
* n-player normal-form “one-shot” games and (2-player) matrix games
* Sequential and simultaneous move games
* Zero-sum, general-sum, and cooperative (identical payoff) games
**Multi-language support**
* C++17
* Python 3
The games and utility functions (e.g. exploitability computation) are written in C++. These are also available using
[pybind11](https://pybind11.readthedocs.io/en/stable/) Python bindings.
The methods names are in `CamelCase` in C++ and `snake_case` in Python (e.g.
`state.ApplyAction` in C++ will be `state.apply_action` in Python). See the pybind11 definition in [open_spiel/python/pybind11/pyspiel.cc](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/pybind11/pyspiel.cc)
for the full mapping between names.
For algorithms, many are written in both languages, even if some are only available from Python.
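The C++-to-Python name mapping described above follows the usual CamelCase → snake_case rule, which can be sketched mechanically (an illustration of the convention only, not code from OpenSpiel; names with embedded acronyms may not map one-to-one):

```python
import re

def camel_to_snake(name):
    """Convert a C++ CamelCase method name to its Python snake_case equivalent."""
    # Insert an underscore before every capital letter except the first, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

# e.g. state.ApplyAction in C++ corresponds to state.apply_action in Python
```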
**Platforms**
OpenSpiel has been tested on Linux (Ubuntu and Debian) and MacOS. There is limited support on [Windows 10](index.html#document-windows).
**Visualization of games**
There is a basic visualizer based on graphviz, see
[open_spiel/python/examples/treeviz_example.py](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/examples/treeviz_example.py).
There is an interactive viewer for OpenSpiel games called
[SpielViz](https://github.com/michalsustr/spielviz).
Installation[¶](#installation)
---
### Python-only installation via pip[¶](#python-only-installation-via-pip)
If you plan to only use the Python API, then the easiest way to install OpenSpiel is to use pip. On MacOS or Linux, simply run:
```
python3 -m pip install open_spiel
```
The binary distribution is new as of OpenSpiel 1.0.0, and is only supported on x86_64 architectures. If you encounter any problems, you can still install OpenSpiel via pip from source (see below), but please open an issue to let us know about the problem.
#### Python-only installation via pip (from source).[¶](#python-only-installation-via-pip-from-source)
If the binary distribution is not an option, you can also build OpenSpiel via pip from source. CMake, Clang and Python 3 development files are required to build the Python extension. Note that we recommend Clang but g++ >= 9.2 should also work.
E.g. on Ubuntu or Debian:
```
# Check to see if you have the necessary tools for building OpenSpiel:
cmake --version # Must be >= 3.17
clang++ --version # Must be >= 7.0.0
python3-config --help
# If not, run this line to install them.
# On older Linux distros, the package might be called clang-9 or clang-10
sudo apt-get install cmake clang python3-dev
# On older Linux distros, the versions may be too old.
# E.g. on Ubuntu 18.04, there are a few extra steps:
# sudo apt-get install clang-10
# pip3 install cmake # You might need to relogin to get the new CMake version
# export CXX=clang++-10
# Recommended: Install pip dependencies and run under virtualenv.
sudo apt-get install virtualenv python3-virtualenv
virtualenv -p python3 venv
source venv/bin/activate
# Finally, install OpenSpiel and its dependencies:
python3 -m pip install --upgrade setuptools pip
python3 -m pip install --no-binary=:open_spiel: open_spiel
# To exit the virtual env
deactivate
## **IMPORTANT NOTE**. If the build fails, please first make sure you have the
## required versions of the tools above and that you followed the recommended
## option. Then, open an issue: https://github.com/deepmind/open_spiel/issues
```
Note that the build could take several minutes.
On MacOS, you can install the dependencies via `brew install cmake python3`. For clang, you need to install or upgrade XCode and install the command-line developer tools.
### Installation from Source[¶](#installation-from-source)
The instructions here are for Linux and MacOS. For installation on Windows, see
[these separate installation instructions](index.html#document-windows). On Linux, we recommend Ubuntu 22.04, Debian 10, or later versions. On MacOS, we recommend XCode 11 or newer. For the Python API: our tests run using Python versions 3.7 - 3.10. If you encounter any problems on other setups, please let us know by opening an issue.
Currently there are three installation methods:
1. building from the source code and editing `PYTHONPATH`.
2. using `pip install` to build and testing using
[nox](https://nox.thea.codes/en/stable/).
3. installing via [Docker](https://www.docker.com).
### Summary[¶](#summary)
In a nutshell:
```
./install.sh # Needed to run once and when major changes are released.
./open_spiel/scripts/build_and_run_tests.sh # Run this every time you need to rebuild.
```
1. (Optional) Configure
[Conditional Dependencies](#configuring-conditional-dependencies).
2. Install system packages (e.g. cmake) and download some dependencies. Only needs to be run once or if you enable some new conditional dependencies.
```
./install.sh
```
3. Install your [Python dependencies](#installing-python-dependencies), e.g. in Python 3 using
[`virtualenv`](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/):
```
virtualenv -p python3 venv
source venv/bin/activate
```
Use `deactivate` to quit the virtual environment.
`pip` should be installed once and upgraded:
```
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
# Install pip deps as your user. Do not use the system's pip.
python3 get-pip.py
pip3 install --upgrade pip
pip3 install --upgrade setuptools testresources
```
Additionally, if you intend to use one of the optional Python dependencies (see [open_spiel/scripts/install.sh](https://github.com/deepmind/open_spiel/blob/master/open_spiel/scripts/install.sh)), you must manually install and/or upgrade them, e.g.: `pip install --upgrade torch==x.xx.x jax==x.x.x` where `x.xx.x` should be the desired version numbers (which can be found at the link above).
4. This sections differs depending on the installation procedure:
**Building and testing from source**
```
python3 -m pip install -r requirements.txt
./open_spiel/scripts/build_and_run_tests.sh
```
**Building and testing using PIP**
```
python3 -m pip install .
python3 -m pip install nox
nox -s tests
```
Optionally, use `pip install -e` to install in
[editable mode](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs),
which will allow you to skip this `pip install` step if you edit any Python source files. If you edit any C++ files, you will have to rerun the install command.
5. Only when building from source:
```
# For the python modules in open_spiel.
export PYTHONPATH=$PYTHONPATH:/<path_to_open_spiel>
# For the Python bindings of pyspiel
export PYTHONPATH=$PYTHONPATH:/<path_to_open_spiel>/build/python
```
Add these exports to `./venv/bin/activate` or your `~/.bashrc` to be able to import OpenSpiel from anywhere.
To make sure OpenSpiel works on the default configurations, we use the
`python3` command rather than `python` (which may still default to Python 2 on older Linux versions).
### Installing via Docker[¶](#installing-via-docker)
Please note that we don’t regularly test the Docker installation. As such, it may not work at any given time. If you encounter a problem, please
[open an issue](https://github.com/deepmind/open_spiel/issues).
Option 1 (Basic, 3.13GB):
```
docker build --target base -t openspiel -f Dockerfile.base .
```
Option 2 (Slim, 2.26GB):
```
docker build --target python-slim -t openspiel -f Dockerfile.base .
```
If you are only interested in developing in Python, use the second image. You can navigate through the runtime of the container (after the build step) with:
```
docker run -it --entrypoint /bin/bash openspiel
```
Finally you can run examples using:
```
docker run openspiel python3 python/examples/matrix_game_example.py
docker run openspiel python3 python/examples/example.py
```
Option 3 (Jupyter Notebook):
Installs OpenSpiel with an additional Jupyter Notebook environment.
```
docker build -t openspiel-notebook -f Dockerfile.jupyter --rm .
docker run -it --rm -p 8888:8888 openspiel-notebook
```
*More info*: https://jupyter-docker-stacks.readthedocs.io/en/latest/
### Running the first examples[¶](#running-the-first-examples)
In the `build` directory, running `examples/example` will print out a list of registered games and the usage. Now, let’s play a game of Tic-Tac-Toe with uniform random players:
```
examples/example --game=tic_tac_toe
```
Once the proper Python paths are set, from the main directory (one above
`build`), try these out:
```
# Similar to the C++ example:
python3 open_spiel/python/examples/example.py --game=breakthrough
# Play a game against a random or MCTS bot:
python3 open_spiel/python/examples/mcts.py --game=tic_tac_toe --player1=human --player2=random
python3 open_spiel/python/examples/mcts.py --game=tic_tac_toe --player1=human --player2=mcts
```
### Detailed steps[¶](#detailed-steps)
#### Configuring conditional dependencies[¶](#configuring-conditional-dependencies)
Conditional dependencies are configured using environment variables, e.g.
```
export OPEN_SPIEL_BUILD_WITH_HANABI=ON
```
`install.sh` may need to be rerun after enabling new conditional dependencies.
See [open_spiel/scripts/global_variables.sh](https://github.com/deepmind/open_spiel/blob/master/open_spiel/scripts/global_variables.sh) for the full list of conditional dependencies.
See also the [Developer Guide](developer_guide.md#conditional-dependencies).
#### Installing system-wide dependencies[¶](#installing-system-wide-dependencies)
See [open_spiel/scripts/install.sh](https://github.com/deepmind/open_spiel/blob/master/open_spiel/scripts/install.sh) for the required packages and cloned repositories.
#### Installing Python dependencies[¶](#installing-python-dependencies)
Using a `virtualenv` to install python dependencies is highly recommended. For more information see:
<https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/>
##### Required dependencies[¶](#required-dependencies)
Install required dependencies (Python 3):
```
# Ubuntu 22.04 and newer:
python3 -m venv ./venv
source venv/bin/activate
python3 -m pip install -r requirements.txt
# Older than Ubuntu 22.04:
virtualenv -p python3 venv
source venv/bin/activate
python3 -m pip install -r requirements.txt
```
Alternatively, although not recommended, you can install the Python dependencies system-wide with:
```
python3 -m pip install --upgrade -r requirements.txt
```
##### Optional dependencies[¶](#optional-dependencies)
Additionally, if you intend to use one of the optional Python dependencies (see [open_spiel/scripts/install.sh](https://github.com/deepmind/open_spiel/blob/master/open_spiel/scripts/install.sh)), you must manually install and/or upgrade them. The installation scripts will not install or upgrade these dependencies. e.g.:
```
python3 -m pip install --upgrade torch==x.xx.x jax==x.x.x
```
where `x.xx.x` should be the desired version numbers (which can be found at the link above).
#### Building and running tests[¶](#building-and-running-tests)
Make sure that the virtual environment is still activated.
By default, Clang C++ compiler is used (and potentially installed by
[open_spiel/scripts/install.sh](https://github.com/deepmind/open_spiel/blob/master/open_spiel/scripts/install.sh)).
Build and run tests (Python 3):
```
mkdir build
cd build
CXX=clang++ cmake -DPython3_EXECUTABLE=$(which python3) -DCMAKE_CXX_COMPILER=${CXX} ../open_spiel
make -j$(nproc)
ctest -j$(nproc)
```
The CMake variable `Python3_EXECUTABLE` is used to specify the Python interpreter. If the variable is not set, CMake’s FindPython3 module will prefer the latest version installed. Note, Python >= 3.7 is required.
One can run an example of a game running (in the `build/` folder):
```
./examples/example --game=tic_tac_toe
```
#### Setting Your PYTHONPATH environment variable[¶](#setting-your-pythonpath-environment-variable)
To be able to import the Python code (both the C++ binding `pyspiel` and the rest) from any location, you will need to add to your PYTHONPATH the root directory and the `open_spiel` directory.
When using a virtualenv, the following should be added to
`<virtualenv>/bin/activate`. For a system-wide install, add it to your `.bashrc`
or `.profile`.
```
# For the python modules in open_spiel.
export PYTHONPATH=$PYTHONPATH:/<path_to_open_spiel>
# For the Python bindings of pyspiel.
export PYTHONPATH=$PYTHONPATH:/<path_to_open_spiel>/build/python
```
First examples[¶](#first-examples)
---
One can run an example of a game running (in the `build/` folder):
```
./examples/example --game=tic_tac_toe
```
Similar examples using the Python API (run from one above `build`):
```
# Similar to the C++ example:
python3 open_spiel/python/examples/example.py --game=breakthrough
# Play a game against a random or MCTS bot:
python3 open_spiel/python/examples/mcts.py --game=tic_tac_toe --player1=human --player2=random python3 open_spiel/python/examples/mcts.py --game=tic_tac_toe --player1=human --player2=mcts
```
Concepts[¶](#concepts)
---
The following documentation describes the high-level concepts. Refer to the code comments for specific API descriptions.
Note that, in English, the word “game” is used for both the description of the rules (e.g. the game of chess) and for a specific instance of a playthrough
(e.g. “we played a game of chess yesterday”). We will be using “playthrough” or
“trajectory” to refer to the second concept.
Method names are in `CamelCase` in C++ and `snake_case` in Python, with no other difference (e.g. `state.ApplyAction` in C++ will be
`state.apply_action` in Python).
### The tree representation[¶](#the-tree-representation)
There are two main concepts to know about (defined in
[open_spiel/spiel.h](https://github.com/deepmind/open_spiel/blob/master/open_spiel/spiel.h)):
* A `Game` object contains the high level description for a game (e.g. whether it is simultaneous or sequential, the number of players, the maximum and minimum scores).
* A `State`, which describes a specific point within a trajectory (e.g. a specific board position in chess, or a specific set of player cards, public cards, and past bets in poker).
All possible trajectories in a game are represented as a tree. In this tree, a node is a `State` and is associated to a specific history of moves for all players. Transitions are actions taken by players (in case of a simultaneous node, the transition is composed of the actions for all players).
Note that in most games, we deal with chance (i.e. any source of randomness)
using an explicit player (the “chance” player, which has id
`kChancePlayerId`). For example, in poker, the root state would just be the players without any cards, and the first transitions will be chance nodes that deal the cards to the players (in practice, one card is dealt per transition).
See `spiel.h` for the full API description. For example,
`game.NewInitialState()` will return the root `State`. Then,
`state.LegalActions()` can be used to get the possible legal actions and
`state.ApplyAction(action)` can be used to update `state` in place to play the given `action` (use `state.Child(action)` to create a new state and apply the action to it).
Loading a game[¶](#loading-a-game)
---
The games are all implemented in C++ in [open_spiel/games](https://github.com/deepmind/open_spiel/blob/master/open_spiel/games).
Available games names can be listed using `RegisteredNames()`.
A game can be created from its name and its arguments (which usually have defaults). There are two ways to create a game:
* Using the game name and a structured `GameParameters` object (which, in Python, is a dictionary from argument name to compatible types: int, bool,
str, or a further dict), e.g. `{"players": 3}`, with `LoadGame`.
* Using a string representation such as `kuhn_poker(players=3)`, giving
`LoadGame("kuhn_poker(players=3)")`. See `open_spiel/game_parameters.cc` for the exact syntax.
### Creating sequential games from simultaneous games[¶](#creating-sequential-games-from-simultaneous-games)
It is possible to apply generic game transformations (see
[open_spiel/game_transforms/](https://github.com/deepmind/open_spiel/blob/master/open_spiel/game_transforms/)) such as loading an `n`-player simultaneous game as an equivalent turn-based game in which simultaneous moves are encoded as `n` consecutive turns.
One can use `LoadGameAsTurnBased(game)`, or use the string representation, such as
`turn_based_simultaneous_game(game=goofspiel(imp_info=True,num_cards=4,points_order=descending))`.
Playing a trajectory[¶](#playing-a-trajectory)
---
Here, for example, is Python code to play one trajectory:
```
import random
import pyspiel
import numpy as np
game = pyspiel.load_game("kuhn_poker")
state = game.new_initial_state()
while not state.is_terminal():
legal_actions = state.legal_actions()
if state.is_chance_node():
# Sample a chance event outcome.
outcomes_with_probs = state.chance_outcomes()
action_list, prob_list = zip(*outcomes_with_probs)
action = np.random.choice(action_list, p=prob_list)
state.apply_action(action)
else:
# The algorithm can pick an action based on an observation (fully observable
# games) or an information state (information available for that player)
# We arbitrarily select the first available action as an example.
action = legal_actions[0]
state.apply_action(action)
```
See [open_spiel/python/examples/example.py](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/examples/example.py) for a more thorough example that covers more of the core API.
See [open_spiel/python/examples/playthrough.py](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/examples/playthrough.py) (and
[open_spiel/python/algorithms/generate_playthrough.py](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/algorithms/generate_playthrough.py)) for a richer example generating a playthrough and printing all available information.
In C++, see [open_spiel/examples/example.cc](https://github.com/deepmind/open_spiel/blob/master/open_spiel/examples/example.cc) which generates random trajectories.
OpenSpiel Core API Reference[¶](#openspiel-core-api-reference)
---
OpenSpiel consists of several core functions and classes. This page acts as a helpful reminder of how to use the main functionality of OpenSpiel.
Most of the functions are described and illustrated via Python syntax and examples, and there are pointers to the corresponding C++ functions.
**Disclaimer**: This is meant as a guide to facilitate OpenSpiel development in Python. However,
[spiel.h](https://github.com/deepmind/open_spiel/blob/master/open_spiel/spiel.h)
remains the single source of truth for documentation on the core API.
### Core Functions[¶](#core-functions)
| Method | Python | C++ | Description |
| --- | --- | --- | --- |
| `deserialize_game_and_state(serialized_data: string)` | [Python](api_reference/game_deserialize_game_and_state.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L1127) | Returns a tuple of (game, state) reconstructed from the serialized object data. |
| `load_game(game_string: str)` | [Python](api_reference/load_game.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L1080) | Returns a game object for the specified game string. |
| `load_game(game_string: str, parameters: Dict[str, Any])` | [Python](api_reference/load_game.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L1083) | Returns a game object for the specified game string and parameter values. |
| `registered_names()` | [Python](api_reference/registered_names.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L1051) | Returns a list of all short names of games in the library. |
| `serialize_game_and_state(game: pyspiel.Game, state: pyspiel.State)` | [Python](api_reference/game_serialize_game_and_state.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L1104) | Returns a string representation of the state and game that created it. |
### State methods[¶](#state-methods)
| Method | Python | C++ | Description |
| --- | --- | --- | --- |
| `action_to_string(player: int, action: int)` | [Python](api_reference/state_action_to_string.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L289) | Returns a string representation of the specified player's action. |
| `apply_action(action: int)` | [Python](api_reference/state_apply_action.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L230) | Applies the specified action to the state. |
| `apply_actions(actions: List[int])` | [Python](api_reference/state_apply_actions.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L581) | Applies the specified joint action (action for each player) to the state. |
| `chance_outcomes()` | [Python](api_reference/state_chance_outcomes.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L604) | Returns a list of (action, prob) tuples representing the chance outcome distribution. |
| `current_player()` | [Python](api_reference/state_current_player.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L225) | Returns the player ID of the acting player. |
| `history()` | [Python](api_reference/state_history.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L406) | Returns the sequence of actions taken by all players since the start of the game. |
| `information_state_string()` | [Python](api_reference/state_information_state_string.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L433) | Returns a string representing the information state for the current player. |
| `information_state_string(player: int)` | [Python](api_reference/state_information_state_string.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L433) | Returns a string representing the information state for the specified player. |
| `information_state_tensor()` | [Python](api_reference/state_information_state_tensor.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L488) | Returns a list of floats representing the information state for the current player. |
| `information_state_tensor(player: int)` | [Python](api_reference/state_information_state_tensor.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L488) | Returns a list of floats representing the information state for the specified player. |
| `is_chance_node()` | [Python](api_reference/state_is_chance_node.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L368) | Returns True if the state represents a chance node, False otherwise. |
| `is_simultaneous_node()` | [Python](api_reference/state_is_simultaneous_node.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L385) | Returns True if the state represents a simultaneous player node, False otherwise. |
| `is_terminal()` | [Python](api_reference/state_is_terminal.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L322) | Returns True if the state is terminal (game has finished), False otherwise. |
| `legal_actions()` | [Python](api_reference/state_legal_actions.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L263) | Returns the list of legal actions for the current player. |
| `legal_actions(player: int)` | [Python](api_reference/state_legal_actions.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L245) | Returns the list of legal actions for the specified player. |
| `observation_string()` | [Python](api_reference/state_observation_string.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L516) | Returns a string representing the observation for the current player. |
| `observation_string(player: int)` | [Python](api_reference/state_observation_string.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L516) | Returns a string representing the observation for the specified player. |
| `observation_tensor()` | [Python](api_reference/state_observation_tensor.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L547) | Returns a list of floats representing the observation for the current player. |
| `observation_tensor(player: int)` | [Python](api_reference/state_observation_tensor.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L547) | Returns a list of floats representing the observation for the specified player. |
| `returns()` | [Python](api_reference/state_returns.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L346) | Returns the list of returns (cumulated reward from the start of the game): one value per player. |
| `rewards()` | [Python](api_reference/state_rewards.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L325) | Returns the list of intermediate rewards (rewards obtained since the last time the player acted): one value per player. |
| `serialize()` | [Python](api_reference/state_serialize.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L636) | Returns a string representation of the state which can be used to reconstruct the state from the game. |
### Game methods[¶](#game-methods)
| Method | Python | C++ | Description |
| --- | --- | --- | --- |
| `action_to_string(player: int, action: int)` | [Python](api_reference/game_action_to_string.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L946) | Returns a (state-independent) string representation of the specified player's action. |
| `deserialize_state(serialized_data: str)` | [Python](api_reference/game_deserialize_state.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L863) | Reconstructs the state from the serialized state string. |
| `information_state_tensor_shape()` | [Python](api_reference/game_information_state_tensor_shape_size.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L815) | Shape that the information state tensor should be perceived as. |
| `information_state_tensor_size()` | [Python](api_reference/game_information_state_tensor_shape_size.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L827) | Size of the list (number of values) returned by the state's information state tensor function. |
| `max_chance_outcomes()` | [Python](api_reference/game_max_chance_outcomes.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L778) | The maximum number of distinct chance outcomes for chance nodes in the game. |
| `max_game_length()` | [Python](api_reference/game_max_game_length.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L873) | The maximum length of any one game (in terms of number of decision nodes visited in the game tree). |
| `max_utility()` | [Python](api_reference/game_max_min_utility.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L795) | The maximum achievable utility (return) over any playing (episode) of the game. |
| `min_utility()` | [Python](api_reference/game_max_min_utility.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L795) | The minimum achievable utility (return) over any playing (episode) of the game. |
| `new_initial_state()` | [Python](api_reference/game_new_initial_state.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L764) | Returns a new initial state of the game (note: which might be a chance node). |
| `num_distinct_actions()` | [Python](api_reference/game_num_distinct_actions.html) | [C++](https://github.com/deepmind/open_spiel/blob/c6fafb92021a8a3aa5f9746cdb79e74917ed26a5/open_spiel/spiel.h#L752) | Returns the number of (state-independent) distinct actions in the game. |
| `observation_tensor_shape()` | [Python](api_reference/game_observation_tensor_shape_size.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L835) | Shape that the observation tensor should be perceived as. |
| `observation_tensor_size()` | [Python](api_reference/game_observation_tensor_shape_size.html) | [C++](https://github.com/deepmind/open_spiel/blob/89ba2264a66d9db299108fbd2de4a27b71973f54/open_spiel/spiel.h#L847) | Size of the list (number of values) returned by the state's observation tensor function. |
Available algorithms[¶](#available-algorithms)
---
**✓**: thoroughly-tested. In many cases, we verified against known values and/or reproduced results from papers.
**~**: implemented but lightly tested.
**X**: known problems; please see github issues.
| Algorithms | Category | Reference | Status |
| --- | --- | --- | --- |
| Information Set Monte Carlo Tree Search (IS-MCTS) | Search | [Cowling et al. '12](https://ieeexplore.ieee.org/abstract/document/6203567) | **~** |
| Minimax (and Alpha-Beta) Search | Search | [Wikipedia1](https://en.wikipedia.org/wiki/Minimax#Minimax_algorithm_with_alternate_moves), [Wikipedia2](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning), Knuth and Moore '75 | |
| Monte Carlo Tree Search | Search | [Wikipedia](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search), [UCT paper](http://ggp.stanford.edu/readings/uct.pdf), [Coulom '06](https://hal.inria.fr/inria-00116992/document), [Cowling et al. survey](http://www.incompleteideas.net/609%20dropbox/other%20readings%20and%20resources/MCTS-survey.pdf) | |
| Lemke-Howson (via nashpy) | Opt. | [Wikipedia](https://en.wikipedia.org/wiki/Lemke%E2%80%93Howson_algorithm), [Shoham & Leyton-Brown '09](http://masfoundations.org/) | |
| ADIDAS | Opt. | [Gemp et al '22](https://arxiv.org/abs/2106.01285) | **~** |
| Sequence-form linear programming | Opt. | [Koller, Megiddo, and von Stengel '94](http://theory.stanford.edu/~megiddo/pdf/stoc94.pdf), [Shoham & Leyton-Brown '09](http://masfoundations.org/) | |
| Stackelberg equilibrium solver | Opt. | [Conitzer & Sandholm '06](https://users.cs.duke.edu/~conitzer/commitEC06.pdf) | **~** |
| MIP-Nash | Opt. | [Sandholm et al. '05](https://dl.acm.org/doi/10.5555/1619410.1619413) | **~** |
| Magnetic Mirror Descent (MMD) with dilated entropy | Opt. | [Sokota et al. '22](https://arxiv.org/abs/2206.05825) | **~** |
| Counterfactual Regret Minimization (CFR) | Tabular | [Zinkevich et al '08](https://poker.cs.ualberta.ca/publications/NIPS07-cfr.pdf), [Neller & Lanctot '13](http://modelai.gettysburg.edu/2013/cfr/cfr.pdf) | |
| CFR against a best responder (CFR-BR) | Tabular | [Johanson et al '12](https://poker.cs.ualberta.ca/publications/AAAI12-cfrbr.pdf) | |
| Exploitability / Best response | Tabular | [Shoham & Leyton-Brown '09](http://masfoundations.org/) | |
| External sampling Monte Carlo CFR | Tabular | [Lanctot et al. '09](http://mlanctot.info/files/papers/nips09mccfr.pdf), [Lanctot '13](http://mlanctot.info/files/papers/PhD_Thesis_MarcLanctot.pdf) | |
| Fixed Strategy Iteration CFR (FSICFR) | Tabular | [Neller & Hnath '11](https://cupola.gettysburg.edu/csfac/2/) | **~** |
| Mean-field Fictitious Play for MFG | Tabular | [Perrin et. al. '20](https://arxiv.org/abs/2007.03458) | **~** |
| Online Mirror Descent for MFG | Tabular | [Perolat et. al. '21](https://arxiv.org/abs/2103.00623) | **~** |
| Munchausen Online Mirror Descent for MFG | Tabular | [Lauriere et. al. '22](https://arxiv.org/pdf/2203.11973) | **~** |
| Fixed Point for MFG | Tabular | [Huang et. al. '06](https://zbmath.org/?q=an:1136.91349) | **~** |
| Boltzmann Policy Iteration for MFG | Tabular | [Lauriere et. al. '22](https://arxiv.org/pdf/2203.11973) | **~** |
| Outcome sampling Monte Carlo CFR | Tabular | [Lanctot et al. '09](http://mlanctot.info/files/papers/nips09mccfr.pdf), [Lanctot '13](http://mlanctot.info/files/papers/PhD_Thesis_MarcLanctot.pdf) | |
| Policy Iteration | Tabular | [Sutton & Barto '18](http://incompleteideas.net/book/the-book-2nd.html) | |
| Q-learning | Tabular | [Sutton & Barto '18](http://incompleteideas.net/book/the-book-2nd.html) | |
| Regret Matching | Tabular | [Hart & Mas-Colell '00](https://onlinelibrary.wiley.com/doi/abs/10.1111/1468-0262.00153) | |
| Restricted Nash Response (RNR) | Tabular | [Johanson et al '08](http://johanson.ca/publications/poker/2007-nips-rnash/2007-nips-rnash.html) | **~** |
| SARSA | Tabular | [Sutton & Barto '18](http://incompleteideas.net/book/the-book-2nd.html) | |
| Value Iteration | Tabular | [Sutton & Barto '18](http://incompleteideas.net/book/the-book-2nd.html) | |
| Advantage Actor-Critic (A2C) | RL | [Mnih et al. '16](https://arxiv.org/abs/1602.01783) | |
| Deep Q-networks (DQN) | RL | [Mnih et al. '15](https://www.nature.com/articles/nature14236) | |
| Ephemeral Value Adjustments (EVA) | RL | [Hansen et al. '18](https://arxiv.org/abs/1810.08163) | **~** |
| Proximal Policy Optimization (PPO) | RL | [Schulman et al. '18](https://arxiv.org/abs/1707.06347) | **~** |
| AlphaZero (C++/LibTorch) | MARL | [Silver et al. '18](https://science.sciencemag.org/content/362/6419/1140) | |
| AlphaZero (Python/TF) | MARL | [Silver et al. '18](https://science.sciencemag.org/content/362/6419/1140) | |
| Correlated Q-Learning | MARL | [Greenwald & Hall '03](https://www.aaai.org/Papers/ICML/2003/ICML03-034.pdf) | **~** |
| Asymmetric Q-Learning | MARL | [Kononen '04](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.9458&rep=rep1&type=pdf) | **~** |
| Deep CFR | MARL | [Brown et al. '18](https://arxiv.org/abs/1811.00164) | |
| DiCE: The Infinitely Differentiable Monte-Carlo Estimator (LOLA-DiCE) | MARL | [<NAME>, Al-Shedivat et al. '18](http://proceedings.mlr.press/v80/foerster18a/foerster18a.pdf) | **~** |
| Exploitability Descent (ED) | MARL | [Lockhart et al. '19](https://arxiv.org/abs/1903.05614) | |
| (Extensive-form) Fictitious Play (XFP) | MARL | [Heinrich, Lanctot, & Silver '15](http://proceedings.mlr.press/v37/heinrich15.pdf) | |
| Learning with Opponent-Learning Awareness (LOLA) | MARL | [Foerster, Chen, Al-Shedivat, et al. '18](https://arxiv.org/pdf/1709.04326.pdf) | **~** |
| Nash Q-Learning | MARL | [<NAME> '03](https://www.jmlr.org/papers/volume4/hu03a/hu03a.pdf) | **~** |
| Neural Fictitious Self-Play (NFSP) | MARL | [Heinrich & Silver '16](https://arxiv.org/abs/1603.01121) | |
| Neural Replicator Dynamics (NeuRD) | MARL | [Omidshafiei, <NAME>, et al. '19](https://arxiv.org/abs/1906.00190) | **X** |
| Regret Policy Gradients (RPG, RMPG) | MARL | [Srinivasan, Lanctot, et al. '18](https://arxiv.org/abs/1810.09026) | |
| Policy-Space Response Oracles (PSRO) | MARL | [Lanctot et al. '17](https://arxiv.org/abs/1711.00832) | |
| Q-based ("all-actions") Policy Gradient (QPG) | MARL | [Srinivasan, Lanctot, et al. '18](https://arxiv.org/abs/1810.09026) | |
| Regularized Nash Dynamics (R-NaD) | MARL | [Perolat, De Vylder, et al. '22](https://arxiv.org/abs/2206.15378) | |
| Regression CFR (RCFR) | MARL | [Waugh et al. '15](https://arxiv.org/abs/1411.7974), [Morrill '16](https://poker.cs.ualberta.ca/publications/Morrill_Dustin_R_201603_MSc.pdf) | |
| Rectified Nash Response (PSRO_rn) | MARL | [Balduzzi et al. '19](https://arxiv.org/abs/1901.08106) | **~** |
| Win-or-Learn-Fast Policy-Hill Climbing (WoLF-PHC) | MARL | [Bowling & Veloso '02](https://www.sciencedirect.com/science/article/pii/S0004370202001212) | **~** |
| α-Rank | Eval. / Viz. | [Omidhsafiei et al. '19](https://www.nature.com/articles/s41598-019-45619-9), [arXiv](https://arxiv.org/abs/1903.01373) | |
| Nash Averaging | Eval. / Viz. | [Balduzzi et al. '18](https://arxiv.org/abs/1806.02643) | **~** |
| Replicator / Evolutionary Dynamics | Eval. / Viz. | [Hofbaeur & Sigmund '98](https://www.cambridge.org/core/books/evolutionary-games-and-population-dynamics/A8D94EBE6A16837E7CB3CED24E1948F8), [Sandholm '10](https://mitpress.mit.edu/books/population-games-and-evolutionary-dynamics) | |
Available games[¶](#available-games)
---
**✓**: thoroughly-tested. In many cases, we verified against known values and/or reproduced results from papers.
**~**: implemented but lightly tested.
**X**: known issues (see code for details).
| Status | Game |
| --- | --- |
| **~** | [2048](#2048) |
| **~** | [Amazons](#amazons) |
| **~** | [Atari](#atari) |
| | [Backgammon](#backgammon) |
| **~** | [Bargaining](#bargaining) |
| **~** | [Battleship](#battleship) |
| **~** | [Blackjack](#blackjack) |
| **~** | [Block Dominoes](#block-dominoes) |
| | [Breakthrough](#breakthrough) |
| | [Bridge](#bridge) |
| | [(Uncontested) Bridge bidding](#uncontested-bridge-bidding) |
| **~** | [Catch](#catch) |
| **~** | [Checkers](#checkers) |
| **~** | [Cliff Walking](#cliff-walking) |
| **~** | [Clobber](#clobber) |
| **~** | [Coin Game](#coin-game) |
| **~** | [Colored Trails](#colored-trails) |
| | [Connect Four](#connect-four) |
| **~** | [Cooperative Box-Pushing](#cooperative-box-pushing) |
| | [Chess](#chess) |
| **~** | [Crazy Eights](#crazy-eights) |
| **~** | [Dark Hex](#dark-hex) |
| **~** | [Deep Sea](#deep-sea) |
| **~** | [Dou Dizhu](#dou-dizhu) |
| **~** | [Euchre](#euchre) |
| | [First-price Sealed-Bid Auction](#first-price-sealed-bid-auction) |
| | [Gin Rummy](#gin-rummy) |
| | [Go](#go) |
| | [Goofspiel](#goofspiel) |
| | [Hanabi](#hanabi) |
| | [Havannah](#havannah) |
| | [Hearts](#hearts) |
| **~** | [Hex](#hex) |
| **~** | [Kriegspiel](#Kriegspiel) |
| | [Kuhn poker](#kuhn-poker) |
| **~** | [Laser Tag](#laser-tag) |
| | [Leduc poker](#leduc-poker) |
| **~** | [Lewis Signaling](#lewis-signaling) |
| | [Liar's Dice](#liars-dice) |
| **~** | [Liar's Poker](#liars-poker) |
| **~** | [Mensch ärgere Dich nicht](#mensch-aergere-dich-nicht) |
| **~** | [Mancala](#mancala) |
| **~** | [Markov Soccer](#markov-soccer) |
| | [Matching Pennies (Three-player)](#matching-pennies-three-player) |
| | [Mean Field Game : garnet](#mean_field_game_garnet) |
| | [Mean Field Game : crowd modelling](#mean_field_game_crowd_modelling) |
| | [Mean Field Game : crowd modelling 2d](#mean_field_game_crowd_modelling_2d) |
| | [Mean Field Game : linear quadratic](#mean-field-game--linear-quadratic) |
| | [Mean Field Game : predator prey](#mean_field_game_predator_prey) |
| | [Mean Field Game : routing](#mean-field-game--routing) |
| **~** | [Morpion Solitaire (4D)](#morpion-solitaire-4d) |
| | [Negotiation](#negotiation) |
| **~** | [Nim](#nim) |
| **~** | [Nine men's morris](#nine_mens_morris) |
| **~** | [Oh Hell](#oh-hell) |
| | [Oshi-Zumo](#oshi-zumo) |
| | [Oware](#oware) |
| **~** | [Pathfinding](#pathfinding) |
| | [Pentago](#pentago) |
| **~** | [Phantom Go](#phantom-go) |
| **~** | [Phantom Tic-Tac-Toe](#phantom-tic-tac-toe) |
| | [Pig](#pig) |
| **~** | [Poker (Hold 'em)](#poker-hold-em) |
| | [Quoridor](#quoridor) |
| **~** | [Reconnaissance Blind Chess](#reconnaissance-blind-chess) |
| | [Routing game](#routing-game) |
| **~** | [Sheriff](#sheriff) |
| **~** | [Slovenian Tarok](#slovenian-tarok) |
| **~** | [Skat (simplified bidding)](#skat-simplified-bidding) |
| **~** | [Solitaire (K+)](#solitaire-k) |
| | [Tic-Tac-Toe](#tic-tac-toe) |
| | [Tiny Bridge](#tiny-bridge) |
| | [Tiny Hanabi](#tiny-hanabi) |
| | [Trade Comm](#trade-comm) |
| **~** | [Ultimate Tic-Tac-Toe](#ultimate-tic-tac-toe) |
| | [Y](#y) |
### Details[¶](#details)
#### 2048[¶](#id1)
* A single-player game where the player aims to create a 2048 tile by merging other tiles.
* Numbers on a grid.
* Modern game.
* Non-deterministic.
* Perfect information.
* 1 player.
* [Github](https://github.com/gabrielecirulli/2048)
#### Amazons[¶](#amazons)
* Move pieces on a board trying to block opponents from moving.
* Pieces on a grid.
* Modern game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Game_of_the_Amazons)
#### Atari[¶](#atari)
* Agent plays classic games from
[Gym’s Atari Environments](https://www.gymlibrary.dev/environments/atari/),
such as Breakout.
* Single player.
* Most games are non-deterministic.
* Perfect information.
#### Backgammon[¶](#backgammon)
* Players move their pieces through the board based on the rolls of dice.
* Idiosyncratic format.
* Traditional game.
* Non-deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Backgammon)
#### Bargaining[¶](#bargaining)
* Agents negotiate for items in a pool with different (hidden) valuations.
* Research game.
* Non-deterministic (randomized pool and valuations).
* Imperfect information.
* 2 players.
* [Lewis et al. ‘17](https://arxiv.org/abs/1706.05125),
[DeVault et al. ‘15](https://www.aaai.org/ocs/index.php/SSS/SSS15/paper/viewFile/10335/10100)
#### Battleship[¶](#battleship)
* Players place ships and shoot at each other in turns.
* Pieces on a board.
* Traditional game.
* Deterministic.
* Imperfect information.
* 2 players.
* Good for correlated equilibria.
* [Farina et al. ‘19, Correlation in Extensive-Form Games: Saddle-Point Formulation and Benchmarks](https://papers.nips.cc/paper/9122-correlation-in-extensive-form-games-saddle-point-formulation-and-benchmarks.pdf).
Based on the original game
[(wikipedia)](https://en.wikipedia.org/wiki/Battleship_(game))
#### Blackjack[¶](#blackjack)
* Simplified version of blackjack, with only HIT/STAND moves.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 1 player.
* [Wikipedia](https://en.wikipedia.org/wiki/Blackjack)
#### Block Dominoes[¶](#block-dominoes)
* The simplest version of dominoes.
* Consists of 28 tiles, featuring all combinations of spot counts (also called pips or dots) between zero and six.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Dominoes#Blocking_game)
#### Breakthrough[¶](#breakthrough)
* Simplified chess using only pawns.
* Pieces on a grid.
* Modern game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Breakthrough_(board_game))
#### Bridge[¶](#bridge)
* A card game where players compete in pairs.
* Card game.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 4 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Contract_bridge)
#### (Uncontested) Bridge bidding[¶](#uncontested-bridge-bidding)
* Players score points by forming specific sets with the cards in their hands.
* Card game.
* Research game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Contract_bridge)
#### Catch[¶](#catch)
* Agent must move horizontally to ‘catch’ a descending ball. Designed to test basic learning.
* Agent on a grid.
* Research game.
* Non-deterministic.
* Perfect information.
* 1 player.
* [Mnih et al. 2014, Recurrent Models of Visual Attention](https://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf),
[Osband et al ‘19, Behaviour Suite for Reinforcement Learning, Appendix A](https://arxiv.org/abs/1908.03568)
#### Checkers[¶](#checkers)
* Players move pieces around the board with the goal of eliminating the opposing pieces.
* Pieces on a grid.
* Traditional game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Checkers)
#### Cliff Walking[¶](#cliff-walking)
* Agent must find goal without falling off a cliff. Designed to demonstrate exploration-with-danger.
* Agent on a grid.
* Research game.
* Deterministic.
* Perfect information.
* 1 player.
* [Sutton et al. ‘18, page 132](http://www.incompleteideas.net/book/bookdraft2018mar21.pdf)
#### Clobber[¶](#clobber)
* Simplified checkers, where tokens can capture neighbouring tokens. Designed to be amenable to combinatorial analysis.
* Pieces on a grid.
* Research game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Clobber)
#### Coin Game[¶](#coin-game)
* Agents must collect their own and their collaborator’s tokens while avoiding a third kind of token. Designed to test divining of a collaborator’s intentions.
* Agents on a grid.
* Research game.
* Non-deterministic.
* Imperfect information (all players see the grid and their own preferences,
but not the preferences of other players).
* 2 players.
* [Raileanu et al. ‘18, Modeling Others using Oneself in Multi-Agent Reinforcement Learning](https://arxiv.org/abs/1802.09640)
#### Colored Trails[¶](#colored-trails)
* Agents negotiate for chips that they play on a colored grid to move closer to the goal.
* Agents on a grid.
* Research game.
* Non-deterministic (randomized board & chip configuration).
* Imperfect information.
* 3 players.
* [Ya’akov et al. ‘10](https://dash.harvard.edu/handle/1/4726287),
[Ficici & Pfeffer ‘08](https://dl.acm.org/doi/10.5555/1402383.1402431),
[de Jong et al. ‘11](https://www.ifaamas.org/Proceedings/aamas2011/papers/C4_R57.pdf)
#### Connect Four[¶](#connect-four)
* Players drop tokens into columns to try and form a pattern.
* Tokens on a grid.
* Traditional game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Connect_Four)
#### Cooperative Box-Pushing[¶](#cooperative-box-pushing)
* Agents must collaborate to push a box into the goal. Designed to test collaboration.
* Agents on a grid.
* Research game.
* Deterministic.
* Perfect information.
* 2 players.
* [Seuken & Zilberstein ‘12, Improved Memory-Bounded Dynamic Programming for Decentralized POMDPs](https://arxiv.org/abs/1206.5295)
#### Chess[¶](#chess)
* Players move pieces around the board with the goal of eliminating the opposing pieces.
* Pieces on a grid.
* Traditional game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Chess)
#### Crazy Eights[¶](#crazy-eights)
* A precursor of UNO (see [here](https://www.unorules.org/crazy-eights/)).
* Players try to match the rank or suit of the previously played card.
* Eights are viewed as wild cards.
* In an alternative version, special cards such as skip, reverse, draw-two are permitted.
* Non-deterministic.
* Imperfect information.
* 2 or more players.
* [Wikipedia](https://en.wikipedia.org/wiki/Crazy_Eights)
#### Dark Hex[¶](#dark-hex)
* Hex, except the opponent’s tokens are hidden. (Imperfect-information version)
* Uses tokens on a hex grid.
* Research game.
* Deterministic.
* Imperfect information.
* 2 players.
#### Deep Sea[¶](#deep-sea)
* Agent must explore to find reward (first version) or penalty (second version). Designed to test exploration.
* Agent on a grid.
* Research game.
* Deterministic.
* Perfect information.
* 1 player.
* [Osband et al. ‘17, Deep Exploration via Randomized Value Functions](https://arxiv.org/abs/1703.07608)
#### Dou Dizhu[¶](#dou-dizhu)
* A three-player game where one player (dizhu) plays against a team of two
(peasants).
* Uses a 54-card deck.
* Non-deterministic.
* Imperfect information.
* Three players.
* [Wikipedia](https://en.wikipedia.org/wiki/Dou_dizhu)
#### Euchre[¶](#euchre)
* Trick-taking card game where players compete in pairs.
* Card game.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 4 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Euchre)
#### First-price Sealed-Bid Auction[¶](#first-price-sealed-bid-auction)
* Agents submit bids simultaneously; highest bid wins, and that’s the price paid.
* Idiosyncratic format.
* Research game.
* Non-deterministic.
* Imperfect, incomplete information.
* 2-10 players.
* [Wikipedia](https://en.wikipedia.org/wiki/First-price_sealed-bid_auction)
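The strategic core of this auction is "bid shading": a classic result (not specific to OpenSpiel's implementation) says that with two risk-neutral bidders whose valuations are i.i.d. uniform on [0, 1], the symmetric equilibrium bid is half one's value. A minimal sketch that checks this numerically — grid-searching for the profit-maximizing bid against an opponent who bids half their value:

```python
# Best-response check for a first-price sealed-bid auction, two bidders,
# values i.i.d. Uniform[0, 1]. If the opponent bids b_opp(v) = v / 2, a bid b
# wins with probability P(v_opp / 2 < b) = min(2 * b, 1), so expected profit
# is (value - b) * min(2 * b, 1). The maximizer should be b = value / 2.

def expected_profit(value, bid):
    win_prob = min(2.0 * bid, 1.0)
    return (value - bid) * win_prob

def best_response(value, grid_size=10001):
    bids = [i / (grid_size - 1) for i in range(grid_size)]
    return max(bids, key=lambda b: expected_profit(value, b))

for v in (0.2, 0.5, 0.9):
    assert abs(best_response(v) - v / 2) < 1e-3  # equilibrium bid = v / 2
```

The same shading logic generalizes to n bidders (equilibrium bid v(n-1)/n), which is why payoffs in this game are imperfect- and incomplete-information.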
#### Gin Rummy[¶](#gin-rummy)
* Players score points by forming specific sets with the cards in their hands.
* Card game.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Gin_rummy)
#### Go[¶](#go)
* Players place tokens on the board with the goal of encircling territory.
* Tokens on a grid.
* Traditional game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Go_(game))
#### Goofspiel[¶](#goofspiel)
* Players bid with their cards to win other cards.
* Card game.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 2-10 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Goofspiel)
#### Hanabi[¶](#hanabi)
* Players can see only other player’s pieces, and everyone must cooperate to win.
* Idiosyncratic format.
* Modern game.
* Non-deterministic.
* Imperfect information.
* 2-5 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Hanabi_(card_game)) and
[Bard et al. ‘19, The Hanabi Challenge: A New Frontier for AI Research](https://arxiv.org/abs/1902.00506)
* Implemented via
[Hanabi Learning Environment](https://github.com/deepmind/hanabi-learning-environment)
#### Havannah[¶](#havannah)
* Players add tokens to a hex grid to try and form a winning structure.
* Tokens on a hex grid.
* Modern game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Havannah)
#### Hearts[¶](#hearts)
* A card game where players try to avoid playing the highest card in each round.
* Card game.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 3-6 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Hearts_(card_game))
#### Hex[¶](#hex)
* Players add tokens to a hex grid to try and link opposite sides of the board.
* Uses tokens on a hex grid.
* Modern game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Hex_(board_game))
* [Hex, the full story by Ryan Hayward and Bjarne Toft](https://webdocs.cs.ualberta.ca/~hayward/hexbook/hex.html)
#### Kriegspiel[¶](#kriegspiel)
* Chess with opponent’s pieces unknown. Illegal moves have no effect - it remains the same player’s turn until they make a legal move.
* Traditional chess variant, invented by Henry Michael Temple in 1899.
* Deterministic.
* Imperfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Kriegspiel_(chess))
* [Monte Carlo tree search in Kriegspiel](https://www.ics.uci.edu/~dechter/courses/ics-295/fall-2019/papers/2010-mtc-aij.pdf)
* [Game-Tree Search with Combinatorially Large Belief States, Parker 2005](https://www.cs.umd.edu/~nau/papers/parker2005game-tree.pdf)
#### Kuhn poker[¶](#kuhn-poker)
* Simplified poker amenable to game-theoretic analysis.
* Cards with bidding.
* Research game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Kuhn_poker)
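Kuhn poker is small enough to enumerate in full, which is why it recurs as a benchmark later in these docs. The sketch below is an illustrative reconstruction (not OpenSpiel's implementation): each of the 6 deals from the 3-card deck can end in one of 5 betting lines — check/check, check/bet/fold, check/bet/call, bet/fold, bet/call — giving 30 terminal histories, all zero-sum:

```python
from itertools import permutations

# Betting lines and how each resolves (ante 1, bet size 1):
#   "cc"  - both check: showdown for 1 chip
#   "cbf" - check, bet, fold: player 1's bet takes the ante
#   "cbc" - check, bet, call: showdown for 2 chips
#   "bf"  - bet, fold: player 0's bet takes the ante
#   "bc"  - bet, call: showdown for 2 chips

def payoff(cards, line):
    showdown = 1 if cards[0] > cards[1] else -1  # +1 if player 0 holds higher card
    if line == "cc":
        return (showdown, -showdown)
    if line == "cbf":
        return (-1, 1)        # player 0 folds to player 1's bet
    if line == "bf":
        return (1, -1)        # player 1 folds to player 0's bet
    return (2 * showdown, -2 * showdown)  # "cbc" or "bc": called bet

terminals = [(deal, line)
             for deal in permutations((0, 1, 2), 2)
             for line in ("cc", "cbf", "cbc", "bf", "bc")]

assert len(terminals) == 30   # 6 deals x 5 betting lines
for deal, line in terminals:
    u0, u1 = payoff(deal, line)
    assert u0 + u1 == 0       # zero-sum at every terminal
```

This tiny size is what makes exact equilibrium computation (and hence game-theoretic analysis) tractable.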
#### Laser Tag[¶](#laser-tag)
* Agents see a local part of the grid, and attempt to tag each other with beams.
* Agents on a grid.
* Research game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* [Leibo et al. ‘17](https://arxiv.org/abs/1702.03037),
[Lanctot et al. ‘17](https://arxiv.org/abs/1711.00832)
#### Leduc poker[¶](#leduc-poker)
* Simplified poker amenable to game-theoretic analysis.
* Cards with bidding.
* Research game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* [Southey et al. ‘05, Bayes’ bluff: Opponent modelling in poker](https://arxiv.org/abs/1207.1411)
#### Lewis Signaling[¶](#lewis-signaling)
* Receiver must choose an action dependent on the sender’s hidden state.
Designed to demonstrate the use of conventions.
* Idiosyncratic format.
* Research game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Lewis_signaling_game)
#### Liar’s Dice[¶](#liar-s-dice)
* Players bid and bluff on the state of all the dice together, given only the state of their dice.
* Dice with bidding.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Liar%27s_dice)
#### Liar’s Poker[¶](#liar-s-poker)
* Players bid and bluff on the state of all hands, given only the state of their hand.
* Cards with bidding.
* Traditional game.
* Non-deterministic.
* Imperfect information
* 2 or more players.
* [Wikipedia](https://en.wikipedia.org/wiki/Liar%27s_poker)
#### Mensch ärgere Dich nicht[¶](#mensch-aergere-dich-nicht)
* Players roll dice to move their pegs toward their home row while throwing other players’ pegs to the out area.
* Traditional game.
* Non-deterministic.
* Perfect information.
* 2-4 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Mensch_%C3%A4rgere_Dich_nicht)
#### Mancala[¶](#mancala)
* Players take turns sowing beans on the board and try to capture more beans than the opponent.
* Idiosyncratic format.
* Traditional game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Kalah)
#### Markov Soccer[¶](#markov-soccer)
* Agents must take the ball to their goal, and can ‘tackle’ the opponent by predicting their next move.
* Agents on a grid.
* Research game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* [Littman ‘94, Markov games as a framework for multi-agent reinforcement learning](https://www2.cs.duke.edu/courses/spring07/cps296.3/littman94markov.pdf),
[He et al. ‘16, Opponent Modeling in Deep Reinforcement Learning](https://arxiv.org/abs/1609.05559)
#### Matching Pennies (Three-player)[¶](#matching-pennies-three-player)
* Players must predict and match/oppose another player. Designed to have an unstable Nash equilibrium.
* Idiosyncratic format.
* Research game.
* Deterministic.
* Imperfect information.
* 3 players.
* “Three problems in learning mixed-strategy Nash equilibria”
#### Mean Field Game : routing[¶](#mean-field-game-routing)
* A representative player chooses at each node where to go. They have an origin, a destination and a departure time, and choose their route to minimize their travel time. Time spent on each link is a function of the distribution of players on the link when the player reaches it.
* Network with choice of route.
* Research game.
* Mean-field (with a unique player).
* Explicit stochastic game (only for initial node).
* Perfect information.
* [Cabannes et. al. ‘21, Solving N-player dynamic routing games with congestion: a mean field approach](https://arxiv.org/pdf/2110.11943.pdf).
#### Mean Field Game : Linear-Quadratic[¶](#mean-field-game-linear-quadratic)
* Players are uniformly distributed and are incentivized to gather at the same point (the smaller the distance to the distribution's mean position, the higher the reward). A mean-reverting term pushes the players towards the distribution mean, while a Gaussian noise term perturbs them. The players’ actions alter their states linearly (alpha * a * dt) and the cost thereof is quadratic (K * a^2 * dt), hence the name. There exists an exact, closed-form solution for the fully continuous version of this game.
* Research game.
* Mean-field (with a unique player).
* Explicit stochastic game (only for initial node).
* Perfect information.
* [Perrin et al. ‘20](https://arxiv.org/abs/2007.03458)
#### Morpion Solitaire (4D)[¶](#morpion-solitaire-4d)
* A single-player game where the player aims to maximize lines drawn on a grid,
under certain limitations.
* Uses tokens on a grid.
* Traditional game.
* Deterministic.
* Perfect information.
* 1 player.
* [Wikipedia](https://en.wikipedia.org/wiki/Join_Five)
#### Negotiation[¶](#negotiation)
* Agents with different utilities must negotiate an allocation of resources.
* Idiosyncratic format.
* Research game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* [Lewis et al. ‘17](https://arxiv.org/abs/1706.05125),
[Cao et al. ‘18](https://arxiv.org/abs/1804.03980)
#### Nim[¶](#nim)
* Two agents take objects from distinct piles, trying either to take the last object or to avoid taking it (depending on the variant). Any positive number of objects can be taken on each turn, provided they all come from the same pile.
* Traditional mathematical game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Nim)
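Nim's status as a traditional mathematical game comes from Sprague-Grundy theory: in the normal-play convention (last to take wins), the player to move loses under optimal play exactly when the XOR ("nim-sum") of the pile sizes is zero. A small sketch of this theory, independent of OpenSpiel's implementation:

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    """XOR of all pile sizes; zero means the player to move loses under
    optimal play (normal-play convention: last to take wins)."""
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return (pile_index, new_size) restoring nim-sum zero, or None if
    the current position is already losing for the mover."""
    s = nim_sum(piles)
    if s == 0:
        return None
    for i, p in enumerate(piles):
        target = p ^ s
        if target < p:  # a legal reduction of pile i exists
            return i, target
    return None

assert nim_sum([1, 2, 3]) == 0            # losing position for the mover
assert winning_move([3, 4, 5]) == (0, 1)  # 3^4^5 = 2; reduce pile 0 to 3^2 = 1
```

The misère variant (whoever takes the last object loses) uses the same analysis except for endgame positions where every pile has size one.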
#### Nine men’s morris[¶](#nine-men-s-morris)
* Two players put and move stones on the board to try to form mills (three adjacent stones in a line) to capture the other player’s stones.
* Traditional game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Nine_men%27s_morris)
#### Oh Hell[¶](#oh-hell)
* A card game where players try to win exactly a declared number of tricks.
* Card game.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 3-7 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Oh_Hell)
#### Oshi-Zumo[¶](#oshi-zumo)
* Players must repeatedly bid to push a token off the other side of the board.
* Idiosyncratic format.
* Traditional game.
* Deterministic.
* Imperfect information.
* 2 players.
* [Buro, 2004. Solving the oshi-zumo game](https://link.springer.com/chapter/10.1007/978-0-387-35706-5_23)
[Bosansky et al. ‘16, Algorithms for Computing Strategies in Two-Player Simultaneous Move Games](http://mlanctot.info/files/papers/aij-2psimmove.pdf)
#### Oware[¶](#oware)
* Players redistribute tokens from their half of the board to capture tokens in the opponent’s part of the board.
* Idiosyncratic format.
* Traditional game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Oware)
#### Pathfinding[¶](#pathfinding)
* Agents must move to their destination.
* Agents on a grid. The single-agent game is the classic example from Sutton &
Barto.
* Research game.
* Non-deterministic (in multiagent, collisions resolved by chance nodes).
* Perfect information.
* 1-10 players.
* Similar games appeared in
[Austerweil et al. ‘15](http://miaoliu.scripts.mit.edu/SSS-16/wp-content/uploads/2016/01/paper.pdf),
[Greenwald & Hall ‘03](https://www.aaai.org/Papers/ICML/2003/ICML03-034.pdf),
and [Littman ‘01](https://jmvidal.cse.sc.edu/library/littman01a.pdf).
#### Pentago[¶](#pentago)
* Players place tokens on the board, then rotate part of the board to a new orientation.
* Uses tokens on a grid.
* Modern game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Pentago)
#### Phantom Go[¶](#phantom-go)
* Go, except the opponent’s stones are hidden. The analogue of Kriegspiel for Go.
* Research game.
* Deterministic.
* Imperfect information.
* 2 players.
* [Cazenave ‘05, A Phantom Go Program](https://link.springer.com/chapter/10.1007/11922155_9)
#### Phantom Tic-Tac-Toe[¶](#phantom-tic-tac-toe)
* Tic-tac-toe, except the opponent’s tokens are hidden. Designed as a simple,
imperfect-information game.
* Uses tokens on a grid.
* Research game.
* Deterministic.
* Imperfect information.
* 2 players.
* [Auger ‘11, Multiple Tree for Partially Observable Monte-Carlo Tree Search](https://hal.archives-ouvertes.fr/hal-00563480v2/document),
[Lisy ‘14, Alternative Selection Functions for Information Set Monte Carlo Tree Search](https://core.ac.uk/download/pdf/81646968.pdf),
[Lanctot ‘13](http://mlanctot.info/files/papers/PhD_Thesis_MarcLanctot.pdf)
#### Pig[¶](#pig)
* Each player rolls a dice until they get a 1 or they ‘hold’; the rolled total is added to their score.
* Dice game.
* Traditional game.
* Non-deterministic.
* Perfect information.
* 2-10 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Pig_(dice_game))
#### Poker (Hold ‘em)[¶](#poker-hold-em)
* Players bet on whether their hand of cards plus some communal cards will form a special set.
* Cards with bidding.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 2-10 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Texas_hold_%27em)
* Implemented via [ACPC](http://www.computerpokercompetition.org/).
#### Quoridor[¶](#quoridor)
* Each turn, players can either move their agent or add a small wall to the board.
* Idiosyncratic format.
* Modern game.
* Deterministic.
* Perfect information.
* 2-4 players. (Note, from Wikipedia: “Though it can be played with 3 players,
it’s advised against. Since the 3rd player doesn’t have player on the opposite side, they have an advantage.”)
* [Wikipedia](https://en.wikipedia.org/wiki/Quoridor)
#### Reconnaissance Blind Chess[¶](#reconnaissance-blind-chess)
* Chess with opponent’s pieces unknown, with sensing moves.
* Chess variant, invented by the Johns Hopkins University Applied Physics Laboratory. Used in a NeurIPS competition and the Hidden Information Game Competition.
* Deterministic.
* Imperfect information.
* 2 players.
* [JHU APL Main site](https://rbc.jhuapl.edu/)
* [Markowitz et al. ‘18, On the Complexity of Reconnaissance Blind Chess](https://arxiv.org/abs/1811.03119)
* [Newman et al. ‘16, Reconnaissance blind multi-chess: an experimentation platform for ISR sensor fusion and resource management](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/9842/984209/Reconnaissance-blind-multi-chess--an-experimentation-platform-for-ISR/10.1117/12.2228127.short?SSO=1)
#### Routing game[¶](#routing-game)
* Players choose at each node where they go. They have an origin, a destination and a departure time and choose their route to minimize their travel time. Time spent on each link is a function of the number of players on the link when the player reaches the link.
* Network with choice of route.
* Research game.
* Simultaneous.
* Deterministic.
* Perfect information.
* Any number of players.
* [Cabannes et. al. ‘21, Solving N-player dynamic routing games with congestion: a mean field approach](https://arxiv.org/pdf/2110.11943.pdf).
#### Sheriff[¶](#sheriff)
* Bargaining game.
* Deterministic.
* Imperfect information.
* 2 players.
* Good for correlated equilibria.
* [Farina et al. ‘19, Correlation in Extensive-Form Games: Saddle-Point Formulation and Benchmarks](https://papers.nips.cc/paper/9122-correlation-in-extensive-form-games-saddle-point-formulation-and-benchmarks.pdf).
* Based on the board game “Sheriff of Nottingham”
[(bbg)](https://boardgamegeek.com/boardgame/157969/sheriff-nottingham)
#### Slovenian Tarok[¶](#slovenian-tarok)
* Trick-based card game with bidding.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 3-4 players.
* [Wikipedia](https://en.wikipedia.org/wiki/K%C3%B6nigrufen#Slovenia)
* [Luštrek et al. 2003, A program for playing Tarok](https://pdfs.semanticscholar.org/a920/70fe11f75f58c27ed907c4688747259cae15.pdf)
#### Skat (simplified bidding)[¶](#skat-simplified-bidding)
* Each turn, players bid to compete against the other two players.
* Cards with bidding.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 3 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Skat_(card_game))
#### Solitaire (K+)[¶](#solitaire-k)
* A single-player card game.
* Card game.
* Traditional game.
* Non-deterministic.
* Imperfect information.
* 1 player.
* [Wikipedia](https://en.wikipedia.org/wiki/Klondike_(solitaire)) and
[Bjarnason et al. ‘07, Searching solitaire in real time](http://web.engr.oregonstate.edu/~afern/papers/solitaire.pdf)
#### Tic-Tac-Toe[¶](#tic-tac-toe)
* Players place tokens to try and form a pattern.
* Uses tokens on a grid.
* Traditional game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Tic-tac-toe)
#### Tiny Bridge[¶](#tiny-bridge)
* Simplified Bridge with fewer cards and tricks.
* Cards with bidding.
* Research game.
* Non-deterministic.
* Imperfect information.
* 2 or 4 players.
* See implementation for details.
#### Tiny Hanabi[¶](#tiny-hanabi)
* Simplified Hanabi with just two turns.
* Idiosyncratic format.
* Research game.
* Non-deterministic.
* Imperfect information.
* 2-10 players.
* [Foerster et al 2018, Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning](https://arxiv.org/abs/1811.01458)
#### Trade Comm[¶](#trade-comm)
* Players with different utilities and items communicate and then trade.
* Idiosyncratic format.
* Research game.
* Non-deterministic.
* Imperfect information.
* 2 players.
* A simple emergent communication game based on trading.
#### Ultimate Tic-Tac-Toe[¶](#ultimate-tic-tac-toe)
* Players try and form a pattern in local boards and a meta-board.
* Uses tokens on a grid.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Ultimate_tic-tac-toe)
#### Y[¶](#y)
* Players place tokens to try and connect sides of a triangular board.
* Tokens on hex grid.
* Modern game.
* Deterministic.
* Perfect information.
* 2 players.
* [Wikipedia](https://en.wikipedia.org/wiki/Y_(game))
α-Rank[¶](#rank)
---
OpenSpiel now supports using Alpha-Rank
([“α-Rank: Multi-Agent Evaluation by Evolution”, 2019](https://www.nature.com/articles/s41598-019-45619-9))
for both single-population (symmetric) and multi-population games. Specifically,
games can be specified via payoff tables (or tensors for the >2 players case) as well as Heuristic Payoff Tables (HPTs).
The following presents several typical use cases for Alpha-Rank. For an example complete python script, refer to
[open_spiel/python/egt/examples/alpharank_example.py](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/egt/examples/alpharank_example.py).
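Before diving into the API, it may help to see the model Alpha-Rank computes. In the single-population case it builds a Markov chain over monomorphic strategy profiles: the transition from resident strategy s to mutant r is proportional to the mutant's fixation probability under Fermi selection, and the ranking score is the chain's stationary distribution. The stdlib-only sketch below is illustrative only — the population size, mutation scheme, and iteration counts are simplifying assumptions, not OpenSpiel's exact computation — applied to Rock-Paper-Scissors:

```python
import math

def alpharank_single_population(payoff, alpha=1.0, pop_size=10, iters=1000):
    """Toy single-population Alpha-Rank: build the strategy-transition
    Markov chain from pairwise fixation probabilities and return its
    stationary distribution via power iteration."""
    n = len(payoff)
    eta = 1.0 / (n - 1)  # uniform mutation over the other strategies
    c = [[0.0] * n for _ in range(n)]
    for s in range(n):
        for r in range(n):
            if r == s:
                continue
            diff = payoff[r][s] - payoff[s][r]  # mutant minus resident fitness
            if abs(diff) < 1e-12:
                rho = 1.0 / pop_size            # neutral mutant fixation prob
            else:
                rho = (1 - math.exp(-alpha * diff)) / (
                    1 - math.exp(-pop_size * alpha * diff))
            c[s][r] = eta * rho
        c[s][s] = 1.0 - sum(c[s])               # self-loop keeps rows stochastic
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[s] * c[s][r] for s in range(n)) for r in range(n)]
    return pi

rps = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # Rock-Paper-Scissors payoffs
pi = alpharank_single_population(rps)
assert all(abs(x - 1 / 3) < 1e-6 for x in pi)  # by symmetry, each scores 1/3
```

This uniform 1/3 ranking for RPS is exactly what the tabular example in the next section reports.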
### Importing the Alpha-Rank module[¶](#importing-the-alpha-rank-module)
```
from open_spiel.python.egt import alpharank
from open_spiel.python.egt import alpharank_visualizer
```
### Running Alpha-Rank on various games[¶](#running-alpha-rank-on-various-games)
#### Example: symmetric 2-player game rankings[¶](#example-symmetric-2-player-game-rankings)
In this example, we run Alpha-Rank on a symmetric 2-player game
(Rock-Paper-Scissors), computing and outputting the rankings in a tabular format. We also demonstrate the conversion of standard payoff tables to Heuristic Payoff Tables (HPTs), as both are supported by the ranking code.
```
# Imports assumed by this snippet
import pyspiel
from open_spiel.python.egt import heuristic_payoff_table
from open_spiel.python.egt import utils

# Load the game
game = pyspiel.load_matrix_game("matrix_rps")
payoff_tables = utils.game_payoffs_array(game)

# Convert to heuristic payoff tables
payoff_tables = [heuristic_payoff_table.from_matrix_game(payoff_tables[0]),
                 heuristic_payoff_table.from_matrix_game(payoff_tables[1].T)]

# Check if the game is symmetric (i.e., players have identical strategy sets
# and payoff tables) and return only a single player's payoff table if so.
# This ensures Alpha-Rank automatically computes rankings based on the
# single-population dynamics.
_, payoff_tables = utils.is_symmetric_matrix_game(payoff_tables)
payoffs_are_hpt_format = utils.check_payoffs_are_hpt(payoff_tables)

# Compute Alpha-Rank
(rhos, rho_m, pi, num_profiles, num_strats_per_population) = alpharank.compute(
    payoff_tables, alpha=1e2)

# Report results
alpharank.print_results(payoff_tables, payoffs_are_hpt_format, pi=pi)
```
**Output**
```
Agent  Rank  Score
-----  ----  -----
0      1     0.33
1      1     0.33
2      1     0.33
```
#### Example: multi-population game rankings[¶](#example-multi-population-game-rankings)
The next example demonstrates computing Alpha-Rank on an asymmetric 3-player meta-game, constructed by computing payoffs for Kuhn poker agents trained via extensive-form fictitious play (XFP). Here we use a helper function,
`compute_and_report_alpharank`, which internally conducts the pre-processing and visualization shown in the previous example.
```
# Load the game
payoff_tables = alpharank_example.get_kuhn_poker_data(num_players=3)

# Helper function for computing & reporting Alpha-Rank outputs
alpharank.compute_and_report_alpharank(payoff_tables, alpha=1e2)
```
**Output**
```
Agent Rank Score
--- --- ---
(2,3,3) 1 0.22
(3,3,3) 2 0.14
(3,2,3) 3 0.12
(2,2,3) 4 0.09
(3,1,3) 5 0.08
(2,1,3) 6 0.05
(1,2,3) 7 0.04
(2,3,1) 8 0.02
... ... ...
```
### Visualizing and reporting results[¶](#visualizing-and-reporting-results)
This section provides details on various methods used for reporting the final Alpha-Rank results.
#### Basic Ranking Outputs[¶](#basic-ranking-outputs)
The final rankings computed can be printed in a tabular manner using the following interface:
```
alpharank.print_results(payoff_tables, payoffs_are_hpt_format, pi=pi)
```
**Output**
```
Agent  Rank  Score
-----  ----  -----
0      1     0.33
1      1     0.33
2      1     0.33
```
#### Markov Chain Visualization[¶](#markov-chain-visualization)
One may visualize the Alpha-Rank Markov transition matrix as follows:
```
# Strategy labels for the plot (payoffs_are_hpt_format as computed above)
strat_labels = utils.get_strat_profile_labels(payoff_tables,
                                              payoffs_are_hpt_format)

m_network_plotter = alpharank_visualizer.NetworkPlot(payoff_tables, rhos,
                                                     rho_m, pi, strat_labels,
                                                     num_top_profiles=8)
m_network_plotter.compute_and_draw_network()
```
**Output**
#### Alpha-sweep plots[¶](#alpha-sweep-plots)
One may choose to conduct a sweep over the ranking-intensity parameter, alpha,
rather than using a fixed value. This is useful for games where bounds on payoffs are unknown and where the ranking computed by Alpha-Rank should use a sufficiently high value of alpha (to ensure correspondence to the underlying Markov-Conley chain solution concept). In such cases, the following interface both visualizes the sweep and returns the final rankings computed:
```
alpharank.sweep_pi_vs_alpha(payoff_tables, visualize=True)
```
**Output**
Julia OpenSpiel[¶](#julia-openspiel)
---
We also provide a Julia wrapper for the OpenSpiel project. Most APIs are aligned with those in Python (some are extended to accept `AbstractArray` and/or keyword arguments for convenience). See `spiel.h` for the full API description.
### Install[¶](#install)
For general usage, you can install this package in the Julia REPL with
`] add OpenSpiel`. Note that this method only supports the Linux platform and ACPC is not included. For developers, you need to follow the instructions below to install this package:
1. Install Julia and dependencies. Edit
`open_spiel/scripts/global_variables.sh` and set
`OPEN_SPIEL_BUILD_WITH_JULIA=ON` (you may also turn on other options as you wish). Then run `./install.sh`. If you already have Julia installed on your system, make sure that it is visible in your terminal and its version is v1.3 or later. Otherwise, Julia v1.3.1 will be automatically installed in your home dir and a soft link will be created at
`/usr/local/bin/julia`.
2. Build and run tests
```
./open_spiel/scripts/build_and_run_tests.sh
```
3. Install the package by running `] dev ./open_spiel/julia` in the Julia REPL.
### Known Problems[¶](#known-problems)
1. There’s a problem when building this package on Mac with XCode v11.4 or above (see discussions
[here](https://github.com/deepmind/open_spiel/pull/187#issuecomment-616540881)).
To fix it, you need to install the latest `libcxxwrap` by following the instructions
[here](https://github.com/JuliaInterop/libcxxwrap-julia#building-libcxxwrap-julia)
after running `./install.sh`. Then make sure that the result of `julia --project=./open_spiel/julia -e 'using CxxWrap; print(CxxWrap.prefix_path())'` points to the newly built `libcxxwrap`. After that, build and install this package as stated above.
### Example[¶](#example)
Here we demonstrate how to use the Julia API to play one game:
```
using OpenSpiel

# Here we need the StatsBase package for weighted sampling
using Pkg
Pkg.add("StatsBase")
using StatsBase

function run_once(name)
    game = load_game(name)
    state = new_initial_state(game)
    println("Initial state of game[$(name)] is:\n$(state)")
    while !is_terminal(state)
        if is_chance_node(state)
            outcomes_with_probs = chance_outcomes(state)
            println("Chance node, got $(length(outcomes_with_probs)) outcomes")
            actions, probs = zip(outcomes_with_probs...)
            action = actions[sample(weights(collect(probs)))]
            println("Sampled outcome: $(action_to_string(state, action))")
            apply_action(state, action)
        elseif is_simultaneous_node(state)
            chosen_actions = [rand(legal_actions(state, pid-1)) for pid in 1:num_players(game)]  # in Julia, indices start at 1
            println("Chosen actions: $([action_to_string(state, pid-1, action) for (pid, action) in enumerate(chosen_actions)])")
            apply_action(state, chosen_actions)
        else
            action = rand(legal_actions(state))
            println("Player $(current_player(state)) randomly sampled action: $(action_to_string(state, action))")
            apply_action(state, action)
        end
        println(state)
    end
    rts = returns(state)
    for pid in 1:num_players(game)
        println("Utility for player $(pid-1) is $(rts[pid])")
    end
end

run_once("tic_tac_toe")
run_once("kuhn_poker")
run_once("goofspiel(imp_info=True,num_cards=4,points_order=descending)")
```
### Q&A[¶](#q-a)
1. What is `StdVector`?
`StdVector` is introduced in
[CxxWrap.jl](https://github.com/JuliaInterop/CxxWrap.jl) recently. It is a wrapper of `std::vector` on the C++ side. Since it is a subtype of
`AbstractVector`, most functions should just work out of the box.
2. `0-based` or `1-based`?
As this package is a low-level wrapper of OpenSpiel C++, most APIs are zero-based: for instance, the `Player` id starts from zero. But note that some bridge types, like `StdVector`, implicitly convert between indexing conventions, so APIs that use `StdVector` are one-based.
3. I can’t find the `xxx` function/type in the Julia wrapper/The program exits unexpectedly.
Although most of the functions and types should be exported, there is still a chance that some APIs are not well tested. So if you encounter any error,
please do not hesitate to create an issue.
AlphaZero[¶](#alphazero)
---
OpenSpiel includes three implementations of AlphaZero: two based on TensorFlow
(one in Python and one in C++ using the TensorFlow C++ API), which share a model written in TensorFlow, and a third in C++ based on LibTorch. This document covers mostly the TF-based implementation and common components. For the LibTorch-based implementation,
[see here](https://github.com/deepmind/open_spiel/tree/master/open_spiel/algorithms/alpha_zero_torch).
**Disclaimer**: this is not the code that was used for the Go challenge matches or the AlphaZero paper results. It is a re-implementation for illustrative purposes, and although it can handle games like Connect Four, it is not designed to scale to superhuman performance in Go or Chess.
### Background[¶](#background)
AlphaZero is an algorithm for training an agent to play perfect information games from pure self-play. It uses Monte Carlo Tree Search (MCTS) with the prior and value given by a neural network to generate training data for that neural network.
Links to relevant articles/papers:
* [AlphaGo Zero: Starting from scratch](https://deepmind.com/blog/article/alphago-zero-starting-scratch)
has an open access link to the AlphaGo Zero nature paper that describes the model in detail.
* [AlphaZero: Shedding new light on chess, shogi, and Go](https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go)
has an open access link to the AlphaZero science paper that describes the training regime and generalizes to more games.
### Overview:[¶](#overview)
The Python and C++ implementations are conceptually fairly similar, and have roughly the same components: [actors](#actors) that generate data through self-play using [MCTS](#mcts) with an [evaluator](#mcts-evaluator) that uses a
[neural network](#model), a [learner](#learner) that updates the network based on those games, and [evaluators](#evaluators) playing vs standard MCTS to gauge progress. Both [write checkpoints](#output) that can be [played](#playing-vs-checkpoints)
independently of the training setup, and logs that can be [analyzed](#analysis)
programmatically.
The Python implementation uses one process per actor/evaluator, doesn’t support batching for inference, and does all inference and training on the CPU. The C++
implementation, by contrast, uses threads and a shared cache, supports batched inference, and can do both inference and training on GPUs. As such, the C++
implementation can take advantage of additional hardware and can train significantly faster.
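The core idea behind batched inference can be sketched in a few lines: requests from many actors are grouped so the model evaluates them in one call. This is a simplified illustration, not OpenSpiel's API; `evaluate_batch` stands in for a neural-network forward pass.

```python
def run_batched(requests, evaluate_batch, batch_size):
    """Group pending inference requests into fixed-size batches.

    evaluate_batch is assumed to map a list of observations to a list of
    results in a single model call; fewer calls with larger batches is what
    makes GPU inference efficient.
    """
    results = []
    for i in range(0, len(requests), batch_size):
        results.extend(evaluate_batch(requests[i:i + batch_size]))
    return results
```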
#### Model[¶](#model)
The model defined in
[open_spiel/python/algorithms/alpha_zero/model.py](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/algorithms/alpha_zero/model.py) is used by both the python and C++ implementations. The C++ version wraps the exported tensorflow graph in
[open_spiel/algorithms/alpha_zero/vpnet.h](https://github.com/deepmind/open_spiel/blob/master/open_spiel/algorithms/alpha_zero/vpnet.h), and supports both inference and training.
The model defines three architectures in decreasing complexity:
* resnet: same as the AlphaGo/AlphaZero paper when set with width 256 and depth 20.
* conv2d: same as the resnet except uses a conv+batchnorm+relu instead of the residual blocks.
* mlp: same as conv2d except uses dense layers instead of conv, and drops batch norm.
The model is parameterized by the size of the observations and the number of actions for the game you specify, so it can play any 2-player game. The conv2d and resnet models are restricted to games with a 2D representation (i.e. a 3D observation tensor).
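That restriction can be sketched as a small helper (the function and names are ours for illustration, not part of the library): conv-style models need a 3D observation tensor, otherwise only the mlp applies.

```python
def pick_model(observation_shape, preferred="resnet"):
    """Fall back to the mlp when the observation has no 2D spatial layout.

    observation_shape: tuple of dimensions of the game's observation tensor.
    A sketch mirroring the restriction described above; not OpenSpiel code.
    """
    if preferred in ("resnet", "conv2d") and len(observation_shape) != 3:
        return "mlp"
    return preferred
```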
The models are all parameterized with a width and depth:
* The depth is the number of blocks in the torso, where the definition of a block varies by model. For a resnet it’s a resblock which is two conv2ds,
batch norms and relus, and an addition. For conv2d it’s a conv2d, a batch norm and a relu. For mlp it’s a dense plus relu.
* The width is the number of filters for any conv2d and the number of hidden units for any dense layer.
The networks all give two outputs: a value and a policy, which are used by the MCTS evaluator.
#### MCTS[¶](#mcts)
Monte Carlo Tree Search (MCTS) is a general search algorithm used to play many games, but it first found success playing Go back in ~2005. It builds a tree directed by random rollouts, and usually uses UCT to direct the exploration/exploitation tradeoff. For our use case we replace random rollouts with a value network. Instead of a uniform prior we use a policy network.
Instead of UCT we use PUCT.
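As a minimal sketch of the PUCT selection rule (our own simplified version; the `c_puct` constant is illustrative, and the real implementations are in the MCTS files referenced in this section):

```python
import math

def puct_score(parent_visits, child_visits, child_value, prior, c_puct=1.25):
    """AlphaZero-style PUCT: value estimate plus a prior-weighted exploration bonus.

    child_value is the mean value of the child from the player-to-move's view;
    prior is the policy network's probability for the action leading to it.
    """
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return child_value + exploration

def select_action(children):
    """children: list of (action, visit_count, mean_value, prior) tuples."""
    parent_visits = sum(visits for _, visits, _, _ in children)
    return max(children,
               key=lambda c: puct_score(parent_visits, c[1], c[2], c[3]))[0]
```

Unvisited children with a high prior get a large bonus, so the policy network steers early exploration; as visit counts grow, the value term dominates.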
We have implementations of MCTS in
[C++](https://github.com/deepmind/open_spiel/blob/master/open_spiel/algorithms/mcts.h) and
[python](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/algorithms/mcts.py).
#### MCTS Evaluator[¶](#mcts-evaluator)
Both MCTS implementations above have a configurable evaluator that returns the value and prior policy of a given node. For standard MCTS the value is given by random rollouts, and the prior policy is uniform. For AlphaZero the value and prior are given by a neural network evaluation. The AlphaZero evaluator takes a model, so can be used during training or with a trained checkpoint for play with
[open_spiel/python/examples/mcts.py](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/examples/mcts.py).
#### Actors[¶](#actors)
The main script launches a set of actor processes (Python) or threads (C++). The actors create two MCTS instances with a shared evaluator and model, and play self-play games, passing the trajectories to the learner via a queue. The more actors, the faster training data is generated, assuming you have sufficient compute to actually run them. Too many actors for your hardware means individual games take longer to finish, so the data may be more out of date with respect to the latest checkpoint/weights.
#### Learner[¶](#learner)
The learner pulls trajectories from the actors and stores them in a fixed-size FIFO replay buffer. Once the replay buffer has enough new data, it does an update step, sampling from the replay buffer. It then saves a checkpoint and updates all the actors’ models. It also updates a `learner.jsonl` file with some stats.
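The fixed-size FIFO buffer can be sketched with a `deque` (a simplified stand-in for the implementation's buffer class, assuming uniform sampling):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size FIFO replay buffer: the oldest trajectories are evicted first.

    A minimal sketch of the idea, not OpenSpiel's actual class.
    """
    def __init__(self, max_size):
        self._data = deque(maxlen=max_size)

    def add(self, item):
        self._data.append(item)  # silently drops the oldest item when full

    def sample(self, n):
        return random.sample(list(self._data), n)

    def __len__(self):
        return len(self._data)
```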
#### Evaluators[¶](#evaluators)
The main script also launches a set of evaluator processes/threads. They continually play games against a standard MCTS+Solver to give an idea of how training is progressing. The MCTS opponents can be scaled in strength based on the number of simulations they are given per move, so more levels means stronger but slower opponents.
#### Output[¶](#output)
When running the algorithm a directory must be specified and all output goes there.
Due to the parallel nature of the algorithm writing logs to stdout/stderr isn’t very useful, so each actor/learner/evaluator writes its own log file to the configured directory.
Checkpoints are written after every update step, mostly overwriting the latest one at `checkpoint--1`, but every `checkpoint_freq` steps one is saved at
`checkpoint-<step>`.
The config file is written to `config.json`, to make the experiment more repeatable.
The learner also writes machine readable logs in the
[jsonlines](http://jsonlines.org/) format to `learner.jsonl`, which can be read with the analysis library.
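Since the format is one JSON object per line, reading it needs only the standard library (a minimal sketch; the analysis library offers a fuller interface, and the logged field names shown in the test are made up for illustration):

```python
import json

def read_jsonl(path):
    """Read a jsonlines file such as learner.jsonl: one JSON object per line."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```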
### Usage:[¶](#usage)
#### Python[¶](#python)
The code lives at [open_spiel/python/algorithms/alpha_zero/](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/algorithms/alpha_zero/).
The simplest example trains a tic_tac_toe agent for a set number of training steps:
```
python3 open_spiel/python/examples/tic_tac_toe_alpha_zero.py
```
Alternatively you can train on an arbitrary game with many more options:
```
python3 open_spiel/python/examples/alpha_zero.py --game connect_four --nn_model mlp --actors 10
```
#### C++[¶](#c)
The code lives at [open_spiel/algorithms/alpha_zero/](https://github.com/deepmind/open_spiel/blob/master/open_spiel/algorithms/alpha_zero/)
with an example executable at
[open_spiel/examples/alpha_zero_example.cc](https://github.com/deepmind/open_spiel/blob/master/open_spiel/examples/alpha_zero_example.cc).
Compiling it is now possible with the help of the
[tensorflow_cc](https://github.com/FloopCZ/tensorflow_cc) project. TensorflowCC allows the usage of the TensorFlow C++ API from outside the Tensorflow source directory.
For build instructions, please see
[open_spiel/algorithms/alpha_zero/README.md](https://github.com/deepmind/open_spiel/blob/master/open_spiel/algorithms/alpha_zero/README).
Although targets are built successfully, there are still some runtime issues.
[OpenSpiel Issue #172](https://github.com/deepmind/open_spiel/issues/172) has some information that may help figure out how to fix them. Contributions are welcome.
#### Analysis[¶](#analysis)
There’s an analysis library at
[open_spiel/python/algorithms/alpha_zero/analysis.py](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/algorithms/alpha_zero/analysis.py) which reads the `config.json` and `learner.jsonl` from an experiment (either python or C++), and graphs losses, value accuracy, evaluation results, actor speed, game lengths, etc. It should be reasonable to turn this into a colab.
#### Playing vs checkpoints[¶](#playing-vs-checkpoints)
The checkpoints are compatible between python and C++, and can be loaded by the model. You can try playing against one directly with
[open_spiel/python/examples/mcts.py](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/examples/mcts.py):
```
python3 open_spiel/python/examples/mcts.py --game=tic_tac_toe --player1=human --player2=az --az_path <path to your checkpoint directory>
```
The code structure[¶](#the-code-structure)
---
Generally speaking, the directories directly under `open_spiel` are C++ (except for `integration_tests` and `python`). A similar structure is available in
`open_spiel/python`, containing the Python equivalent code.
Some top level directories are special:
* `open_spiel/integration_tests`: Generic (python) tests for all the games.
* `open_spiel/tests`: The C++ common test utilities.
* `open_spiel/scripts`: The scripts useful for development (building, running tests, etc).
For example, we have for C++:
* `open_spiel/`: Contains the game abstract C++ API.
* `open_spiel/games`: Contains the games C++ implementations.
* `open_spiel/algorithms`: The C++ algorithms implemented in OpenSpiel.
* `open_spiel/examples`: The C++ examples.
* `open_spiel/tests`: The C++ common test utilities.
For Python you have:
* `open_spiel/python/examples`: The Python examples.
* `open_spiel/python/algorithms/`: The Python algorithms.
C++ and Python implementations.[¶](#c-and-python-implementations)
---
Some objects (e.g. `Policy`, `CFRSolver`, `BestResponse`) are available in both C++ and Python. The goal is to be able to use C++ objects in place of Python objects in most cases. In particular, for objects that are well supported, expect the Python object’s tests to include a check that the C++ and Python implementations behave the same.
Adding a game[¶](#adding-a-game)
---
We describe here only the simplest and fastest way to add a new game. It is ideal to first be aware of the general API (see `open_spiel/spiel.h`).
1. Choose a game to copy from in `open_spiel/games/` (or
`open_spiel/python/games/`). Suggested games: Tic-Tac-Toe and Breakthrough for perfect information without chance events, Backgammon or Pig for perfect information games with chance events,
Goofspiel and Oshi-Zumo for simultaneous move games, and Leduc poker and Liar’s dice for imperfect information games. For the rest of these steps, we assume Tic-Tac-Toe.
2. Copy the header and source: `tic_tac_toe.h`, `tic_tac_toe.cc`, and
`tic_tac_toe_test.cc` to `new_game.h`, `new_game.cc`, and `new_game_test.cc`
(or `tic_tac_toe.py` and `tic_tac_toe_test.py`).
3. Configure CMake:
* If you are working with C++: add the new game’s source files to
`open_spiel/games/CMakeLists.txt`.
* If you are working with C++: add the new game’s test target to
`open_spiel/games/CMakeLists.txt`.
* If you are working with Python: add the test to
`open_spiel/python/CMakeLists.txt` and import it in
`open_spiel/python/games/__init__.py`
4. Update boilerplate C++/Python code:
* In `new_game.h`, rename the header guard at the top and bottom of
the file.
* In the new files, rename the inner-most namespace from `tic_tac_toe` to
`new_game`.
* In the new files, rename `TicTacToeGame` and `TicTacToeState` to
`NewGameGame` and `NewGameState`.
* At the top of `new_game.cc`, change the short name to `new_game` and
include the new game’s header.
5. Update Python integration tests:
* Add the short name to the list of expected games in
`open_spiel/python/tests/pyspiel_test.py`.
6. You should now have a duplicate game of Tic-Tac-Toe under a different name.
It should build, the test should run, and you can verify this by rebuilding and running the example: `build/examples/example --game=new_game`.
7. Now, change the implementations of the functions in `NewGameGame` and
`NewGameState` to reflect your new game’s logic. Most API functions should be clear from the game you copied from. If not, each API function that is overridden will be fully documented in superclasses in `open_spiel/spiel.h`.
8. To test the game as it is being built, you can play-test the functionality interactively using `ConsolePlayTest` in
`open_spiel/tests/console_play_test.h`. At the very least, the test should include some random simulation tests (see other games’ tests for an example).
9. Run your code through a linter so it conforms to Google’s
[style guides](https://google.github.io/styleguide/). For C++ use
[cpplint](https://pypi.org/project/cpplint/). For Python, use
[pylint](https://pypi.org/project/pylint/) with the
[pylintrc from the Google style guide](https://google.github.io/styleguide/pyguide.html).
There is also [YAPF](https://github.com/google/yapf/) for Python.
10. Once done, rebuild and rerun the tests to ensure everything passes
(including your new game’s test!).
11. Add a playthrough file to catch regressions:
* Run `./open_spiel/scripts/generate_new_playthrough.sh new_game` to
generate a random game, to be used by integration tests to prevent any
regression. `open_spiel/integration_tests/playthrough_test.py` will
automatically load the playthroughs and compare them to newly generated
playthroughs.
* If you have made a change that affects playthroughs, run
`./scripts/regenerate_playthroughs.sh` to update them.
Conditional dependencies[¶](#conditional-dependencies)
---
The goal is to make it possible to optionally include external dependencies and build against them. The setup was designed to meet the following needs:
* **Single source of truth**: We want a single action to be sufficient to manage the conditional install and build. Thus, we use bash environment variables, that are read both by the install script (`install.sh`) to know whether we should clone the dependency, and by CMake to know whether we should include the files in the target. Tests can also access the bash environment variable.
* **Light and safe defaults**: By default, we exclude the dependencies, to diminish install time and compilation time. If the bash variable is unset,
we do not download the dependency and we do not build against it.
* **Respect the user-defined values**: The `global_variables.sh` script, which is included in all the scripts that need to access these constants, does not override the constants but sets them if and only if they are undefined.
This respects user-defined values, e.g. in their `.bashrc` or on the command line.
When you add a new conditional dependency, you need to touch:
* the root CMakeLists.txt to add the option, with an OFF default
* add the option to `scripts/global_variables.sh`
* change `install.sh` to make sure the dependency is installed
* use constructs like `if (${OPEN_SPIEL_BUILD_WITH_HANABI})` in CMake to optionally add the targets to build.
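The "set only if undefined" convention can be illustrated with Python's `os.environ.setdefault` (an analogy to the `${VAR:-default}` idiom a bash script would use; the variable name below is just an example):

```python
import os

def set_default(name, default):
    """Assign a default only when the variable is not already set.

    Mirrors the respect-user-defined-values behavior described above:
    an existing value (e.g. from .bashrc) is never overridden.
    """
    os.environ.setdefault(name, default)
    return os.environ[name]
```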
Debugging tools[¶](#debugging-tools)
---
For complex games it may be tricky to get all the details right. Reading through the playthrough (or visually inspecting random games via the example) is the first step in verifying the game mechanics. You can visualize small game trees using [open_spiel/python/examples/treeviz_example.py](https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/examples/treeviz_example.py) or for large games there is an interactive viewer for OpenSpiel games called
[SpielViz](https://github.com/michalsustr/spielviz).
Adding Game-Specific Functionality[¶](#adding-game-specific-functionality)
---
OpenSpiel focuses on maintaining a general API to an underlying suite of games,
but sometimes it is convenient to work on specific games. In this section, we describe how to get (or set) game-specific information from/to the generic state objects, and how to expose these functions to python.
Suppose, for example, we want to look at (or set) the private cards in a game of Leduc poker. We will use an example based on
[this commit](https://github.com/deepmind/open_spiel/commit/4cd1e5889e447d285eb3f16901ccab5c14e62187).
1. First, locate the game you want to access. The game implementations are in the `games/` subdirectory and have two main files: e.g. `leduc_poker.h`
(header) and `leduc_poker.cc` (implementation).
2. For simple accessor methods that just return information, feel free to put the full implementation in the game’s header file (e.g.
`LeducState::GetPrivateCards`). You can also declare the function in the header and provide the implementation in the source file (e.g.
`LeducPoker::SetPrivateCards`).
3. That’s it for the core game logic. To expose these methods to Python, add them to the Python module (via pybind11). Some games already have game-specific functionality, so if files named `games_leduc_poker.h` and
`games_leduc_poker.cc` exist within `python/pybind11`, add to them (skip to Step 5).
4. If the games-specific files do not exist for your game of interest, then:
* Add the files. Copy one of the other ones, adapt the names, and remove
most of the bindings code.
* Add the new files to the `PYTHON_BINDINGS` list in
`python/CMakeLists.txt`.
* Modify `pyspiel.cc`: include the header at the top, and call the init
function at the bottom.
5. Add the custom methods to the game-specific python bindings
(`games_leduc_poker.cc`, i.e. `LeducPoker::GetPrivateCards` and
`LeducPoker::SetPrivateCards`). For simple types, this should be relatively straightforward; you can see how by looking at the other game-specific functions. For complex types, you may have to bind additional code (see e.g.
`games_backgammon.cc`). If it is unclear, do not hesitate to ask, but also please check the
[pybind11 documentation](https://pybind11.readthedocs.io/en/stable/).
6. Add a simple test to `python/games_sim_test.py` to check that it worked. For inspiration, see e.g. `test_leduc_get_and_set_private_cards`.
Language APIs[¶](#language-apis)
---
There are currently four other language APIs that expose functionality from the C++ core.
* [Python](https://github.com/deepmind/open_spiel/tree/master/open_spiel/python).
* [Julia](https://github.com/deepmind/open_spiel/tree/master/open_spiel/julia)
* [Go](https://github.com/deepmind/open_spiel/tree/master/open_spiel/go)
(experimental)
* [Rust](https://github.com/deepmind/open_spiel/tree/master/open_spiel/rust)
(experimental)
Guidelines[¶](#guidelines)
---
Above all, OpenSpiel is designed to be easy to install and use, easy to understand, easy to extend (“hackable”), and general/broad. OpenSpiel is built around two important design criteria:
* **Keep it simple.** Simple choices are preferred to more complex ones. The code should be readable, usable, extendable by non-experts in the programming language(s), and especially to researchers from potentially different fields. OpenSpiel provides reference implementations that are used to learn from and prototype with, rather than fully-optimized /
high-performance code that would require additional assumptions (narrowing the scope / breadth) or advanced (or lower-level) language features.
* **Keep it light.** Dependencies can be problematic for long-term compatibility, maintenance, and ease of use. Unless there is strong justification, we tend to avoid introducing dependencies to keep things easy to install and more portable.
Support expectations[¶](#support-expectations)
---
We, the OpenSpiel authors, definitely engage in supporting the community. As it can be time-consuming, we try to find a good balance between ensuring we are responsive and being able to continue to do our day-to-day work and research.
Generally speaking, if you want to get a specific feature implemented,
the most effective way is to implement it and send a Pull Request. For large changes, or ones involving design decisions, open a bug first to check that the idea is ok.
The higher the quality, the easier it will be to be accepted. For instance,
following the
[C++ Google style guide](https://google.github.io/styleguide/cppguide.html) and
[Python Google style guide](http://google.github.io/styleguide/pyguide.html)
will help with the integration.
As examples, MacOS support, Windows support, example improvements, various bug fixes, and new games have been straightforward to include, and we are very thankful to everyone who has helped.
### Bugs[¶](#bugs)
We aim to answer bugs at a reasonable pace, several times a week. However, for bugs involving large changes (e.g. adding new games, adding public state support) we cannot commit to implementing them and encourage everyone to contribute directly.
### Pull requests[¶](#pull-requests)
You can expect us to answer/comment back and you will know from the comment if it will be merged as is or if it will need additional work.
Pull requests are merged in batches to be more efficient, at least every two weeks (bug fixes will likely be integrated faster), so you may need to wait a little after a PR has been approved to actually see it merged.
Roadmap and Call for Contributions[¶](#roadmap-and-call-for-contributions)
---
Contributions to this project must be accompanied by a Contributor License Agreement (CLA). See
[CONTRIBUTING.md](https://github.com/deepmind/open_spiel/blob/master/CONTRIBUTING)
for the details.
Here, we outline our intentions for the future, giving an overview of what we hope to add over the coming years. We also suggest a number of contributions that we would like to see, but have not had the time to add ourselves.
Before making a contribution to OpenSpiel, please read the guidelines. We also kindly request that you contact us before writing any large piece of code, in case (a) we are already working on it and/or (b) it’s something we have already considered and may have some design advice on its implementation. Please also note that some games may have copyrights which might require legal approval.
Otherwise, happy hacking!
The following list is both a Call for Contributions and an idealized road map.
We certainly are planning to add some of these ourselves (and, in some cases already have implementations that were just not tested well enough to make the release!). Contributions are certainly not limited to these suggestions!
* **Checkers / Draughts**. This is a classic game and an important one in the history of game AI
([“Checkers is solved”](https://science.sciencemag.org/content/317/5844/1518)).
* **Chinese Checkers / Halma**.
[Chinese Checkers](https://en.wikipedia.org/wiki/Chinese_checkers) is the canonical multiplayer (more than two player) perfect information game.
Currently, OpenSpiel does not contain any games in this category.
* **Deep TreeStrap**. An implementation of TreeStrap (see
[Bootstrapping from Game Tree Search](https://www.cse.unsw.edu.au/~blair/pubs/2009VenessSilverUtherBlairNIPS.pdf)),
except with a DQN-like replay buffer, storing value targets obtained from minimax searches. We have an initial implementation, but it is not yet ready for release. We also hope to support PyTorch for this algorithm as well.
* **Deep Regret Minimization with Advantage Baselines and Model-free Learning
(DREAM)**. This is a model-free technique based on Monte Carlo CFR with function approximation, that has been applied to Poker.
([Ref](https://arxiv.org/abs/2006.10410))
* **Double Neural Counterfactual Regret Minimization**. This is a technique similar to Regression CFR that uses a robust sampling technique and a new network architecture that predicts both the cumulative regret *and* the average strategy. ([Ref](https://arxiv.org/abs/1812.10607))
* **Differentiable Games and Algorithms**. For example, Symplectic Gradient Adjustment ([Ref](https://arxiv.org/abs/1802.05642)).
* **Emergent Communication Algorithms**. For example,
[RIAL and/or DIAL](https://arxiv.org/abs/1605.06676) and
[CommNet](https://arxiv.org/abs/1605.07736).
* **Emergent Communication Games**. Referential games such as the ones in
[Ref1](https://arxiv.org/abs/1612.07182),
[Ref2](https://arxiv.org/abs/1710.06922),
[Ref3](https://arxiv.org/abs/1705.11192).
* **Extensive-form Evolutionary Dynamics**. There have been a number of different evolutionary dynamics suggested for the sequential games, such as state-coupled replicator dynamics
([Ref](https://dl.acm.org/citation.cfm?id=1558120)), sequence-form replicator dynamics ([Ref1](https://arxiv.org/abs/1304.1456),
[Ref2](http://mlanctot.info/files/papers/aamas14sfrd-cfr-kuhn.pdf)),
sequence-form Q-learning
([Ref](https://dl.acm.org/citation.cfm?id=2892753.2892835)), and the logit dynamics ([Ref](https://dl.acm.org/citation.cfm?id=3015889)).
* **General Games Wrapper**. There are several general game engine languages and databases of general games that currently exist, for example within the
[general game-playing project](http://www.ggp.org/) and the
[Ludii General Game System](http://www.ludii.games/index.html). A very nice addition to OpenSpiel would be a game that interprets games represented in these languages and presents them as OpenSpiel games. This could lead to the potential of evaluating learning agents on hundreds to thousands of games.
* **Go API**. We currently have an experimental [Go](https://golang.org/) API similar to the Python API. It is exposed using cgo via a C API much like the CFFI Python bindings from the
[Hanabi Learning Environment](https://github.com/deepmind/hanabi-learning-environment).
It is very basic, only exposing the games. It would be nice to have a few example algorithms and/or utilities written in go.
* **Opponent Modeling / Shaping Algorithms**. For example,
[DRON](https://arxiv.org/abs/1609.05559),
[LOLA](https://arxiv.org/abs/1709.04326), and
[Stable Opponent Shaping](https://arxiv.org/abs/1811.08469).
* **Rust API**. We currently have an experimental
[Rust](https://www.rust-lang.org/) API. It is exposed via a C API much like the Go API. It is very basic, only exposing the games. It would be nice to have a few example algorithms and/or utilities written in Rust.
* **Sequential Social Dilemmas**. Sequential social dilemmas, such as the ones found in [Ref1](https://arxiv.org/abs/1702.03037),
[Ref2](https://arxiv.org/abs/1707.06600) . Wolfpack could be a nice one,
since pursuit-evasion games have been common in the literature
([Ref](http://web.media.mit.edu/~cynthiab/Readings/tan-MAS-reinfLearn.pdf)).
Also the coin games from [Ref1](https://arxiv.org/abs/1707.01068) and
[Ref2](https://arxiv.org/abs/1709.04326), and Calamity, Cleanup and/or Harvest from [Ref3](https://arxiv.org/abs/1812.07019)
[Ref4](https://arxiv.org/abs/1810.08647).
* **Structured Action Spaces**. Currently, actions are integers between 0 and some value. There is no easy way to interpret what each action means in a game-specific way, nor any way to easily represent a composite action in terms of its parts. A structured action space could represent actions as a sequence of values (like information states and observations; it can also include shapes) which can be learned instead of mapped to flat numbers. Then, each game could have a mapping from the structured action to the action taken.
* **TF_Trajectories**. The source code currently includes a batch inference for running a batch of episodes using Tensorflow directly from C++ (in
`contrib/`). It has not yet been tested with CMake and public Tensorflow. We would like to officially support this and move it into the core library.
* **Visualizations of games**. There exists an interactive viewer for OpenSpiel games called [SpielViz](https://github.com/michalsustr/spielviz).
Contributions to this project, and more visualization tools with OpenSpiel,
are welcome.
* **Windows support**. Native Windows support was added in early 2022, but remains experimental and only via building from source. It would be nice to have Github Actions CI support on Windows to ensure that Windows support is actively maintained, and eventually support installing OpenSpiel via pip on Windows as well.
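To make the Structured Action Spaces idea above concrete, here is one way such a game-specific mapping could work, sketched as a mixed-radix encoding between a composite action and a flat integer (the component sizes are made up for illustration):

```python
def encode_action(parts, sizes):
    """Mixed-radix encoding: pack a composite action into one flat integer.

    For example, parts (piece, from_square, to_square) with sizes (3, 8, 8)
    yield flat ids in [0, 192). Both arguments are illustrative.
    """
    flat = 0
    for part, size in zip(parts, sizes):
        assert 0 <= part < size
        flat = flat * size + part
    return flat

def decode_action(flat, sizes):
    """Inverse of encode_action: recover the composite action's parts."""
    parts = []
    for size in reversed(sizes):
        parts.append(flat % size)
        flat //= size
    return tuple(reversed(parts))
```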
Using OpenSpiel as a C++ Library[¶](#using-openspiel-as-a-c-library)
---
OpenSpiel has been designed as a framework: a suite of games, algorithms, and tools for research in reinforcement learning and search in games. However, there are situations where one may only want or need a single game/algorithm or small subset from this collection, or a research experiment does not require modifying or otherwise interacting very closely with OpenSpiel other than strictly calling/using it.
In cases like this, it might be nice to use OpenSpiel as a library rather than a framework. This has the benefit of not forcing the use of certain tools like CMake or having to continually recompile OpenSpiel when doing your research.
Luckily, this is easy to achieve with OpenSpiel: you simply need to build it as a shared library once, and then load it dynamically at runtime. This page walks through how to do this assuming a bash shell on Linux, but is very similar on MacOS or for other shells.
### Install Dependencies[¶](#install-dependencies)
The dependencies of OpenSpiel need to be installed before it can be used as a library. On MacOS and Debian/Ubuntu Linux, this is often simply just running
`./install.sh`. Please see the [installation from source instructions](https://github.com/deepmind/open_spiel/blob/master/docs/install.md#installation-from-source) for more details.
### Compiling OpenSpiel as a Shared Library[¶](#compiling-openspiel-as-a-shared-library)
To build OpenSpiel as a shared library, simply run:
```
mkdir build
cd build
BUILD_SHARED_LIB=ON CXX=clang++ cmake -DPython3_EXECUTABLE=$(which python3) -DCMAKE_CXX_COMPILER=${CXX} ../open_spiel
make -j$(nproc) open_spiel
```
This produces a dynamically-linked library `libopen_spiel.so` (or
`libopen_spiel.dylib` on MacOS) in `build/` that can be linked against and loaded dynamically at run-time.
Suppose OpenSpiel was installed in `$HOME/open_spiel`. The following line adds the necessary environment variable to let the shell know where to find
`libopen_spiel.so` at run-time:
```
export LD_LIBRARY_PATH="${HOME}/open_spiel/build"
```
You might want to add this line to your `$HOME/.bash_profile` to avoid having to do it every time you load the library. Of course, if you are already using
`LD_LIBRARY_PATH` for something else, then you need to append
`${HOME}/open_spiel/build` to it (paths are colon-separated).
### Compiling and Running the Example[¶](#compiling-and-running-the-example)
```
cd ../open_spiel/examples
clang++ -I${HOME}/open_spiel -I${HOME}/open_spiel/open_spiel/abseil-cpp \
-std=c++17 -o shared_library_example shared_library_example.cc \
-L${HOME}/open_spiel/build -lopen_spiel
```
The first two flags are the include directory paths and the third is the link directory path. The `-lopen_spiel` instructs the linker to link against the OpenSpiel shared library.
That’s it! Now you can run the example using:
```
./shared_library_example breakthrough
```
You should also be able to register new games externally, without the implementation living within OpenSpiel or being built into the shared library. That said, we are always interested in growing the library and recommend you contact us about contributing any new games to the suite.
Authors[¶](#authors)
---
Names are ordered lexicographically. Contributors of typo fixes and similarly minor changes are omitted.
### OpenSpiel contributors[¶](#openspiel-contributors)
* <NAME>
* <NAME>
* <NAME> [<EMAIL>](mailto:locked%40google.com)
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME> [<EMAIL>](mailto:jblespiau%40google.com)
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME> [<EMAIL>](mailto:lanctot%40google.com)
* <NAME>
* <NAME> [<EMAIL>](mailto:michal.sustr%40aic.fel.cvut.cz)
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME> [<EMAIL>](mailto:tewalds%40google.com)
* <NAME> [<EMAIL>](mailto:vzambaldi%40google.com)
### OpenSpiel with Swift for Tensorflow (now removed)[¶](#openspiel-with-swift-for-tensorflow-now-removed)
* <NAME> [<EMAIL>](mailto:jekbradbury%40google.com)
* <NAME> [<EMAIL>](mailto:saeta%40google.com)
* <NAME> [<EMAIL>](mailto:danielzheng%40google.com)
### External contributors[¶](#external-contributors)
See https://github.com/deepmind/open_spiel/graphs/contributors.
@aws-sdk/client-datasync | npm | JavaScript | [@aws-sdk/client-datasync](#aws-sdkclient-datasync)
===
[Description](#description)
---
AWS SDK for JavaScript DataSync Client for Node.js, Browser and React Native.
DataSync
DataSync is an online data movement and discovery service that simplifies data migration and helps you quickly, easily, and securely transfer your file or object data to, from, and between Amazon Web Services storage services.
This API interface reference includes documentation for using DataSync programmatically. For complete information, see the *[DataSync User Guide](https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html)*.
[Installing](#installing)
---
To install this package, add or install `@aws-sdk/client-datasync` using your favorite package manager:
* `npm install @aws-sdk/client-datasync`
* `yarn add @aws-sdk/client-datasync`
* `pnpm add @aws-sdk/client-datasync`
[Getting Started](#getting-started)
---
### [Import](#import)
The AWS SDK is modularized by clients and commands.
To send a request, you only need to import the `DataSyncClient` and the commands you need, for example `ListAgentsCommand`:
```
// ES5 example
const { DataSyncClient, ListAgentsCommand } = require("@aws-sdk/client-datasync");
```
```
// ES6+ example
import { DataSyncClient, ListAgentsCommand } from "@aws-sdk/client-datasync";
```
### [Usage](#usage)
To send a request, you:
* Initialize the client with configuration (e.g. credentials, region).
* Initialize the command with input parameters.
* Call the `send` operation on the client with the command object as input.
* If you are using a custom http handler, you may call `destroy()` to close open connections.
```
// a client can be shared by different commands.
const client = new DataSyncClient({ region: "REGION" });
const params = {
/** input parameters */
};
const command = new ListAgentsCommand(params);
```
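The client/command separation above can be sketched in plain JavaScript. This is a toy illustration of the pattern only; `ToyClient` and `ToyCommand` are hypothetical names, not SDK APIs:

```javascript
// Toy sketch of the client/command pattern used by AWS SDK v3 clients.
// ToyClient and ToyCommand are illustrative names, not part of the SDK.
class ToyCommand {
  // A command only carries its input parameters (like ListAgentsCommand).
  constructor(input) {
    this.input = input;
  }
}

class ToyClient {
  // A client holds shared configuration and can dispatch any command.
  constructor(config) {
    this.config = config; // e.g. { region: "REGION" }
  }
  // send() resolves asynchronously, mirroring client.send(command).
  async send(command) {
    return { $metadata: { region: this.config.region }, input: command.input };
  }
  // destroy() would close open connections; a no-op in this sketch.
  destroy() {}
}
```

This separation is why one client instance can be shared by many commands: configuration lives on the client, while each command object is just its input.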
#### [Async/await](#asyncawait)
We recommend using the [await](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await)
operator to wait for the promise returned by the send operation, as follows:
```
// async/await.
try {
const data = await client.send(command);
// process data.
} catch (error) {
// error handling.
} finally {
// finally.
}
```
Async/await is clean, concise, intuitive, easy to debug, and has better error handling compared to Promise chains or callbacks.
#### [Promises](#promises)
You can also use [Promise chaining](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises#chaining)
to execute the send operation.
```
client.send(command).then(
(data) => {
// process data.
},
(error) => {
// error handling.
}
);
```
The returned promise can also be handled with `.catch()` and `.finally()` as follows:
```
client
.send(command)
.then((data) => {
// process data.
})
.catch((error) => {
// error handling.
})
.finally(() => {
// finally.
});
```
#### [Callbacks](#callbacks)
We do not recommend using callbacks because of [callback hell](http://callbackhell.com/),
but they are supported by the send operation.
```
// callbacks.
client.send(command, (err, data) => {
// process err and data.
});
```
#### [v2 compatible style](#v2-compatible-style)
The client can also send requests using the v2-compatible style.
However, it results in a bigger bundle size and may be dropped in the next major version. More details are in the blog post on [modular packages in AWS SDK for JavaScript](https://aws.amazon.com/blogs/developer/modular-packages-in-aws-sdk-for-javascript/)
```
import * as AWS from "@aws-sdk/client-datasync";
const client = new AWS.DataSync({ region: "REGION" });
// async/await.
try {
const data = await client.listAgents(params);
// process data.
} catch (error) {
// error handling.
}
// Promises.
client
.listAgents(params)
.then((data) => {
// process data.
})
.catch((error) => {
// error handling.
});
// callbacks.
client.listAgents(params, (err, data) => {
// process err and data.
});
```
### [Troubleshooting](#troubleshooting)
When the service returns an exception, the error will include the exception information,
as well as response metadata (e.g. request id).
```
try {
const data = await client.send(command);
// process data.
} catch (error) {
const { requestId, cfId, extendedRequestId } = error.$metadata;
console.log({ requestId, cfId, extendedRequestId });
/**
* The keys within exceptions are also parsed.
* You can access them by specifying exception names:
* if (error.name === 'SomeServiceException') {
* const value = error.specialKeyInException;
* }
*/
}
```
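To see what this looks like without calling a live service, the same metadata inspection can be exercised against a synthetic error object. The error below is fabricated for illustration; only the `$metadata` shape mirrors what v3 clients attach:

```javascript
// Inspect an SDK-style service exception without making a live request.
// The error object here is fabricated; real errors come from client.send() rejections.
function describeSdkError(error) {
  const { requestId, cfId, extendedRequestId } = error.$metadata || {};
  return { name: error.name, requestId, cfId, extendedRequestId };
}

// Fabricated stand-in for an error a rejected client.send() might produce:
const fakeError = Object.assign(new Error("Invalid request"), {
  name: "InvalidRequestException",
  $metadata: { requestId: "EXAMPLE_REQUEST_ID" },
});

console.log(describeSdkError(fakeError));
```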
[Getting Help](#getting-help)
---
Please use these community resources for getting help.
We use GitHub issues for tracking bugs and feature requests, but have limited bandwidth to address them.
* Visit [Developer Guide](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/welcome.html)
or [API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html).
* Check out the blog posts tagged with [`aws-sdk-js`](https://aws.amazon.com/blogs/developer/tag/aws-sdk-js/)
on AWS Developer Blog.
* Ask a question on [StackOverflow](https://stackoverflow.com/questions/tagged/aws-sdk-js) and tag it with `aws-sdk-js`.
* Join the AWS JavaScript community on [gitter](https://gitter.im/aws/aws-sdk-js-v3).
* If it turns out that you may have found a bug, please [open an issue](https://github.com/aws/aws-sdk-js-v3/issues/new/choose).
To test your universal JavaScript code in Node.js, browser and react-native environments,
visit our [code samples repo](https://github.com/aws-samples/aws-sdk-js-tests).
[Contributing](#contributing)
---
This client code is generated automatically. Any modifications will be overwritten the next time the `@aws-sdk/client-datasync` package is updated.
To contribute to the client, check our [generate clients scripts](https://github.com/aws/aws-sdk-js-v3/tree/main/scripts/generate-clients).
[License](#license)
---
This SDK is distributed under the
[Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0),
see LICENSE for more information.
[Client Commands (Operations List)](#client-commands-operations-list)
---
AddStorageSystem
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/addstoragesystemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/addstoragesystemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/addstoragesystemcommandoutput.html)
CancelTaskExecution
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/canceltaskexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/canceltaskexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/canceltaskexecutioncommandoutput.html)
CreateAgent
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createagentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createagentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createagentcommandoutput.html)
CreateLocationAzureBlob
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocationazureblobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationazureblobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationazureblobcommandoutput.html)
CreateLocationEfs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocationefscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationefscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationefscommandoutput.html)
CreateLocationFsxLustre
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocationfsxlustrecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationfsxlustrecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationfsxlustrecommandoutput.html)
CreateLocationFsxOntap
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocationfsxontapcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationfsxontapcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationfsxontapcommandoutput.html)
CreateLocationFsxOpenZfs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocationfsxopenzfscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationfsxopenzfscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationfsxopenzfscommandoutput.html)
CreateLocationFsxWindows
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocationfsxwindowscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationfsxwindowscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationfsxwindowscommandoutput.html)
CreateLocationHdfs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocationhdfscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationhdfscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationhdfscommandoutput.html)
CreateLocationNfs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocationnfscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationnfscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationnfscommandoutput.html)
CreateLocationObjectStorage
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocationobjectstoragecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationobjectstoragecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationobjectstoragecommandoutput.html)
CreateLocationS3
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocations3command.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocations3commandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocations3commandoutput.html)
CreateLocationSmb
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createlocationsmbcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationsmbcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createlocationsmbcommandoutput.html)
CreateTask
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/createtaskcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createtaskcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/createtaskcommandoutput.html)
DeleteAgent
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/deleteagentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/deleteagentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/deleteagentcommandoutput.html)
DeleteLocation
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/deletelocationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/deletelocationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/deletelocationcommandoutput.html)
DeleteTask
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/deletetaskcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/deletetaskcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/deletetaskcommandoutput.html)
DescribeAgent
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describeagentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describeagentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describeagentcommandoutput.html)
DescribeDiscoveryJob
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describediscoveryjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describediscoveryjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describediscoveryjobcommandoutput.html)
DescribeLocationAzureBlob
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocationazureblobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationazureblobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationazureblobcommandoutput.html)
DescribeLocationEfs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocationefscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationefscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationefscommandoutput.html)
DescribeLocationFsxLustre
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocationfsxlustrecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationfsxlustrecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationfsxlustrecommandoutput.html)
DescribeLocationFsxOntap
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocationfsxontapcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationfsxontapcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationfsxontapcommandoutput.html)
DescribeLocationFsxOpenZfs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocationfsxopenzfscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationfsxopenzfscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationfsxopenzfscommandoutput.html)
DescribeLocationFsxWindows
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocationfsxwindowscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationfsxwindowscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationfsxwindowscommandoutput.html)
DescribeLocationHdfs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocationhdfscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationhdfscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationhdfscommandoutput.html)
DescribeLocationNfs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocationnfscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationnfscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationnfscommandoutput.html)
DescribeLocationObjectStorage
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocationobjectstoragecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationobjectstoragecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationobjectstoragecommandoutput.html)
DescribeLocationS3
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocations3command.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocations3commandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocations3commandoutput.html)
DescribeLocationSmb
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describelocationsmbcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationsmbcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describelocationsmbcommandoutput.html)
DescribeStorageSystem
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describestoragesystemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describestoragesystemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describestoragesystemcommandoutput.html)
DescribeStorageSystemResourceMetrics
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describestoragesystemresourcemetricscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describestoragesystemresourcemetricscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describestoragesystemresourcemetricscommandoutput.html)
DescribeStorageSystemResources
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describestoragesystemresourcescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describestoragesystemresourcescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describestoragesystemresourcescommandoutput.html)
DescribeTask
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describetaskcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describetaskcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describetaskcommandoutput.html)
DescribeTaskExecution
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/describetaskexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describetaskexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/describetaskexecutioncommandoutput.html)
GenerateRecommendations
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/generaterecommendationscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/generaterecommendationscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/generaterecommendationscommandoutput.html)
ListAgents
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/listagentscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listagentscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listagentscommandoutput.html)
ListDiscoveryJobs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/listdiscoveryjobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listdiscoveryjobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listdiscoveryjobscommandoutput.html)
ListLocations
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/listlocationscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listlocationscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listlocationscommandoutput.html)
ListStorageSystems
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/liststoragesystemscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/liststoragesystemscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/liststoragesystemscommandoutput.html)
ListTagsForResource
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/listtagsforresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listtagsforresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listtagsforresourcecommandoutput.html)
ListTaskExecutions
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/listtaskexecutionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listtaskexecutionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listtaskexecutionscommandoutput.html)
ListTasks
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/listtaskscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listtaskscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/listtaskscommandoutput.html)
RemoveStorageSystem
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/removestoragesystemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/removestoragesystemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/removestoragesystemcommandoutput.html)
StartDiscoveryJob
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/startdiscoveryjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/startdiscoveryjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/startdiscoveryjobcommandoutput.html)
StartTaskExecution
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/starttaskexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/starttaskexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/starttaskexecutioncommandoutput.html)
StopDiscoveryJob
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/stopdiscoveryjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/stopdiscoveryjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/stopdiscoveryjobcommandoutput.html)
TagResource
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/tagresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/tagresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/tagresourcecommandoutput.html)
UntagResource
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/untagresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/untagresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/untagresourcecommandoutput.html)
UpdateAgent
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/updateagentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updateagentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updateagentcommandoutput.html)
UpdateDiscoveryJob
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/updatediscoveryjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatediscoveryjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatediscoveryjobcommandoutput.html)
UpdateLocationAzureBlob
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/updatelocationazureblobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatelocationazureblobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatelocationazureblobcommandoutput.html)
UpdateLocationHdfs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/updatelocationhdfscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatelocationhdfscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatelocationhdfscommandoutput.html)
UpdateLocationNfs
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/updatelocationnfscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatelocationnfscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatelocationnfscommandoutput.html)
UpdateLocationObjectStorage
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/updatelocationobjectstoragecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatelocationobjectstoragecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatelocationobjectstoragecommandoutput.html)
UpdateLocationSmb
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/updatelocationsmbcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatelocationsmbcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatelocationsmbcommandoutput.html)
UpdateStorageSystem
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/updatestoragesystemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatestoragesystemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatestoragesystemcommandoutput.html)
UpdateTask
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/updatetaskcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatetaskcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatetaskcommandoutput.html)
UpdateTaskExecution
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/classes/updatetaskexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatetaskexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-datasync/interfaces/updatetaskexecutioncommandoutput.html)
Readme
---
### Keywords
none
Package ‘lazysql’
October 13, 2022
Type Package
Title Lazy SQL Programming
Version 0.1.3
Date 2016-03-11
Description Helper functions to build SQL statements
for dbGetQuery or dbSendQuery under program control.
They are intended to increase speed of coding and
to reduce coding errors. Arguments are carefully checked,
in particular SQL identifiers such as names of tables or columns.
More patterns will be added as required.
URL https://github.com/UweBlock/lazysql
BugReports https://github.com/UweBlock/lazysql/issues
License MIT + file LICENSE
LazyData TRUE
Imports checkmate (>= 1.7.2), magrittr, plyr
Suggests testthat
RoxygenNote 5.0.1
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2016-03-12 06:16:37
R topics documented:
date_between
in_condition
lazysql
natural_key
valid_identifier_regex
date_between Create SQL string to select date between two given dates
Description
Create string with SQL BETWEEN expression for WHERE clause to select dates within the given range.
Usage
date_between(column_name, date_range)
Arguments
column_name [character(1)]
Name of data base column to select dates from.
date_range [Date(1:2)]
One or two dates giving the date range in which the dates should be enclosed
(closed interval). If only one date is given, it is taken for both upper and lower
limits.
Details
column_name must be a valid SQL identifier. It is validated to conform to the regular expression
returned by valid_identifier_regex.
Value
Character string to be used in SQL statement.
Author(s)
<NAME>
See Also
valid_identifier_regex.
Examples
date1 <- as.Date("2016-02-22")
date2 <- as.Date("2016-02-11")
# SQL expression for a date range
(sql_expr1 <- lazysql::date_between("STD_1", c(date1, date2)))
# SQL expression for a single date
(sql_expr2 <- lazysql::date_between("STD_1", date1))
# sample SQL statements
paste("select * from TEST_TABLE where", sql_expr1)
paste("select * from TEST_TABLE where", sql_expr2)
in_condition Create SQL string to select values included in a set of given values
Description
Create string with SQL IN expression for WHERE clause to select values included in a set of given
values.
Usage
in_condition(column_name, choices, negation = c("", "not"))
Arguments
column_name [character(1)]
Name of data base column to select values from.
choices [character(1:Inf)] or [integer(1:Inf)]
The values which must be matched. Character values must not contain any single
or double quotes to avoid problems with SQL syntax and for safety reasons.
negation [character(1)]
If "not" the selection is inverted to a NOT IN expression.
Details
column_name must be a valid SQL identifier. It is validated to conform to the regular expression
returned by valid_identifier_regex.
Value
Character string to be used in SQL statement.
Author(s)
<NAME>
See Also
valid_identifier_regex.
Examples
# SQL expressions
lazysql::in_condition("COL_1", 1:3)
lazysql::in_condition("COL_1", 1:3, "not")
lazysql::in_condition("COL_1", LETTERS[2:3])
lazysql::in_condition("COL_1", LETTERS[2:3], "not")
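The generated string follows the usual SQL IN pattern. As an illustration outside R, here is a hedged Python sketch of the same construction; the exact quoting and spacing are assumptions, not the package's output (lazysql itself simply rejects character values containing quotes):

```python
def in_condition(column_name, choices, negation=""):
    """Build an SQL IN / NOT IN expression from a column name and a value set."""
    # Integers are rendered bare; character values are single-quoted.
    rendered = [str(c) if isinstance(c, int) else f"'{c}'" for c in choices]
    keyword = "not in" if negation == "not" else "in"
    return f"{column_name} {keyword} ({', '.join(rendered)})"

print(in_condition("COL_1", [1, 2, 3]))          # COL_1 in (1, 2, 3)
print(in_condition("COL_1", ["B", "C"], "not"))  # COL_1 not in ('B', 'C')
```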
lazysql Lazy SQL programming
Description
Helper functions to build SQL statements for dbGetQuery or dbSendQuery under program control.
Details
More patterns will be added as required.
Author(s)
<NAME>
See Also
date_between, in_condition, natural_key
natural_key Create SQL string for joining on matching natural keys
Description
Create string with SQL expressions for WHERE clause to join two tables on the given columns.
Usage
natural_key(table_names, key_columns)
Arguments
table_names [character(2)]
Name of data base tables to be joined.
key_columns [character(1:Inf)]
Names of key columns in both tables.
Details
The names of tables and key columns must be valid SQL identifiers. They are validated to conform
to the regular expression returned by valid_identifier_regex.
The SQL string is created in 3 steps:
1. Combine table names with key names, e.g. "PRL.FLIGHT_NR".
2. Create logical expressions, e.g. "PRL.FLIGHT_NR = PRL_SSR.FLIGHT_NR".
3. Concatenate the logical expressions with "and" to form the final SQL expression.
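The three steps map directly onto a few lines of code. Here is a Python sketch of the same construction (an illustration only — the package is written in R, and the exact spacing of its output is not guaranteed to match):

```python
def natural_key(table_names, key_columns):
    """Build a SQL join condition from two table names and shared key columns."""
    t1, t2 = table_names
    # Steps 1 and 2: combine table names with key names and
    # create one logical expression per key column.
    exprs = [f"{t1}.{col} = {t2}.{col}" for col in key_columns]
    # Step 3: concatenate the expressions with "and".
    return " and ".join(exprs)

print(natural_key(("TAB1", "tab_2"), ("COL1", "col_2")))
# TAB1.COL1 = tab_2.COL1 and TAB1.col_2 = tab_2.col_2
```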
Value
Character string to be used in SQL statement.
Note
The current implementation assumes that key columns have the same names in both tables.
Author(s)
Uwe Block
See Also
valid_identifier_regex.
Examples
# SQL expression
(sql_expr <- lazysql::natural_key(c("TAB1", "tab_2"),c("COL1", "col_2")))
# sample SQL JOIN statement
paste("select * from TAB1, TAB2 where", sql_expr)
valid_identifier_regex
Regex pattern to validate SQL identifier names
Description
Returns a regular expression to validate unquoted SQL identifiers.
Usage
valid_identifier_regex()
Details
Valid SQL identifiers must begin with an alphabetic character followed by alphanumeric characters
or underscores "_".
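That rule — an alphabetic first character followed by alphanumerics or underscores — corresponds to a pattern along the lines of the following Python sketch. The exact anchors and character classes used by the package are an assumption here:

```python
import re

# Assumed equivalent of valid_identifier_regex(): an alphabetic start,
# then alphanumerics or underscores, matched against the whole string.
VALID_IDENTIFIER = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

assert VALID_IDENTIFIER.match("FLIGHT_NR")
assert not VALID_IDENTIFIER.match("1COL")   # must not start with a digit
assert not VALID_IDENTIFIER.match("COL-1")  # no special characters allowed
```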
Value
Character string with regular expression.
Note
The current implementation doesn’t allow any other special characters in SQL identifiers or quoted
SQL identifiers, for safety reasons. In future releases, valid SQL identifiers might be defined
depending on the target database system.
Author(s)
Uwe Block
References
ORACLE Database SQL Language Reference.
Examples
lazysql::valid_identifier_regex()
Package ‘EventDetectR’
October 12, 2022
Version 0.3.5
Title Event Detection Framework
Description Detect events in time-series data. Combines multiple well-known R packages like 'forecast'
and 'neuralnet' to deliver an easily configurable tool for multivariate event detection.
Encoding UTF-8
LazyData yes
Type Package
ByteCompile TRUE
BugReports https://github.com/frehbach/EventDetectR/issues
URL https://github.com/frehbach/EventDetectR
Repository CRAN
Depends R (>= 3.1.0)
Imports imputeTS, forecast, ggplot2, gridExtra, neuralnet
Suggests testthat, utils, caret, e1071
License GPL-3
RoxygenNote 7.1.1
NeedsCompilation no
Author <NAME> [aut],
<NAME> [aut, cre],
<NAME> [aut],
<NAME> [aut] (<https://orcid.org/0000-0002-0085-1804>)
Maintainer <NAME> <<EMAIL>>
Date/Publication 2020-10-10 15:50:02 UTC
R topics documented:
EventDetectR-packag... 2
buildEDMode... 2
detectEvent... 4
geccoIC2018Tes... 6
geccoIC2018Trai... 6
getSupportedModel... 6
getSupportedPostProcessor... 7
getSupportedPreparation... 7
plot.edObjec... 8
print.edObjec... 8
qualityStatistic... 9
simulateEvent... 9
stationBDat... 11
EventDetectR-package EventDetectR-package description
Description
Detect events/ anomalies in time-series data.
Details
The EventDetectR package enables detection of events/anomalies in multivariate time-series data.
It combines multiple well-known R packages like 'forecast' and 'neuralnet' to deliver an easily
configurable tool for event detection.
buildEDModel build Event Detection Model
Description
Builds an event detection object (edObject) containing all models and configurations that are used
to detect events in given data.
Usage
buildEDModel(
x,
dataPrepators = "ImputeTSInterpolation",
dataPreparationControl = list(),
buildModelAlgo = "ForecastETS",
buildForecastModelControl = list(),
buildNeuralNetModelControl = list(),
postProcessors = "bedAlgo",
postProcessorControl = list(),
ignoreVarianceWarning = FALSE,
oldModel = NULL
)
Arguments
x data.frame containing initial data on which the model will be fitted. Data should
be free of events. The data should not include a timestamp column
dataPrepators string or vector of strings, that defines which preparators to use. Lists are not
accepted. Usage Example: dataPreparators = "ImputeTSInterpolation" results
in the usage of imputeTS::na.interpolation as a data preparator. All possible
preparators are listed via: getSupportedPreparations() Can also be set to NULL
in order to shut off data preparation
dataPreparationControl
list, control-list containing all additional parameters that shall be passed to the
dataPreparators.
buildModelAlgo string, model name to be used. All supported models are listed via: getSupportedModels().
buildForecastModelControl
list, control-list containing all additional parameters that shall be passed to fore-
cast modeling algorithm
buildNeuralNetModelControl
list, control-list containing all additional parameters that shall be passed to the
neuralnet modeling algorithm
postProcessors string or vector of strings, that defines which postProcessors to use. Lists are
not accepted. Usage Example: postProcessors = "bedAlgo" results in the usage
of bed as an event postProcessing tool. All supported postProcessors are listed via:
getSupportedPostProcessors(). Can also be set to NULL in order to shut off data
postProcessing
postProcessorControl
list, control-list containing all additional parameters that shall be passed to the
postProcessors.
ignoreVarianceWarning
Ignores the continuously appearing warning for missing variance in some variable
columns given a smaller windowSize
oldModel If another model was previously fitted it can be passed to the next model fit. By
doing so the eventHistory is preserved
Value
model, event detection object (edObject) containing all models and configurations that are used to
detect events in given data.
Examples
## build a simple event detection model with standard configuration
x <- stationBData[100:200,-1]
buildEDModel(x,ignoreVarianceWarning = TRUE)
## Set up a more complex event detection model defining some additional configuration
buildEDModel(x, buildModelAlgo = "ForecastArima",ignoreVarianceWarning = TRUE)
## Set up a multivariate neuralnetwork model
buildEDModel(x, buildModelAlgo = "NeuralNetwork",ignoreVarianceWarning = TRUE)
detectEvents detectEvents in a given data.frame
Description
detectEvents builds a prediction model (edObject) on the first 'windowSize' points of the given
data x. The next 'nIterationsRefit' data points are classified as 'Event' or not. The window is moved
iteratively and the next models are fitted. The first 'windowSize' points will always be classified as
non-events and should only contain 'clean' data.
Usage
detectEvents(
x,
windowSize = 100,
nIterationsRefit = 1,
verbosityLevel = 0,
dataPrepators = "ImputeTSInterpolation",
dataPreparationControl = list(),
buildModelAlgo = "ForecastETS",
buildForecastModelControl = list(),
buildNeuralNetModelControl = list(),
postProcessors = "bedAlgo",
postProcessorControl = list(),
ignoreVarianceWarning = TRUE
)
Arguments
x data.frame, data which shall be classified as event or not
windowSize amount of data points to consider in each prediction model
nIterationsRefit
amount of points into the future which will be predicted without fitting a new
model. E.g. if nIterationsRefit = 10, then the next ten data points are classified
without refitting.
verbosityLevel Print output of function progress. 0 -> No output, 1 -> every 100th model build-
ing iteration, 2 -> every 10th, 3 -> every iteration
dataPrepators string or vector of strings, that defines which preparators to use. Lists are not
accepted. Usage Example: dataPreparators = "ImputeTSInterpolation" results
in the usage of imputeTS::na.interpolation as a data preparator. All possible
preparators are listed via: getSupportedPreparations()
dataPreparationControl
list, control-list containing all additional parameters that shall be passed to the
dataPreparators.
buildModelAlgo string, model name to be used. All supported models are listed via: getSupportedModels().
buildForecastModelControl
list, control-list containing all additional parameters that shall be passed to the
forecast modelling algo.
buildNeuralNetModelControl
list, control-list containing all additional parameters that shall be passed to the
neuralnet modelling algo.
postProcessors string or vector of strings, that defines which postProcessors to use. Lists are
not accepted. Usage Example: postProcessors = "bedAlgo" results in the usage
of bed as an event postProcessing tool. All supported postProcessors are listed via:
getSupportedPostProcessors()
postProcessorControl
list, control-list containing all additional parameters that shall be passed to the
postProcessors.
ignoreVarianceWarning
Ignores the continuously appearing warning for missing variance in some variable
columns given a smaller windowSize
Value
edsResults edObject, list of results. $classification -> data.frame containing the T/F event classifi-
cation
Examples
## Run event detection with default settings:
def <- detectEvents(x = stationBData[1:100,-1])
## Refit the model at every new datapoint,
## produce some output with verbosityLevel = 2, and ignore
## the variance warning
ed <- detectEvents(stationBData[1:110,-1],nIterationsRefit = 1,
verbosityLevel = 2,ignoreVarianceWarning = TRUE)
## Switch to another model: Arima
ed2 <- detectEvents(stationBData[1:110,-1],nIterationsRefit = 1,
verbosityLevel = 0,ignoreVarianceWarning = TRUE,
buildModelAlgo = "ForecastArima")
## Switch to multivariate model: NeuralNetwork
ed3 <- detectEvents(stationBData[1:110,-1],nIterationsRefit = 1, buildModelAlgo = "NeuralNetwork")
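The sliding-window scheme — fit on the last windowSize points, classify the next nIterationsRefit points, slide, refit — can be sketched as follows. This Python sketch substitutes a trivial z-score model for the package's forecast models, so it illustrates only the windowing logic, not the actual algorithms:

```python
def detect_events(x, window_size=100, n_iterations_refit=1, threshold=3.0):
    """Classify points after the initial window as events via a sliding z-score."""
    # The initial window is never classified as an event.
    flags = [False] * min(window_size, len(x))
    i = window_size
    while i < len(x):
        window = x[i - window_size:i]
        mean = sum(window) / len(window)
        var = sum((v - mean) ** 2 for v in window) / len(window)
        std = var ** 0.5 or 1.0  # avoid dividing by zero on constant windows
        # Classify the next n_iterations_refit points without refitting.
        for v in x[i:i + n_iterations_refit]:
            flags.append(abs(v - mean) / std > threshold)
        i += n_iterations_refit
    return flags

data = [0.0] * 120 + [10.0]  # a flat series with one spike at the end
flags = detect_events(data, window_size=100, n_iterations_refit=5)
# only the final spike is flagged
```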
geccoIC2018Test geccoIC2018Test
Description
2018s Test set of the gecco industrial challenge - http://www.spotseven.de/gecco/gecco-challenge/
geccoIC2018Train geccoIC2018Train
Description
2018s train set of the gecco industrial challenge - http://www.spotseven.de/gecco/gecco-challenge/
getSupportedModels getSupportedModels
Description
Get a list of all data modelling methods that are currently supported in package ’eventDetectR’.
Usage
getSupportedModels()
Value
allSupportedModels a list of strings with each supported method name. The strings can be copied
and used in calls to ’eventDetect’ or ’buildEDModel’
Examples
models <- getSupportedModels()
getSupportedPostProcessors
getSupportedPostProcessors
Description
Get a list of all data postprocessing methods that are currently supported in package ’eventDetectR’.
Usage
getSupportedPostProcessors()
Value
allSupportedPostProcessors a list of strings with each supported method name. The strings can be
copied and used in calls to ’eventDetect’ or ’buildEDModel’
Examples
preps <- getSupportedPostProcessors()
getSupportedPreparations
getSupportedPreparations
Description
Get a list of all data preparation methods that are currently supported in package ’eventDetectR’.
Usage
getSupportedPreparations()
Value
allSupportedPreparations a list of strings with each supported method name. The strings can be
copied and used in calls to ’eventDetect’ or ’buildEDModel’
Examples
preps <- getSupportedPreparations()
plot.edObject Plot an Event Detection Object
Description
Plot an Event Detection Object
Usage
## S3 method for class 'edObject'
plot(x, varsToPlot = names(x$classification), ...)
Arguments
x edObject
varsToPlot vars
... Additional parameters
Value
A Plot
print.edObject Print an Event Detection Object
Description
Prints the last classification results for an event detection object. If ’nLast’ (integer) is given, it
specifies the amount of rows to be printed.
Usage
## S3 method for class 'edObject'
print(x, ...)
Arguments
x edObject, the event detection object that shall be printed
... any additional parameters
qualityStatistics qualityStatistics
Description
Wrapper function for caret::confusionMatrix. qualityStatistics calculates statistics for judging the
quality of the eventDetection based on the fitted edModel and a reference dataset
Usage
qualityStatistics(edObject, reference)
Arguments
edObject The eventdetection object you obtain by running ’detectEvents’
reference true/false vector, reference vector based on labeled data: which datapoints are
real events.
Value
list, Confusion Matrix and Statistics
Examples
train <- geccoIC2018Train[15000:17000,]
edObject <- detectEvents(train[,-c(1,11)],windowSize = 1000,
nIterationsRefit = 500,verbosityLevel = 2,
postProcessorControl = list(nStandardDeviationseventThreshold = 3))
qualityStatistics(edObject, train$EVENT)
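Under the hood this is confusion-matrix bookkeeping (delegated to caret::confusionMatrix in R). The counting itself is simple, as this illustrative Python sketch of the same idea shows:

```python
def confusion_counts(predicted, reference):
    """Count TP/FP/TN/FN for two boolean vectors of equal length."""
    tp = sum(p and r for p, r in zip(predicted, reference))
    fp = sum(p and not r for p, r in zip(predicted, reference))
    tn = sum(not p and not r for p, r in zip(predicted, reference))
    fn = sum(not p and r for p, r in zip(predicted, reference))
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn,
            "accuracy": (tp + tn) / len(reference)}

# one true positive, one false positive, two true negatives
stats = confusion_counts([True, False, True, False],
                         [True, False, False, False])
```

caret reports many more statistics (sensitivity, specificity, kappa, ...), but all of them derive from these four counts.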
simulateEvents Imposes simulated events on the top of the data
Description
Simulates Events on columns of a data frame or a matrix by applying different transformations. The
events of type sinusoidal, square, binomial or ramp can be used.
Usage
simulateEvents(
Data,
Params,
Event_type,
Event_strength = NULL,
Start_index = NULL,
Event_duration = NULL,
Percentage = NULL
)
Arguments
Data Data frame or matrix containing the data to which the events will be introduced
Params Numeric vector or vector of strings indicating the column names (in case Data is
a data frame) or the column numbers (in case Data is a matrix) of the parameters
in which an event will be simulated
Event_type String vector indicating which type of transformation the parameters will un-
dergo. Current valid options include sinusoidal, square, ramp and slowsinusoidal.
If Params contains more than one element and Event_type only contains
one element, the same transformation will be applied to all given Params
Event_strength (Optional) Numeric Vector indicating the amplitude. Only valid for sinusoidal
and square transformations. When specified for other type of transformations
it will have no effect. However it must have the same number of elements as
Params.
Start_index Numeric, indicates the index where the event should start
Event_duration Numeric, indicates the number of steps the transformation should last. Default
is 100
Percentage (Optional) Numeric value from 0 to 1. Alternative input indicating the percentage
of data that should be affected by the transformation. Either Event_duration
or Percentage should be specified.
Value
Matrix or data frame containing the selected columns with simulated events
Examples
#Generate event of type sinusoidal and ramp on two columns of the stationBData data set
simupar<-c("B_PH_VAL","B_TEMP_VAL")
SimulatedEvents<-simulateEvents(stationBData,
simupar,Event_type = c("sinusoidal","ramp"),
Start_index = 2500)
#When specifying Event_strength the length of the vector needs to match the number
#of elements in Params.
SimulatedEvents<-simulateEvents(stationBData,
simupar,Event_type = c("sinusoidal","ramp"),
Start_index = 2500,
Percentage = 0.2,
Event_strength = c(4,1))
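An event of type sinusoidal is essentially an additive transform over a slice of the series. The following Python sketch illustrates the idea; the function name, waveform, and defaults are assumptions for illustration, not the package's implementation:

```python
import math

def impose_sinusoidal_event(values, start_index, duration=100, strength=1.0):
    """Add one sine period of the given amplitude onto values[start_index:start_index+duration]."""
    out = list(values)  # leave the input untouched
    for k in range(duration):
        i = start_index + k
        if i >= len(out):
            break
        out[i] += strength * math.sin(2 * math.pi * k / duration)
    return out

series = [5.0] * 300
with_event = impose_sinusoidal_event(series, start_index=100,
                                     duration=50, strength=2.0)
# points outside [100, 150) are unchanged; inside, the series oscillates by ±2
```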
stationBData stationBData
Description
Data for package testing purposes
lumberjackconsole
===
Description
---
LumberjackConsole is a powerful logging tool for iOS or macOS applications. It provides an easy way to log messages during app development and debugging.
Features
---
* Convenient logging to the console
* Customizable log levels (verbose, debug, info, warning, error)
* Flexible formatting options
* Support for different log destinations (console, file, email, etc.)
* Support for logging network requests and responses
* Integration with other logging frameworks (CocoaLumberjack, SwiftyBeaver, etc.)
Installation
---
To install LumberjackConsole in your project, follow these steps:
1. Open your project in Xcode
2. Go to “File” > “Swift Packages” > “Add Package Dependency…”
3. Enter the package repository URL: https://github.com/XYZ/lumberjackconsole
4. Select the desired version
5. Click “Next”
6. Choose the target where you want to add the package
7. Click “Finish”
Usage
---
To start using LumberjackConsole in your code, import the module:
```
// Swift
import LumberjackConsole

// Objective-C
@import LumberjackConsole;
```
Then, configure the logger and start logging messages:
```
// Configure logger
LumberjackConsole.shared.logLevel = .debug
LumberjackConsole.shared.logDestination = .console

// Log messages
LumberjackConsole.shared.log(.info, "This is an info message")
LumberjackConsole.shared.log(.error, "An error occurred: \(error.localizedDescription)")
```
Advanced Configuration
---
For advanced configuration options, you can modify the following properties of the shared `LumberjackConsole` instance:
* `logLevel`: Sets the minimum log level to display
* `logDestination`: Sets the log destination (console, file, etc.)
* `logFormatter`: Sets the log message formatter
* `networkLoggingEnabled`: Enables or disables logging network requests and responses
Resources
---
For more information on using LumberjackConsole, refer to the following resources:
* [GitHub Repository](https://github.com/XYZ/lumberjackconsole)
* [Official Documentation](https://www.example.com/documentation)
feeluown 3.7.7 documentation
[feeluown](index.html#document-index)
---
FeelUOwn - feel your own[¶](#feeluown-feel-your-own)
===
FeelUOwn is a user-friendly, highly hackable music player.
Its main features:
* Easy to install and use, beginner friendly
* Ships with plugins for the major Chinese music platforms (NetEase, Xiami, QQ)
* Integrates easily with tools such as tmux, Emacs, and Vim
* Core modules have good documentation and test coverage
Quick Start[¶](#id1)
---
FeelUOwn is developed in Python 3. It currently uses mpv as its playback engine by default,
and builds its GUI on PyQt5.
### Installation[¶](#id2)
#### Ubuntu[¶](#ubuntu)
The commands below were tested on Ubuntu 18.04. Other Linux distributions should work similarly, though some package names may differ slightly.
```
# Install Python 3 and pip3 (already present on most systems)
sudo apt-get install python3 python3-pip

# Install libmpv1
sudo apt-get install libmpv1

# Install PyQt5
sudo apt-get install python3-pyqt5
sudo apt-get install python3-pyqt5.qtopengl
sudo apt-get install python3-pyqt5.qtsvg

# Install dbus-python
sudo apt-get install python3-dbus
sudo apt-get install python3-dbus.mainloop.pyqt5

# Install feeluown (a Python package)
# --upgrade installs the latest version; --user avoids the system directory
pip3 install 'feeluown>=3.0[battery]' --upgrade --user
pip3 install pyopengl

# Run feeluown -h to verify the installation
# If you get "Command Not Found", see the FAQ section of the docs
feeluown -h

# Generate a desktop icon
feeluown-genicon

# Users of the fcitx input method may also need the package below,
# otherwise switching input methods inside the GUI may not work
sudo apt-get install fcitx-frontend-qt5
```
#### macOS[¶](#macos)
```
# Installation verified on macOS Monterey (version 12); version 11 may not work
# (https://github.com/feeluown/FeelUOwn/issues/421)
brew tap feeluown/feeluown
brew install feeluown --with-battery  # see `brew info feeluown` for more options
feeluown genicon  # generates a FeelUOwn icon on the desktop
```
#### Windows[¶](#windows)
You can download a prebuilt package from the [releases page](https://github.com/feeluown/distribution/releases).
You can also install it manually as follows:
1. Install Python 3, see <https://www.python.org/downloads/windows/> (do not install it from the app store)
2. Download [mpv-1.dll](https://github.com/feeluown/FeelUOwn/releases/latest) and put it into the `C:\Windows\System32` directory.
3. Install PyQt5: run `pip3 install PyQt5 -i https://pypi.douban.com/simple` in cmd
4. Install feeluown: run `pip3 install feeluown[battery,win32]` in cmd
5. Run `python -m feeluown genicon` in cmd to generate a desktop icon
#### Arch Linux[¶](#arch-linux)
<https://archlinux.org/packages/community/any/feeluown/>
#### Gentoo[¶](#gentoo)
<https://github.com/microcai/gentoo-zh/tree/master/media-sound/feeluown>
#### Debian[¶](#debian)
<https://github.com/coslyk/debianopt-repo>
#### NixOS[¶](#nixos)
<https://github.com/berberman/flakes>
### Basic Usage[¶](#id4)
There are several ways to start FeelUOwn:
1. Double-click the FeelUOwn desktop icon, which starts the mixed GUI/Daemon mode
2. Run the `feeluown` command in a terminal, which also starts the mixed mode
3. Run `feeluown -nw` in a terminal, which starts the Daemon mode
A brief note on using Daemon mode
(hint: if you are not familiar with the command line, Daemon mode may take some tinkering):
```
feeluown -nw  # start feeluown in Daemon mode
fuo status  # show player status
fuo search 周杰伦  # search for songs
fuo play fuo://netease/songs/470302665  # play "世界が终るまでは…" from Slam Dunk
```
If you are familiar with the [NetCat](https://en.wikipedia.org/wiki/Netcat) tool:
```
nc localhost 23333
# type `status` to show the player status
# type `fuo play fuo://netease/songs/470302665` to play a song
```
For more details on the Daemon, run `fuo -h` to see the help text.
Features[¶](#id1)
---
### Simple installation, beginner friendly[¶](#id2)
> See the [Quick Start](index.html#document-quickstart) document for installation instructions.
### Plugins for the major Chinese music platforms[¶](#id3)
> * [NetEase Cloud Music](https://github.com/feeluown/feeluown-netease)
> * [Xiami Music](https://github.com/feeluown/feeluown-xiami)
> * [QQ Music](https://github.com/feeluown/feeluown-qqmusic)
### Text-based playlists[¶](#id6)
> Copy the content below into the file `~/.FeelUOwn/collections/favorite.fuo` and restart FeelUOwn to see the playlist:
> ```
> fuo://netease/songs/16841667 # No Matter What - Boyzone
> fuo://netease/songs/65800 # 最佳损友 - 陈奕迅
> fuo://xiami/songs/3406085 # Love Story - Taylor Swift
> fuo://netease/songs/5054926 # When You Say Noth… - Ronan Keating
> fuo://qqmusic/songs/97773 # 晴天 - 周杰伦
> fuo://qqmusic/songs/102422162 # 给我一首歌的时间 … - 周杰伦,蔡依林
> fuo://xiami/songs/1769834090 # Flower Dance - DJ OKAWARI
> ```
> You can share your playlists via gist, and sync them across devices with Dropbox or git.
### Reads an fuorc file[¶](#fuorc)
> Have you ever configured `.emacs` or `.vimrc`? `.fuorc` is just as powerful!
> See the [Configuration File](index.html#document-fuorc) document to write your own rc file.
### TCP-based control protocol[¶](#tcp)
> For example:
> ```
> # show player status
> echo status | nc localhost 23333
> # pause playback
> echo pause | nc localhost 23333
> # search for songs
> echo "search 周杰伦" | nc localhost 23333
> ```
> It can therefore **integrate easily with everyday tools such as tmux, Emacs, and Slack**
> > * [A simple Emacs client](https://github.com/feeluown/emacs-fuo),
> > [demo video](https://www.youtube.com/watch?v=-JFXo0J5D9E)
> > * tmux integration screenshot
> > * Slack integration screenshot [(configured in fuorc)](https://github.com/cosven/rcfiles/blob/498dcef385a20d5e0e5fbf06473f75769112d30c/.fuorc#L19)
### Can start without a GUI[¶](#gui)
Configuration File[¶](#id1)
---
Like Vim's `.vimrc` and Emacs's `init.el`, feeluown has its own configuration file, `.fuorc`.
The fuorc file is a Python script located at `~/.fuorc`, and any Python syntax can be used in it.
When feeluown starts, the first step is to load and parse this configuration file. Typical uses are:
1. Setting some configuration options
2. Customizing small features
An example fuorc file:
```
import os

# Custom configuration
config.THEME = 'dark'
config.COLLECTIONS_DIR = '~/Dropbox/public/music'
config.AUDIO_SELECT_POLICY = '>>>'

# A small feature: send a system notification when the song changes
def notify_song_changed(song):
    if song is not None:
        title = song.title_display
        artists_name = song.artists_name_display
        song_str = f'{title}-{artists_name}'
        os.system(f'notify-send "{song_str}"')

when('app.playlist.song_changed', notify_song_changed)

# Let editors recognize this as a Python file
#
# Local Variables:
# mode: python
# End:
#
# vim: ft=python
```
### How It Works[¶](#id2)
feeluown loads (executes) the configuration file with Python's exec. During execution,
the config object and a handful of functions are exposed into the script's namespace.
#### Functions[¶](#id3)
The functions currently exposed into that namespace are:
`feeluown.fuoexec.``add_hook`(*signal_symbol: str*, *func: Callable*, *use_symbol: bool = False*, ***kwargs*)[[source]](_modules/feeluown/fuoexec/functions.html#add_hook)[¶](#feeluown.fuoexec.add_hook)
add hook on signal
| Parameters: | * **signal_symbol** – app.{object}.{signal_name} .
* **func** – Signal receiver.
* **use_symbol** – Whether to connect the signal to the symbol of the receiver.
If this is true, the real receiver is lazy found by the symbol, and the signal connects to a symbol instead of a function object.
If this is false, problems may occur when the rcfile is reloaded,
because the signal would then connect to two *identical* receivers.
* **kwargs** – This is directly passed to Signal.connect.
|
```
>>> def func(): pass
>>> add_hook('app.initialized', func)
```
New in version 3.8: The *kwargs* keyword argument.
`feeluown.fuoexec.``rm_hook`(*signal_symbol: str*, *slot: Callable*, *use_symbol: bool = False*)[[source]](_modules/feeluown/fuoexec/functions.html#rm_hook)[¶](#feeluown.fuoexec.rm_hook)
Remove slot from signal.
If slot_symbol is not connected, this does nothing.
`feeluown.fuoexec.``source`(*filepath: str*)[[source]](_modules/feeluown/fuoexec/functions.html#source)[¶](#feeluown.fuoexec.source)
Exec a py file.
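add_hook and rm_hook sit on top of a signal/slot pattern. A minimal sketch of the idea follows; feeluown's real Signal class supports symbol-based lazy lookup and extra connect options, which are omitted here:

```python
class Signal:
    """Minimal signal: connect receivers, emit to all of them."""
    def __init__(self):
        self._receivers = []

    def connect(self, func):
        self._receivers.append(func)

    def disconnect(self, func):
        self._receivers.remove(func)

    def emit(self, *args):
        # Call every connected receiver with the emitted arguments.
        for func in self._receivers:
            func(*args)

song_changed = Signal()
played = []
song_changed.connect(lambda song: played.append(song))
song_changed.emit("No Matter What - Boyzone")
```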
#### The config object[¶](#config)
`config` is an instance of [`feeluown.config.Config`](#feeluown.config.Config). There are two common usage patterns:
```
>>> theme = config.THEME # read an option's value
>>> config.THEME = 'dark' # set an option's value
```
The currently supported options are listed below.
**General options**
| Name | Type | Default | Description |
| --- | --- | --- | --- |
| DEBUG | `bool` | `False` | whether debug mode is enabled |
| MODE | `str` | `0x0000` | CLI or GUI mode |
| THEME | `str` | `auto` | auto/light/dark |
| COLLECTIONS_DIR | `str` | `''` | directory of local collections |
| LOG_TO_FILE | `bool` | `True` | write logs to a file |
| AUDIO_SELECT_POLICY | `str` | `hq<>` | [`feeluown.media.Quality.SortPolicy`](index.html#feeluown.media.Quality.SortPolicy) |
| VIDEO_SELECT_POLICY | `str` | `hd<>` | [`feeluown.media.Quality.SortPolicy`](index.html#feeluown.media.Quality.SortPolicy) |
**MPV player options** (effective when MPV is used as the playback engine)
| Name | Type | Default | Description |
| --- | --- | --- | --- |
| MPV_AUDIO_DEVICE | `str` | `auto` | MPV audio device |
*class* `feeluown.config.``Config`[[source]](_modules/feeluown/config.html#Config)[¶](#feeluown.config.Config)
The configuration module.
Users can set the value of each option in the rc file.
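One way such a table of typed options with defaults can back plain attribute access is sketched below. This is an illustration of the design, not feeluown's actual Config implementation:

```python
class Config:
    """Declare options with defaults; reject unknown or wrongly-typed values."""
    def __init__(self):
        object.__setattr__(self, "_fields", {})

    def deffield(self, name, type_, default, desc=""):
        # Register the option and install its default, bypassing validation.
        self._fields[name] = (type_, desc)
        object.__setattr__(self, name, default)

    def __setattr__(self, name, value):
        if name not in self._fields:
            raise AttributeError(f"unknown config option: {name}")
        type_, _ = self._fields[name]
        if not isinstance(value, type_):
            raise TypeError(f"{name} expects {type_.__name__}")
        object.__setattr__(self, name, value)

config = Config()
config.deffield("THEME", str, "auto", "auto/light/dark")
config.deffield("DEBUG", bool, False, "debug mode")
config.THEME = "dark"  # valid assignment; config.FOO = 1 would raise
```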
Roadmap[¶](#roadmap)
---
**Directions for the FeelUOwn project**
1. A player that is pleasant to use and easy to hack
2. A well-engineered Python project
### Second half of 2020[¶](#id1)
#### Help users discover music and the stories behind it[¶](#id2)
1. Support finding similar songs?
2. Integrate resources such as Douban, Baike, and NetEase Cloud Music comments?
3. Integrate video resources such as bilibili/youtube?
#### Improvements to existing features[¶](#id3)
1. System tray?
2. Richer fuo file features?
3. Better candidate-song matching?
#### Code structure improvements[¶](#id4)
1. Drop the feeluown concept and organize the code around the daemon and gui modes
2. Investigate whether type hints are necessary and feasible?
#### fuo protocol improvements[¶](#fuo)
常见问题[¶](#id1)
---
### 使用相关问题[¶](#id2)
#### 安装完成后,运行 feeluown 提示 Command ‘feeluown’ not found[¶](#feeluown-command-feeluown-not-found)
一般来说,安装之后, feeluown 命令会在 ~/.local/bin/ 目录下。
可以通过下面命令查看目录下是否有 feeluown:
```
ls -l ~/.local/bin/feeluown
```
如果输出正常,则说明安装已经成功,接下来修改 PATH 环境变量即可。
如果是使用 bash 或者 zsh,大家可以在 ~/.bashrc 或者 ~/.zshrc 文件中加入一行:
```
export PATH=~/.local/bin:$PATH
```
然后重新进入 shell,下次就可以直接运行 feeluown 了。
### 开发相关问题[¶](#id3)
本地开发快速上手[¶](#id1)
---
首先,非常感谢您愿意贡献自己的力量让 FeelUOwn 播放器变得更好~
### 推荐的开发流程[¶](#id2)
这里假设读者已经对 Git 和 Python 相关工具比较熟悉
1. 在 GitHub 上 fork 一份代码到自己名下
2. clone 自己 fork 的代码: `git clone git@github.com:username/FeelUOwn.git`
3. 在 Home 目录或者项目目录下创建一个虚拟环境(假设你已经安装 Python 3)
```
# 以在 Home 目录下创建虚拟环境为例
# 创建 ~/.venvs 目录,如果之前没有创建的话
mkdir -p ~/.venvs
# 创建虚拟环境(大家也可以选择用 virtualenv)
python3 -m venv ~/.venvs/fuo
# 激活虚拟环境
source ~/.venvs/fuo/bin/activate
# 安装项目依赖
pip3 install -e .
```
Note
在 Linux 或者 macOS 下,大家一般都是用 apt-get 或者 brew 将 `PyQt5` 安装到 Python 系统包目录,
也就是说,在虚拟环境里面, **不能 import 到 PyQt5 这个包** 。建议的解决方法是:
1. 创建一个干净的虚拟环境(不包含系统包)
2. `pip3 install -e .` 安装项目以及依赖
3. 将虚拟环境配置改成包含系统包,将 `~/.venvs/fuo/pyvenv.cfg`
配置文件中的 `include-system-site-packages` 字段的值改为 `true`
这样可以尽量避免包版本冲突、以及依赖版本过低的情况
4. 以 debug 模式启动 feeluown
```
# 运行 feeluown -h 查看更多选项
feeluown --debug
```
5. 修改代码
6. push 到自己的 master 或其它分支
7. 给 FeelUOwn 项目提交 PR
8. (可选)在开发者群中通知大家进行 review
### 程序启动流程[¶](#id3)
`feeluown(fuo)` 命令入口文件为 `feeluown/entry_points/run.py` ,主函数为 run 函数。
程序架构[¶](#id1)
---
FeelUOwn 中有一个核心对象 `app`, 它是我们进行几乎一切操作的入口。
在初始化 app 时,我们会实例化一些类,比如 Library/LiveLyric, 这些实例工作不依赖 app 对象,我们把它们放在 feeluown 包中。另外,我们也会创建很多 Manager 实例,
比如 PluginManager/HotkeyManager 等,它们往往都是依赖 app 的,
我们目前将它们各自作为一个模块放在 feeluown 包中。
### 主要模块[¶](#id2)
| 稳定 | 名字 | 模块 |
| --- | --- | --- |
| 🔴 | 音乐资源模型 | `feeluown.models` |
| 🔴 | 音乐库 | `feeluown.library.Library` |
| 🔴 | 播放器 | `feeluown.player` |
| 🔴 | fuo 协议 | `feeluown.protocol.FuoProcotol` |
| 🔴 | 版本 | `feeluown.version.VersionManager` |
| 🔴 | 小提示管理 | `feeluown.tips.TipsManager` |
| 🔴 | 本地收藏管理 | `feeluown.collection.CollectionManager` |
| 🔴 | 浏览历史记录 | `feeluown.browser` |
| 🔴 | 快捷键管理 | `feeluown.hotkey.HotkeyManager` |
| 🔴 | 图片管理 | `feeluown.image` |
| 🔴 | 资源提供方 UI | `feeluown.gui.uimodels.ProviderUiManager` |
| 🔴 | 我的音乐 UI | `feeluown.gui.uimodels.MyMusicUiManager` |
| 🔴 | 歌单列表 UI | [`feeluown.gui.uimodels.playlist`](index.html#module-feeluown.gui.uimodels.playlist) |
### 界面[¶](#id3)
FeelUOwn 启动时有两种模式可以选择,CLI 模式和 GUI 模式,大部分 Manager 可以在两种模式下工作,也有一部分 Manager只在 GUI 模式下工作,这些 Manager 往往和 UI 操作相关。
FeelUOwn UI 部分的核心对象是 `app.ui`, 我们下面就从界面开始,
来详细了解 FeelUOwn 程序的整体架构。
整个 GUI 区域划分比较简单和规整,下图大致的描述了 GUI 的几个主要组成部分。
图中文字部分对应的都是代码中的变量,它们也比较好的反映了对应区域的功能。
一开始对项目可能不是特别熟悉,大家可以对照这个图来看代码。
从区域划分来看,程序主界面主要分为四大块(蓝色部分):
1. `magicbox` : 用户搜索、显示用户操作通知、执行 fuo 命令、
执行 Python 代码相关操作都在此组件中完成
2. `left_panel` : 显示音乐库、用户操作历史记录、用户歌单列表
3. `right_panel` : 目前显示歌单列表详情、歌手详情等。
之后可能会支持更多其它形式的展示:比如批量展示专辑。
4. `pc_panel` : 与播放器相关的控制部分,主要是播放/暂停、进度条、
音量调节、显示当前播放列表、修改播放模式等操作按钮。
各大块可以拆分成小块(红色部分):
* **left_panel 区域**
+ `provider_view` 组件展示应用支持的音乐提供方
+ `histories_view` 组件展示用户浏览记录
+ `playlists_view` 组件展示用户歌单列表
* **right_panel 区域**
+ `songs_table` 批量展示歌曲,比如:歌单中的歌曲、搜索结果的歌曲部分等。
+ `table_overview` 是对 songs_table 的概览,由封面图和描述组成。
接口参考手册[¶](#id1)
---
这个参考手册描述了 feeluown 的几大组成部分,各组成部分的主要模块,
以及模块的相对稳定接口。
### 播放模块[¶](#media-player)
播放模块由两部分组成:播放列表(*Playlist*)和播放器(*Player*)。
播放列表维护了一个媒体资源集合,并提供若干资源获取接口,播放器通过接口获取资源,
然后进行播放。这些接口包括: `current_song`, `next_song`, `previous_song` 。
播放列表吐资源给播放器的时候,是有一定的顺序的,我们称之为回放模式(*PlaybackMode*)。
回放模式有四种:单曲循环;顺序;循环;随机。上述三个资源访问接口吐资源的顺序都会受回放模式影响。
当播放列表 `current_song` 变化时,它会发出 `song_changed` 信号,播放器会监听该信号,
从而播放、停止(song 为空时)或者切换歌曲。播放歌曲时,播放器的状态(*State*)会发生变化,
当没有歌曲播放的时候,播放器为停止(stopped)状态,有歌曲播放的时候,为正在播放状态。
当一首歌开始播放后,播放器会发出 `media_changed` 信号,
当一首歌曲播放完毕时,播放器会发出 `song_finished` 信号。
播放列表和播放器之间的调用关系如下:
```
Playlist (Mpv)Player
current_song.setter ---[song_changed]---> play_song
^ |
| | <song==None>
|<- play_song |---> stop
|<- play_next/play_previous |
|<- play_next <-| prepare_media
|<- replay <-| <mode==one_loop> |
| v
| play ---[media_changed]--->
[song_finished]
|
|
--- event(END_FILE)
^
|
MpvPlayer
```
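上面的调用关系可以用一个极简的信号/槽草图来示意(假设性代码,用简单的回调列表模拟信号,并非 feeluown 的真实实现):

```python
class Signal:
    """用回调列表模拟信号/槽机制"""

    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)


class Playlist:
    def __init__(self):
        self.song_changed = Signal()
        self._current_song = None

    @property
    def current_song(self):
        return self._current_song

    @current_song.setter
    def current_song(self, song):
        self._current_song = song
        # current_song 变化时发出 song_changed 信号,通知播放器
        self.song_changed.emit(song)


class Player:
    def __init__(self, playlist):
        self.state = 'stopped'
        playlist.song_changed.connect(self.play_song)

    def play_song(self, song):
        if song is None:  # song 为空时停止播放
            self.state = 'stopped'
        else:
            self.state = 'playing'


playlist = Playlist()
player = Player(playlist)
playlist.current_song = 'song-1'  # player 进入 playing 状态
```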
### 通用管理模块[¶](#module-feeluown.task)
`feeluown.task.``is_in_loop_thread`()[[source]](_modules/feeluown/task.html#is_in_loop_thread)[¶](#feeluown.task.is_in_loop_thread)
check if the current thread has an event loop
*class* `feeluown.task.``TaskKind`[[source]](_modules/feeluown/task.html#TaskKind)[¶](#feeluown.task.TaskKind)
An enumeration.
`preemptive` *= 'preemptive'*[¶](#feeluown.task.TaskKind.preemptive)
preemptive task
`cooperative` *= 'cooperative'*[¶](#feeluown.task.TaskKind.cooperative)
cooperative task
*class* `feeluown.task.``PreemptiveTaskSpec`(*mgr*, *name*)[[source]](_modules/feeluown/task.html#PreemptiveTaskSpec)[¶](#feeluown.task.PreemptiveTaskSpec)
Preemptive task specification (threadsafe)
`bind_coro`(*coro*)[[source]](_modules/feeluown/task.html#PreemptiveTaskSpec.bind_coro)[¶](#feeluown.task.PreemptiveTaskSpec.bind_coro)
run the coroutine and bind the task
it will cancel the previous task if one exists
| Returns: | `asyncio.Task` |
`bind_blocking_io`(*func*, **args*)[[source]](_modules/feeluown/task.html#PreemptiveTaskSpec.bind_blocking_io)[¶](#feeluown.task.PreemptiveTaskSpec.bind_blocking_io)
run blocking io func in a thread executor, and bind the task
it will cancel the previous task if one exists
| Returns: | `asyncio.Task` |
*class* `feeluown.task.``TaskManager`(*app*)[[source]](_modules/feeluown/task.html#TaskManager)[¶](#feeluown.task.TaskManager)
named task manager
Usage:
```
async def fetch_song():
pass
task_name = 'unique-name'
task_spec = task_mgr.get_or_create(task_name, TaskType.preemptive)
task = task_spec.bind_coro(fetch_song())
```
`get_or_create`(*name*, *kind=<TaskKind.preemptive: 'preemptive'>*)[[source]](_modules/feeluown/task.html#TaskManager.get_or_create)[¶](#feeluown.task.TaskManager.get_or_create)
get a task spec; it will be created if it does not exist
| Parameters: | * **name** – task identifier(name)
* **kind** – [`TaskKind`](#feeluown.task.TaskKind)
|
TODO: client should register first, then get by name
### GUI 相关管理模块[¶](#gui)
Note
目前,大部分 GUI 组件的接口都不稳定,我们在实现插件时,如果需要操作 GUI 组件,
请调用以下模块的接口。如果我们想实现的功能通过以下接口暂时实现不了,
请和 @cosven 联系。
*class* `feeluown.gui.uimodels.my_music.``MyMusicUiManager`(*app*)[[source]](_modules/feeluown/gui/uimodels/my_music.html#MyMusicUiManager)[¶](#feeluown.gui.uimodels.my_music.MyMusicUiManager)
Note
目前,我们用数组的数据结构来保存 items,只提供 add_item 和 clear 方法。
我们希望,MyMusic 中的 items 应该和 provider 保持关联。provider 是 MyMusic 的上下文。
而 Provider 是比较上层的对象,我们会提供 get_item 这种比较精细的控制方法。
#### 播放列表管理[¶](#id4)
*class* `feeluown.gui.uimodels.playlist.``PlaylistUiItem`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/gui/uimodels/playlist.html#PlaylistUiItem)[¶](#feeluown.gui.uimodels.playlist.PlaylistUiItem)
根据目前经验,播放列表的相关操作最基本的就是几个:
* 创建、删除
* 添加、移除歌曲
* 重命名
* 点击展示这个歌单
这些操作对各平台的播放列表、歌单来说,语义都是一致的,
所以 PlaylistUiItem 暂时不提供 clicked 等操作信号。
### GUI 组件[¶](#id5)
*class* `feeluown.gui.widgets.login.``LoginDialog`[[source]](_modules/feeluown/gui/widgets/login.html#LoginDialog)[¶](#feeluown.gui.widgets.login.LoginDialog)
Base class for login dialogs
`login_succeed`[¶](#feeluown.gui.widgets.login.LoginDialog.login_succeed)
login succeed signal
*class* `feeluown.gui.widgets.login.``CookiesLoginDialog`(*uri: str = None*, *required_cookies_fields=None*)[[source]](_modules/feeluown/gui/widgets/login.html#CookiesLoginDialog)[¶](#feeluown.gui.widgets.login.CookiesLoginDialog)
CookiesLoginDialog provides a text edit area and a login button.
User firstly fills in the text edit area with cookies. User can then click the login button. The clicked signal is connected to
[`login()`](#feeluown.gui.widgets.login.CookiesLoginDialog.login).
Cookies can be in text or JSON format. Users can copy cookies from a web browser like Chrome or Firefox. Cookies copied from Firefox are in JSON format and cookies copied from Chrome are in text format.
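Parsing both formats can be sketched as follows (a hypothetical helper for illustration only — `parse_cookies` is not a real feeluown function; it just shows one way to accept both JSON and `k1=v1; k2=v2` cookie text):

```python
import json


def parse_cookies(text):
    """Parse JSON or "k1=v1; k2=v2" style cookie text into a dict.

    Returns None when the content is invalid, matching the
    get_cookies() convention described above.
    """
    text = text.strip()
    if not text:
        return None
    # First try JSON (the format Firefox copies).
    try:
        data = json.loads(text)
        if isinstance(data, dict):
            return data
    except ValueError:
        pass
    # Fall back to "key=value; key=value" text (the format Chrome copies).
    cookies = {}
    for pair in text.split(';'):
        if '=' not in pair:
            return None
        key, _, value = pair.partition('=')
        cookies[key.strip()] = value.strip()
    return cookies
```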
Subclass MUST implement four methods.
* [`setup_user()`](#feeluown.gui.widgets.login.CookiesLoginDialog.setup_user)
* [`user_from_cookies()`](#feeluown.gui.widgets.login.CookiesLoginDialog.user_from_cookies)
* [`load_user_cookies()`](#feeluown.gui.widgets.login.CookiesLoginDialog.load_user_cookies)
* [`dump_user_cookies()`](#feeluown.gui.widgets.login.CookiesLoginDialog.dump_user_cookies)
One usage example: feeluown-qqmusic.
`get_cookies`()[[source]](_modules/feeluown/gui/widgets/login.html#CookiesLoginDialog.get_cookies)[¶](#feeluown.gui.widgets.login.CookiesLoginDialog.get_cookies)
Parse the content in the text edit area
| Returns: | return None when the content is invalid |
| Return type: | dict or None |
`show_hint`(*text*, *color=None*)[[source]](_modules/feeluown/gui/widgets/login.html#CookiesLoginDialog.show_hint)[¶](#feeluown.gui.widgets.login.CookiesLoginDialog.show_hint)
Show hint message on dialog
| Parameters: | **color** (*string*) – red for error, orange for warning,
green for success |
`autologin`()[[source]](_modules/feeluown/gui/widgets/login.html#CookiesLoginDialog.autologin)[¶](#feeluown.gui.widgets.login.CookiesLoginDialog.autologin)
Try to load user cookies and login with them
Generally, you can call this method after the dialog is shown.
`setup_user`(*user*)[[source]](_modules/feeluown/gui/widgets/login.html#CookiesLoginDialog.setup_user)[¶](#feeluown.gui.widgets.login.CookiesLoginDialog.setup_user)
Setup user session
`load_user_cookies`()[[source]](_modules/feeluown/gui/widgets/login.html#CookiesLoginDialog.load_user_cookies)[¶](#feeluown.gui.widgets.login.CookiesLoginDialog.load_user_cookies)
Load user cookies from somewhere
Load the cookies that were dumped before. If loading fails,
just return None.
| Returns: | cookies in dict format |
| Return type: | dict or None |
`dump_user_cookies`(*user*, *cookies*)[[source]](_modules/feeluown/gui/widgets/login.html#CookiesLoginDialog.dump_user_cookies)[¶](#feeluown.gui.widgets.login.CookiesLoginDialog.dump_user_cookies)
Dump user cookies to somewhere
Generally, you can store the cookies in the FeelUOwn data directory with a specific filename.
`login`()[[source]](_modules/feeluown/gui/widgets/login.html#CookiesLoginDialog.login)[¶](#feeluown.gui.widgets.login.CookiesLoginDialog.login)
Login with cookies
Read the cookies that have been filled in and create a user from them.
If it succeeds, the [`login_succeed`](#feeluown.gui.widgets.login.LoginDialog.login_succeed) signal will be emitted.
If it fails, show a specific error message on the dialog based on the exception.
`user_from_cookies`(*cookies*)[[source]](_modules/feeluown/gui/widgets/login.html#CookiesLoginDialog.user_from_cookies)[¶](#feeluown.gui.widgets.login.CookiesLoginDialog.user_from_cookies)
Create a user model from cookies dict
| Return type: | [feeluown.models.UserModel](index.html#feeluown.models.UserModel) |
### 异常[¶](#module-feeluown.excs)
HELP: I do not know how to design exception classes,
so these interfaces may change frequently.
*exception* `feeluown.excs.``FuoException`[[source]](_modules/feeluown/excs.html#FuoException)[¶](#feeluown.excs.FuoException)
*exception* `feeluown.excs.``LibraryException`[[source]](_modules/feeluown/excs.html#LibraryException)[¶](#feeluown.excs.LibraryException)
*exception* `feeluown.excs.``ProviderIOError`(*message=''*, *provider=None*)[[source]](_modules/feeluown/excs.html#ProviderIOError)[¶](#feeluown.excs.ProviderIOError)
Read/write data from/to provider failed
Currently, all providers use requests to send HTTP requests,
and many RequestException are not caught, so ProviderIOError inherits RequestException.
*exception* `feeluown.excs.``CreateReaderFailed`(*message=''*, *provider=None*)[[source]](_modules/feeluown/excs.html#CreateReaderFailed)[¶](#feeluown.excs.CreateReaderFailed)
(DEPRECATED) use ProviderIOError instead
*exception* `feeluown.excs.``ReaderException`(*message=''*, *provider=None*)[[source]](_modules/feeluown/excs.html#ReaderException)[¶](#feeluown.excs.ReaderException)
(DEPRECATED) use ProviderIOError instead
*exception* `feeluown.excs.``ReadFailed`(*message=''*, *provider=None*)[[source]](_modules/feeluown/excs.html#ReadFailed)[¶](#feeluown.excs.ReadFailed)
(DEPRECATED) use ProviderIOError instead
*exception* `feeluown.excs.``ResourceNotFound`[[source]](_modules/feeluown/excs.html#ResourceNotFound)[¶](#feeluown.excs.ResourceNotFound)
*exception* `feeluown.excs.``ProviderAlreadyRegistered`[[source]](_modules/feeluown/excs.html#ProviderAlreadyRegistered)[¶](#feeluown.excs.ProviderAlreadyRegistered)
*exception* `feeluown.excs.``ProviderNotFound`[[source]](_modules/feeluown/excs.html#ProviderNotFound)[¶](#feeluown.excs.ProviderNotFound)
*exception* `feeluown.excs.``ModelNotFound`[[source]](_modules/feeluown/excs.html#ModelNotFound)[¶](#feeluown.excs.ModelNotFound)
Model is not found
For example, a model identifier is invalid.
New in version 3.7.7.
*exception* `feeluown.excs.``NotSupported`[[source]](_modules/feeluown/excs.html#NotSupported)[¶](#feeluown.excs.NotSupported)
Provider does not support the operation
*exception* `feeluown.excs.``MediaNotFound`[[source]](_modules/feeluown/excs.html#MediaNotFound)[¶](#feeluown.excs.MediaNotFound)
*exception* `feeluown.excs.``NoUserLoggedIn`[[source]](_modules/feeluown/excs.html#NoUserLoggedIn)[¶](#feeluown.excs.NoUserLoggedIn)
(DEPRECATED) return None when there is no user logged in
(Deprecated) 媒体资源管理 v1[¶](#deprecated-v1)
---
feeluown 一个设计目标是让用户能够合理整合并高效使用自己在各个音乐平台能获取的资源。
而每个平台提供资源数据的方式都有差异。有的平台有公开的 RESTful API,通过它可以获取到资源的元信息,
并且也有接口可以获取到资源的链接;而有的平台则没有公开的可用接口,但是通过一些技术手段(如爬虫),
也可以获取到平台的资源。另外,每个平台的资源模型也有差异。有的平台会用一个非常大的结构体来表示一首歌曲;
而有的平台会有很多个小结构体,来拼凑出一首歌曲的全部信息。这些平台的差异给 feeluown 的架构设计带来了一些挑战,
feeluown 通过“媒体资源管理”子系统来解决这些困难。
音乐库是媒体资源管理子系统的入口。音乐库部分负责管理 feeluown 的音乐资源,包括歌曲、歌手、专辑详情获取,
专辑、歌单封面获取及缓存(这是设计目标,部分逻辑目前未实现)。它主要由几个部分组成:
音乐对象模型(*Model*)、音乐提供方(*Provider*)、提供方管理(*Library*)。
```
+---+
| +---+ |
| | Library | |
| +---+ +---+ |
| | | Models | |
| | +---+ | Song | |
| |--| provider(netease) | -| Artist |--- |
| | +---+ | Album | | +---+ |
| | | ... | | | Model Spec | |
| | +---+ | duck typing | (Base Model) | |
| | |---| | |
| | +---+ | | BaseSong | |
| | +---+ | Models | | | BaseArtist | |
| |--| provider(xiami) |-| Song |--- | ... | |
| | +---+ | ... | +---+ |
| | +---+ |
| | |
| |--... |
| |
+---+
```
### 音乐库[¶](#library)
音乐库模块管理资源提供方(*Provider*)。
```
# 注册一个资源提供方
library.register(provider)
# 获取资源提供方实例
provider = library.get(provider.identifier)
# 列出所有资源提供方
library.list()
# 在音乐库中搜索关键词
library.search('linkin park')
```
### 资源提供方[¶](#id2)
歌曲等音乐资源都来自于某一个提供方。比如,我们认为本地音乐的提供方是本地,
网易云音乐资源的提供方是网易,等等。对应到程序设计上,每个提供方都对应一个 provider 实例。
provider 是我们访问具体某个音乐平台资源的入口。
在 feeluown 生态中,每个音乐资源提供方都对应着一个插件,我们现在有 feeluown-local/feeluown-netease 等许多插件,这些插件在启动时,会注册一个 provider 实例到 feeluown 的音乐库模块上。
注册完成之后,音乐库和 feeluown 其它模块就能访问到这个提供方的资源。
举个栗子,feeluown-local 插件在启动时就创建了一个 *identifier* 为 `local` 的 provider 实例,
并将它注册到音乐库中,这样,当我们访问音乐库资源时,就能访问到本地音乐资源。
这个过程抽象为代码的话,它就类似:
```
result = library.search('keyword')
# we will see nothing in result because library has no provider
from fuo_local import provider
library.register(provider)
result = library.search('keyword')
# we may see that some local songs are in the search result
```
每个 provider 实例,它都需要提供访问具体资源的入口。举个栗子,
```
from fuo_local import provider
# we can get a song instance by a song identifier
song = provider.Song.get(song_id)
# we can also get an artist instance by an artist identifier
artist = provider.Artist.get(artist_id)
```
下面是音乐资源提供方的抽象基类,我们推荐大家基于此来实现一个 Provider 类。
当我们访问一个音乐提供方的资源时,不同的用户对不同资源的权限也是不一样的。
如果我们需要以特定的身份来访问音乐资源,我们可以使用 provider 的 `auth` 和 `auth_as` 方法:
```
from fuo_netease import provider

user_a = obj  # UserModel
provider.auth(user_a)
# 使用 auth_as 来临时切换用户身份
with provider.auth_as(user_b):
    provider.Song.get(song_id)
```
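`auth_as` 的"临时切换、退出时恢复"语义可以用上下文管理器草拟如下(假设性简化实现,仅示意思路,并非 feeluown 的真实代码):

```python
from contextlib import contextmanager


class Provider:
    """极简 provider 草图,只演示 auth / auth_as 的语义"""

    def __init__(self):
        self._user = None

    def auth(self, user):
        # 把 provider 的当前身份设置为 user
        self._user = user

    @contextmanager
    def auth_as(self, user):
        # 临时切换为 user,退出 with 块时恢复原来的身份
        old_user = self._user
        self.auth(user)
        try:
            yield
        finally:
            self.auth(old_user)


provider = Provider()
provider.auth('user_a')
with provider.auth_as('user_b'):
    assert provider._user == 'user_b'   # with 块内是临时身份
assert provider._user == 'user_a'       # 退出后恢复
```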
### 资源模型[¶](#id3)
在 feeluown 中,我们为各种音乐资源定义了各自的模型标准,
每个资源提供方创建的资源实例都应该遵守这个标准。
#### 模型类型[¶](#id4)
我们预定义的音乐资源相关的模型有 6 种:歌曲,歌手,专辑,歌单,歌词,用户。
*class* `feeluown.models.``ModelType`[[source]](_modules/feeluown/models/base.html#ModelType)[¶](#feeluown.models.ModelType)
An enumeration.
`dummy` *= 0*[¶](#feeluown.models.ModelType.dummy)
`song` *= 1*[¶](#feeluown.models.ModelType.song)
`artist` *= 2*[¶](#feeluown.models.ModelType.artist)
`album` *= 3*[¶](#feeluown.models.ModelType.album)
`playlist` *= 4*[¶](#feeluown.models.ModelType.playlist)
`lyric` *= 5*[¶](#feeluown.models.ModelType.lyric)
`video` *= 6*[¶](#feeluown.models.ModelType.video)
`user` *= 17*[¶](#feeluown.models.ModelType.user)
`comment` *= 18*[¶](#feeluown.models.ModelType.comment)
`none` *= 128*[¶](#feeluown.models.ModelType.none)
#### 模型基类及其元信息[¶](#id5)
这几种模型定义都继承于一个模型基类 `BaseModel` 。
而每个模型都会有自己的元信息,比如:这个模型是什么类型?有哪些字段?
有哪些方法?这些元信息都会记录在模型的 inner class `Meta` 中。
*class* `feeluown.models.``BaseModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#BaseModel)[¶](#feeluown.models.BaseModel)
Base model for music resource
*class* `Meta`[¶](#feeluown.models.BaseModel.Meta)
模型元信息。模型的实例可以通过 `meta` 属性来访问模型元信息。
此 Meta 类的设计借鉴于 [Django Model Meta Options](https://docs.djangoproject.com/en/dev/ref/models/options/) 。
`model_type`[¶](#feeluown.models.BaseModel.Meta.model_type)
模型类型,默认为 ModelType.dummy
`fields`[¶](#feeluown.models.BaseModel.Meta.fields)
模型的字段,所有模型都必须有一个 identifier 字段
除了类型和字段这两个基本的元信息之外,模型还会有一些其它的元信息记录在 Meta 类中,不同类型的模型,元信息可能也会不同,后面我们会陆续介绍。
#### 一个模型示例[¶](#id6)
我们以 **歌曲模型** 为例,来看看一个真正的模型通常都由哪些部分组成:
```
# 继承 BaseModel
class SongModel(BaseModel):
# 定义歌曲模型的元信息
class Meta:
# 类型为 ModelType.song
model_type = ModelType.song
# 定义模型字段
fields = ['album', 'artists', 'lyric', 'comments', 'title', 'url',
'duration', 'mv']
# 定义模型展示字段
fields_display = ['title', 'artists_name', 'album_name', 'duration_ms']
# 除了上述定义的模型字段外,歌曲模型实例也总是会有下面三个 property
@property
def artists_name(self):
return _get_artists_name(self.artists or [])
@property
def album_name(self):
return self.album.name if self.album is not None else ''
@property
def duration_ms(self):
if self.duration is not None:
seconds = self.duration / 1000
m, s = seconds / 60, seconds % 60
return '{:02}:{:02}'.format(int(m), int(s))
```
这个模型有几个方面的意义:
1. 它定义了该模型的类型(*model_type*) 为 `ModelType.song`
2. 它定义了一首歌曲应该有哪些字段(*fields*)。在这个例子中,也就是:
`album`, `artists`, `lyric`, `comments`, `title`, `url` ,
`duration`, `mv` 8 个字段。
另外,它还有 `artists_name`, `album_name`, `duration_ms` 这 3 个属性。
其它模块在使用 model 实例时,总是可以访问这 8 个字段以及 3 个属性。
举个例子,在程序的其它模块中,当我们遇到 song 对象时,我们可以确定,
这个对象一定 **会有 title 属性** 。这也要求资源提供方在实现它们的资源模型时,
要严格按照规范来进行。
3. 它定义了一首歌曲的展示字段(*fields_display*)。我们在后面会详细介绍它。
#### 访问模型字段[¶](#visit-model-fields)
模型定义了一个模型实例应该具有哪些字段,但访问实例字段的时候,
我们需要注意两个问题:
1. **字段的值不一定是可用的** 。比如一首歌可能没有歌词,
也没有评论,这时,我们访问这首歌 `song.lyric` 时,得到的就是一个空字符串,
访问 `song.comments` 属性时,得到的是一个空列表,但不会是 None 。
2. **第一次获取字段的值的时候可能会产生网络请求** 。以歌曲为例,
我们只要 identifier 就可以实例化一个歌曲模型,这时它的 url/lyric 等字段值都没有,
当我们第一次获取歌曲 `url` , `lyric` , `comments` 等属性时,
model 可能会触发一个网络请求,从资源提供方的服务端来获取资源的信息。
**这也意味着简单的属性访问可能会触发网络或者文件 IO** 。
Note
访问模型字段和访问一个 python 对象属性不同,它里面有非常多的黑魔法。
这些黑魔法对我们来说有利有弊,这样设计的缘由可以参考 [怎样较好的抽象不同的资源提供方?](index.html#research-model) 。
黑魔法:我们重写了 BaseModel 的 `__getattribute__` 方法,
当我们访问实例的一个字段时,如果这个字段值为 None (没有初始化的字段的值都是 None),
实例会调用自己模型的 `get` 类方法来初始化自己,我们认为 get 方法可以初始化所有字段
(前面我们也提到 get 方法应该尽可能初始化所有字段,get 方法往往是一次 IO 操作,比如网络请求),
调用 get 方法后,实例进入下一 *生命阶段* gotten。
这样,调用方在访问这个字段时,就总是能得到一个初始化后的值,
模型实例生命阶段更多细节可以参考 [实例生命周期](#model-stage) 。
*class* `feeluown.models.``BaseModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#BaseModel)
Base model for music resource
`__getattribute__`(*name*)[¶](#feeluown.models.BaseModel.__getattribute__)
获取 model 某一属性时,如果该属性值为 None,且该属性是 field,且该属性允许触发 get 方法,这时我们会尝试通过获取 model 详情来初始化这个字段,与此同时,还会给部分 fields 重新赋值。
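这个"字段为 None 时触发 get 来补全"的黑魔法,可以用下面的草图示意(假设性的简化实现,真实代码要复杂得多):

```python
class LazyModel:
    """演示通过 __getattribute__ 实现字段惰性初始化的草图"""

    fields = ('identifier', 'title', 'url')

    def __init__(self, identifier, **kwargs):
        self.identifier = identifier
        for field in self.fields:
            if field != 'identifier':
                # 未初始化的字段的值都是 None
                setattr(self, field, kwargs.get(field))

    @classmethod
    def get(cls, identifier):
        # 真实场景中这里是一次 IO 操作(如网络请求),
        # 返回字段尽可能齐全的实例;此处用假数据代替
        return cls(identifier, title='in the end',
                   url='http://example.com/song.mp3')

    def __getattribute__(self, name):
        value = object.__getattribute__(self, name)
        fields = object.__getattribute__(self, 'fields')
        if name in fields and value is None:
            # 字段未初始化:调用 get 补全所有字段
            identifier = object.__getattribute__(self, 'identifier')
            full = type(self).get(identifier)
            for field in fields:
                object.__setattr__(self, field, getattr(full, field))
            return object.__getattribute__(self, name)
        return value


song = LazyModel(identifier=1)
title = song.title  # 第一次访问触发 get,之后所有字段都已初始化
```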
#### 模型的展示字段[¶](#id8)
对于一首歌曲而言,我们认为 `歌曲标题+歌手名+专辑名+时长` 可以较好的代表一首歌曲,
因为用户(人类)看到这四个字段,往往就能大概率的确定这是哪首歌。如果只有歌曲标题,
我们则不那么确定,因为一首歌可能被许多歌手唱过;只有标题和歌手名,我们同样不能确定。
不像计算机,或者说软件,它会使用 identifier 字段来标识区分一个资源,而用户(人类)
往往会通过一些 **可读的** 特征字段来标识一个资源,软件在展示一个资源的时候,
往往主要也会展示这些 **特征** 字段 ,我们将这些字段称之为 *展示字段* 。
对于一个模型来说,展示字段会有一个展示值,我们可以在字段后加上 `_display`
来访问展示值。比如访问歌曲标题的展示值: `song.title_display` ,
歌曲的歌手名称的展示值: `song.artists_name_display` 。
展示字段的展示值有一些特点:
1. 展示值的类型总是字符串
2. 访问展示值总是安全的,不会触发网络请求。当值为空时,返回空字符串。
3. 展示值和对应字段真正的值可能不一样(从第 2 点可以推断出来)
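`_display` 后缀的访问方式可以用 `__getattr__` 草拟如下(假设性的简化实现,仅示意"展示值总是字符串、访问总是安全"这两个特点):

```python
class DisplayModel:
    """演示 xxx_display 展示值访问的草图"""

    fields_display = ('title', 'artists_name')

    def __init__(self, **kwargs):
        # 展示值存放在独立的字典里,与真实字段分开
        self._display_values = {
            k: kwargs.get(k, '') for k in self.fields_display
        }

    def __getattr__(self, name):
        # 只拦截 xxx_display 形式的属性访问
        if name.endswith('_display'):
            field = name[:-len('_display')]
            if field in self.fields_display:
                value = self._display_values.get(field)
                # 展示值总是字符串,为空时返回空字符串,不触发网络请求
                return str(value) if value is not None else ''
        raise AttributeError(name)


song = DisplayModel(title='in the end')
assert song.title_display == 'in the end'
assert song.artists_name_display == ''  # 访问展示值总是安全的
```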
#### 分页读[¶](#id9)
服务端提供大数据集时往往会采用分页技术。对于服务端(API),接口一般有两种设计:
1. offset + limit
2. page + pageSize
这两种设计没有根本区别,只是编程的时候会不同,一般来说,大家认为 offset + limit 更直观。
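两种分页参数可以互相换算(示意代码,这里假设 page 从 1 开始计数):

```python
def page_to_offset(page, page_size):
    """page + pageSize -> offset + limit(page 从 1 开始)"""
    return (page - 1) * page_size, page_size


def offset_to_page(offset, limit):
    """offset + limit -> page + pageSize(要求 offset 恰好对齐页边界)"""
    assert offset % limit == 0
    return offset // limit + 1, limit


assert page_to_offset(3, 50) == (100, 50)
assert offset_to_page(100, 50) == (3, 50)
```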
而对于前端或者说客户端,UX(User Experience) 一般有两种设计:
1. 流式分页(常见于信息流,比如知乎的个人关注页)
2. 电梯式分页(常见于搜索结果,比如百度搜索的结果页)
这两种设计各有优劣,在 UI 上,feeluown 目前也是使用流式分页。比如一个歌单有上千首歌曲,
则用户需要一直往下拉。在接口层面,feeluown 模块提供了 SequentialReader 来帮助实现流式分页。
*class* `feeluown.utils.reader.``SequentialReader`(*g*, *count*, *offset=0*)[[source]](_modules/feeluown/utils/reader.html#SequentialReader)[¶](#feeluown.utils.reader.SequentialReader)
Help you read data sequentially
We only want to launch a web request when we need the resource. Formerly, we used a Python generator to achieve this lazy-read feature. However, we can't extract any read meta info,
such as total count and current offset, from an ordinary generator.
SequentialReader implements the iterator protocol, wraps the generator and stores the reader state.
Note
iterating may be a blocking operation.
**Usage example**:
```
>>> def fetch_songs(page=1, page_size=50):
... return list(range(page * page_size,
... (page + 1) * page_size))
...
>>> def create_songs_g():
... page = 0
... total_page = 2
... page_size = 2
...
... def g():
... nonlocal page, page_size
... while page < total_page:
... for song in fetch_songs(page, page_size):
... yield song
... page += 1
...
... total = total_page * page_size
... return SequentialReader(g(), total)
...
>>> g = create_songs_g()
>>> g.offset, g.count
(0, 4)
>>> next(g), next(g)
(0, 1)
>>> list(g)
[2, 3]
>>> g.offset, g.count
(4, 4)
```
New in version 3.1.
流式分页存在一个问题,必须按照顺序来获取数据。而有些场景,我们希望根据 index 来获取数据。
举个例子,假设一个播放列表有 3000 首歌曲,在随机播放模式下,系统随机选择了 index 为 2500 的歌曲,这时候,我们不能把 index<2500 的歌曲全部拉取下来。
feeluown 提供了 RandomReader 类来实现这个功能。
*class* `feeluown.utils.reader.``RandomReader`(*count*, *read_func*, *max_per_read*)[[source]](_modules/feeluown/utils/reader.html#RandomReader)[¶](#feeluown.utils.reader.RandomReader)
`__init__`(*count*, *read_func*, *max_per_read*)[[source]](_modules/feeluown/utils/reader.html#RandomReader.__init__)[¶](#feeluown.utils.reader.RandomReader.__init__)
random reader constructor
| Parameters: | * **count** (*int*) – total number of objects
* **read_func** (*function*) – func(start: int, end: int) -> list
* **max_per_read** (*int*) – max count per read, it must be bigger than 0
|
`read`(*index*)[[source]](_modules/feeluown/utils/reader.html#RandomReader.read)[¶](#feeluown.utils.reader.RandomReader.read)
read object by index
If the object has not been read yet, this method may trigger an IO operation.
| Raises: | [**ReadFailed**](index.html#feeluown.excs.ReadFailed) – when the IO operation fails |
`readall`()[[source]](_modules/feeluown/utils/reader.html#RandomReader.readall)[¶](#feeluown.utils.reader.RandomReader.readall)
read all objects
| Return list: | list of objects |
| Raises: | [**ReadFailed**](index.html#feeluown.excs.ReadFailed) – |
*class* `feeluown.utils.reader.``RandomSequentialReader`(*count*, *read_func*, *max_per_read=100*)[[source]](_modules/feeluown/utils/reader.html#RandomSequentialReader)[¶](#feeluown.utils.reader.RandomSequentialReader)
random reader which supports sequential read
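按 index 随机读的思路可以草拟如下(假设性的简化实现,按块拉取并缓存,并非 feeluown 的真实代码):

```python
class SimpleRandomReader:
    """按块拉取并缓存的随机读草图"""

    def __init__(self, count, read_func, max_per_read):
        # read_func(start, end) -> list,约定区间为 [start, end)
        assert max_per_read > 0
        self._count = count
        self._read_func = read_func
        self._max_per_read = max_per_read
        self._cache = {}  # index -> object

    def read(self, index):
        if index not in self._cache:
            # 只请求包含 index 的那一块,而不是从头拉取
            start = index - index % self._max_per_read
            end = min(start + self._max_per_read, self._count)
            for i, obj in enumerate(self._read_func(start, end)):
                self._cache[start + i] = obj
        return self._cache[index]


# 播放列表有 3000 首歌,随机读 index=2500 只触发一次对 [2500, 2600) 的读取
reader = SimpleRandomReader(3000, lambda s, e: list(range(s, e)), 100)
song = reader.read(2500)
```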
#### 模型实例化[¶](#id10)
模型实例化有三种常见方法,它们分别适用于不同场景。
第一种:当我们知道资源的准确信息时,可以通过构造函数来创建一个实例:
```
song = SongModel(identifier=123,
title='Love Story',
url='http://xxx.mp3',
duration=1000.12)
```
资源提供方通常会使用这种方法来创建一个实例,因为资源提供方拥有准确且全面的信息。
第二种:我们也可以通过展示字段和 `identifier` 来创建一个资源实例
*class* `feeluown.models.``BaseModel`[[source]](_modules/feeluown/models/models.html#BaseModel)
Base model for music resource
*classmethod* `create_by_display`(*identifier*, ***kwargs*)[¶](#feeluown.models.BaseModel.create_by_display)
create model instance with identifier and display fields
以上面的歌曲模型为例,我们可以这样来创建一个歌曲模型实例:
```
identifier = 1
title = 'in the end'
artists_name = 'lp'  # linkin park
album_name = 'unknown'
# 如果不知道 duration,创建时可以忽略
song = SongModel.create_by_display(identifier,
                                   title=title,
                                   artists_name=artists_name,
                                   album_name=album_name)
assert song.title_display == 'in the end'
```
这时,我们并不需要知道歌曲特别准确的信息,只需要保证 identifier 准确即可,
类似歌曲标题,我们可以不用在乎大小写;也不需要在意歌手名的全名。接着,
我们可以直接访问歌曲真实的标题:
```
print(song.title) # 可能会触发网络请求
```
**在获取到真实的标题之后,我们再次访问标题的展示值时,展示值会变成和真实值一样。**
第三种:通过 `Model.get` 方法来创建(获取)一个资源实例。
资源提供方在实现自己的资源模型时,可以在模型的元信息中声明自己是否支持 get 方法,
如果支持,则实现 get 方法,get 方法返回的资源实例的字段应该 **尽可能**
全部初始化,访问它的任何字段都应该 **尽可能** 不触发网络请求。
*class* `feeluown.models.``BaseModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#BaseModel)
Base model for music resource
*class* `Meta`
`allow_get`[¶](#feeluown.models.BaseModel.Meta.allow_get)
是否可以通过 get 来获取一个实例
*classmethod* `get`(*identifier*)[[source]](_modules/feeluown/models/models.html#BaseModel.get)[¶](#feeluown.models.BaseModel.get)
get model instance by identifier
资源方应该尽可能实现 get 方法。
#### 实例生命周期[¶](#model-stage)
根据模型字段初始化的状态,我们定义了实例的生命周期,
上述三种实例化方法创建的实例处于生命周期的不同阶段(*stage*)。
一个实例的生命周期由三个阶段组成: display, inited, gotten。
通过构造函数创建的实例所处的阶段是 inited,
通过 create_by_display 方法创建的实例所处阶段是 display,
通过 get 方法获取的实例处于 gotten 阶段。
```
+---------+
| display | ---
+---------+    \     +--------+
                ---> | gotten |
+---------+    /     +--------+
| inited  | ---
+---------+
```
当实例处于 display 阶段时,它所有的字段可能都没有初始化,
当实例处于 inited 阶段时,它的某些字段可能还没有被初始化,
一般情况下,没初始化的字段的值是 None。
当我们访问模型实例字段的时候,实例可能会进行状态切换,详情可以参考 [访问模型字段](#visit-model-fields) 。
#### 预定义的模型[¶](#id12)
*class* `feeluown.models.``BaseModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#BaseModel)
Base model for music resource
*class* `Meta`
`model_type` *= ModelType.dummy*
`allow_get` *= True*
Model should implement get method as far as possible
`allow_batch` *= False*[¶](#feeluown.models.BaseModel.Meta.allow_batch)
`fields` *=*
| name | type | desc |
| --- | --- | --- |
| identifier | `str` | model instance identifier |
*class* `feeluown.models.``SongModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#SongModel)[¶](#feeluown.models.SongModel)
*class* `Meta`[¶](#feeluown.models.SongModel.Meta)
`model_type` *= ModelType.song*[¶](#feeluown.models.SongModel.Meta.model_type)
`fields` *=*[¶](#feeluown.models.SongModel.Meta.fields)
| name | type | desc |
| --- | --- | --- |
| album | [`AlbumModel`](#feeluown.models.AlbumModel) | |
| artists | [`ArtistModel`](#feeluown.models.ArtistModel) | |
| comments | NOT DEFINED | |
| duration | `float` | song duration (unit: milliseconds) |
| lyric | [`LyricModel`](#feeluown.models.LyricModel) | |
| mv | [`MvModel`](#feeluown.models.MvModel) | |
| title | `str` | title |
| url | `str` | song url (http url or local filepath) |
`fields_display` *= [title, artists_name, album_name, duration_ms]*[¶](#feeluown.models.SongModel.Meta.fields_display)
`artists_name`[¶](#feeluown.models.SongModel.artists_name)
`album_name`[¶](#feeluown.models.SongModel.album_name)
`duration_ms`[¶](#feeluown.models.SongModel.duration_ms)
`filename`[¶](#feeluown.models.SongModel.filename)
*class* `feeluown.models.``ArtistModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#ArtistModel)[¶](#feeluown.models.ArtistModel)
Artist Model
*class* `Meta`[¶](#feeluown.models.ArtistModel.Meta)
`model_type` *= ModelType.artist*[¶](#feeluown.models.ArtistModel.Meta.model_type)
`fields` *=*[¶](#feeluown.models.ArtistModel.Meta.fields)
| name | type | desc |
| --- | --- | --- |
| albums | `list` | list of [`AlbumModel`](#feeluown.models.AlbumModel) |
| cover | `str` | |
| desc | `str` | |
| name | `str` | |
| songs | `list` | list of [`SongModel`](#feeluown.models.SongModel) |
`allow_create_songs_g` *= False*[¶](#feeluown.models.ArtistModel.Meta.allow_create_songs_g)
是否允许创建歌曲生成器,如果为 True,我们可以调用 `create_song_g`
方法来创建一个歌曲生成器。
一个歌手可能会有上千首歌曲,这种情况下,一次性返回所有歌曲显然不是很合适。
这时,资源提供方应该考虑实现这个接口,调用方也应该考虑使用。
`fields_display` *= [name]*[¶](#feeluown.models.ArtistModel.Meta.fields_display)
NOT IMPLEMENTED
`create_songs_g`()[[source]](_modules/feeluown/models/models.html#ArtistModel.create_songs_g)[¶](#feeluown.models.ArtistModel.create_songs_g)
create songs generator(alpha)
*class* `feeluown.models.``AlbumType`[[source]](_modules/feeluown/models/base.html#AlbumType)[¶](#feeluown.models.AlbumType)
Album type enumeration
中文解释:
```
Single 和 EP 会有一些交集,在展示时,会在一起展示,比如 Singles & EPs。
Compilation 和 Retrospective 也会有交集,展示时,也通常放在一起,统称“合辑”。
```
References:
1. <https://www.zhihu.com/question/22888388/answer/33255107>
2. <https://zh.wikipedia.org/wiki/%E5%90%88%E8%BC%AF>
`standard` *= 'standard'*[¶](#feeluown.models.AlbumType.standard)
`single` *= 'single'*[¶](#feeluown.models.AlbumType.single)
`ep` *= 'EP'*[¶](#feeluown.models.AlbumType.ep)
`live` *= 'live'*[¶](#feeluown.models.AlbumType.live)
`compilation` *= 'compilation'*[¶](#feeluown.models.AlbumType.compilation)
`retrospective` *= 'retrospective'*[¶](#feeluown.models.AlbumType.retrospective)
`guess_by_name` *= <bound method AlbumType.guess_by_name of <enum 'AlbumType'>>*[[source]](_modules/feeluown/models/base.html#AlbumType.guess_by_name)[¶](#feeluown.models.AlbumType.guess_by_name)
*class* `feeluown.models.``AlbumModel`(**args*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#AlbumModel)[¶](#feeluown.models.AlbumModel)
*class* `Meta`[¶](#feeluown.models.AlbumModel.Meta)
`model_type` *= ModelType.album*[¶](#feeluown.models.AlbumModel.Meta.model_type)
`fields` *=*[¶](#feeluown.models.AlbumModel.Meta.fields)
| name | type | desc |
| --- | --- | --- |
| artists | `list` | list of [`ArtistModel`](#feeluown.models.ArtistModel) |
| cover | `str` | |
| desc | `str` | |
| name | `str` | |
| songs | `list` | list of [`SongModel`](#feeluown.models.SongModel) |
| type | [`AlbumType`](#feeluown.models.AlbumType) | |
`artists_name`[¶](#feeluown.models.AlbumModel.artists_name)
*class* `feeluown.models.``LyricModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#LyricModel)[¶](#feeluown.models.LyricModel)
Lyric Model
| Parameters: | * **song** ([*SongModel*](index.html#feeluown.models.SongModel)) – song which lyric belongs to
* **content** (*str*) – lyric content
* **trans_content** (*str*) – translated lyric content
|
*class* `Meta`[¶](#feeluown.models.LyricModel.Meta)
`model_type` *= ModelType.lyric*[¶](#feeluown.models.LyricModel.Meta.model_type)
`fields` *=*[¶](#feeluown.models.LyricModel.Meta.fields)
| name | type | desc |
| --- | --- | --- |
| content | `str` | lyric text |
| trans_content | `str` | translated lyric text |
| song | [`SongModel`](#feeluown.models.SongModel) | the related song |
*class* `feeluown.models.``MvModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#MvModel)[¶](#feeluown.models.MvModel)
*class* `Meta`[¶](#feeluown.models.MvModel.Meta)
`fields` *=*[¶](#feeluown.models.MvModel.Meta.fields)
| name | type | desc |
| --- | --- | --- |
| artist | [`ArtistModel`](#feeluown.models.ArtistModel) | |
| cover | `str` | |
| desc | `str` | |
| media | `Media` | |
| name | `str` | |
*class* `feeluown.models.``PlaylistModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#PlaylistModel)[¶](#feeluown.models.PlaylistModel)
*class* `Meta`[¶](#feeluown.models.PlaylistModel.Meta)
`model_type` *= ModelType.playlist*[¶](#feeluown.models.PlaylistModel.Meta.model_type)
`fields` *=*[¶](#feeluown.models.PlaylistModel.Meta.fields)
| name | type | desc |
| --- | --- | --- |
| cover | `str` | playlist cover url |
| desc | `str` | |
| name | `str` | |
| songs | `list` | |
`allow_create_songs_g` *= False*[¶](#feeluown.models.PlaylistModel.Meta.allow_create_songs_g)
`add`(*song_id*)[[source]](_modules/feeluown/models/models.html#PlaylistModel.add)[¶](#feeluown.models.PlaylistModel.add)
add song to playlist, return True if it succeeds.
If the song is already in the playlist, return True.
`remove`(*song_id*)[[source]](_modules/feeluown/models/models.html#PlaylistModel.remove)[¶](#feeluown.models.PlaylistModel.remove)
remove song from playlist, return True if it succeeds.
If the song is not in the playlist, return True.
*class* `feeluown.models.``SearchModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#SearchModel)[¶](#feeluown.models.SearchModel)
Search Model
TODO: support album and artist
*class* `Meta`[¶](#feeluown.models.SearchModel.Meta)
`fields` *=*[¶](#feeluown.models.SearchModel.Meta.fields)
| name | type | desc |
| --- | --- | --- |
| q | `str` | search query string |
| songs | `list` | |
*class* `feeluown.models.``UserModel`(*obj=None*, ***kwargs*)[[source]](_modules/feeluown/models/models.html#UserModel)[¶](#feeluown.models.UserModel)
User Model
| Parameters: | * **name** – user name
* **playlists** – playlists created by user
* **fav_playlists** – playlists collected by user
* **fav_songs** – songs collected by user
* **fav_albums** – albums collected by user
* **fav_artists** – artists collected by user
|
*class* `Meta`[¶](#feeluown.models.UserModel.Meta)
`model_type` *= ModelType.user*[¶](#feeluown.models.UserModel.Meta.model_type)
`fields` *=*[¶](#feeluown.models.UserModel.Meta.fields)
| name | type | desc |
| --- | --- | --- |
| name | `str` | |
| fav_albums | `list` | list of [`AlbumModel`](#feeluown.models.AlbumModel) |
| fav_artists | `list` | |
| fav_playlists | `list` | playlists collected by user |
| fav_songs | `list` | |
| playlists | `list` | playlists created by user |
`allow_fav_songs_add` *= False*[¶](#feeluown.models.UserModel.Meta.allow_fav_songs_add)
`allow_fav_songs_remove` *= False*[¶](#feeluown.models.UserModel.Meta.allow_fav_songs_remove)
`allow_fav_playlists_add` *= False*[¶](#feeluown.models.UserModel.Meta.allow_fav_playlists_add)
`allow_fav_playlists_remove` *= False*[¶](#feeluown.models.UserModel.Meta.allow_fav_playlists_remove)
`allow_fav_albums_add` *= False*[¶](#feeluown.models.UserModel.Meta.allow_fav_albums_add)
`allow_fav_albums_remove` *= False*[¶](#feeluown.models.UserModel.Meta.allow_fav_albums_remove)
`allow_fav_artists_add` *= False*[¶](#feeluown.models.UserModel.Meta.allow_fav_artists_add)
`allow_fav_artists_remove` *= False*[¶](#feeluown.models.UserModel.Meta.allow_fav_artists_remove)
`add_to_fav_songs`(*song_id*)[[source]](_modules/feeluown/models/models.html#UserModel.add_to_fav_songs)[¶](#feeluown.models.UserModel.add_to_fav_songs)
Add a song to favorite songs; return True on success.
| Parameters: | **song_id** – song identifier |
| Returns: | True if success else False |
| Return type: | boolean |
`remove_from_fav_songs`(*song_id*)[[source]](_modules/feeluown/models/models.html#UserModel.remove_from_fav_songs)[¶](#feeluown.models.UserModel.remove_from_fav_songs)
`add_to_fav_playlists`(*playlist_id*)[[source]](_modules/feeluown/models/models.html#UserModel.add_to_fav_playlists)[¶](#feeluown.models.UserModel.add_to_fav_playlists)
`remove_from_fav_playlists`(*playlist_id*)[[source]](_modules/feeluown/models/models.html#UserModel.remove_from_fav_playlists)[¶](#feeluown.models.UserModel.remove_from_fav_playlists)
`add_to_fav_albums`(*album_id*)[[source]](_modules/feeluown/models/models.html#UserModel.add_to_fav_albums)[¶](#feeluown.models.UserModel.add_to_fav_albums)
`remove_from_fav_albums`(*album_id*)[[source]](_modules/feeluown/models/models.html#UserModel.remove_from_fav_albums)[¶](#feeluown.models.UserModel.remove_from_fav_albums)
`add_to_fav_artists`(*aritst_id*)[[source]](_modules/feeluown/models/models.html#UserModel.add_to_fav_artists)[¶](#feeluown.models.UserModel.add_to_fav_artists)
`remove_from_fav_artists`(*artist_id*)[[source]](_modules/feeluown/models/models.html#UserModel.remove_from_fav_artists)[¶](#feeluown.models.UserModel.remove_from_fav_artists)
### Resource Files[¶](#media)
In feeluown, [media](index.html#term-media) stands for a media entity resource; we call a media object a resource file.
The song and MV resource models described above can each map to multiple resource files:
a song may have several audio files or several playback URLs, and these audio files may differ in quality (high/medium/low)
and in format (mp3, flac, wav).
In feeluown, we define three media resource types: audio, video, and image.
*class* `feeluown.media.``MediaType`[[source]](_modules/feeluown/media.html#MediaType)[¶](#feeluown.media.MediaType)
`audio` *= 'audio'*[¶](#feeluown.media.MediaType.audio)
`video` *= 'video'*[¶](#feeluown.media.MediaType.video)
`image` *= 'image'*[¶](#feeluown.media.MediaType.image)
Every resource has a particular quality. For audio we usually judge quality by bitrate; for video,
by resolution; for images, no standard has been set yet.
In feeluown, we *define* an audio file with a **bitrate** of 320kbps as `hq` (high quality),
above 320kbps as `shq` (super high quality, usually lossless), around 200kbps as `sq` (standard quality), and below 200kbps as `lq` (low quality).
*class* `feeluown.media.Quality.``Audio`[¶](#feeluown.media.Quality.Audio)
An enumeration.
`hq` *= 'hq'*[¶](#feeluown.media.Quality.Audio.hq)
`lq` *= 'lq'*[¶](#feeluown.media.Quality.Audio.lq)
`shq` *= 'shq'*[¶](#feeluown.media.Quality.Audio.shq)
`sq` *= 'sq'*[¶](#feeluown.media.Quality.Audio.sq)
For video, file quality is defined by resolution, as follows:
| Resolution | Quality |
| --- | --- |
| 4k | `fhd` (full high definition) |
| 720p ~ 1080p | `hd` (high definition) |
| 480p | `sd` (standard definition) |
| <480p | `ld` (low definition) |
*class* `feeluown.media.Quality.``Video`[¶](#feeluown.media.Quality.Video)
An enumeration.
`fhd` *= 'fhd'*[¶](#feeluown.media.Quality.Video.fhd)
`hd` *= 'hd'*[¶](#feeluown.media.Quality.Video.hd)
`ld` *= 'ld'*[¶](#feeluown.media.Quality.Video.ld)
`sd` *= 'sd'*[¶](#feeluown.media.Quality.Video.sd)
When a resource provider offers a resource in multiple qualities, e.g. a song with several playback URLs, we can make SongModel inherit the `MultiQualityMixin` class and implement the `list_quality`
and `get_media` methods:
*class* `feeluown.media.``MultiQualityMixin`[[source]](_modules/feeluown/media.html#MultiQualityMixin)[¶](#feeluown.media.MultiQualityMixin)
`list_quality`()[[source]](_modules/feeluown/media.html#MultiQualityMixin.list_quality)[¶](#feeluown.media.MultiQualityMixin.list_quality)
list available quality
`select_media`(*policy=None*)[[source]](_modules/feeluown/media.html#MultiQualityMixin.select_media)[¶](#feeluown.media.MultiQualityMixin.select_media)
select a media by quality(and fallback policy)
| Parameters: | **policy** – fallback priority/policy |
`get_media`(*quality*)[[source]](_modules/feeluown/media.html#MultiQualityMixin.get_media)[¶](#feeluown.media.MultiQualityMixin.get_media)
get media by quality
if q is not available, return None.
The `select_media` method takes a policy argument; a policy is a string following certain rules,
parsed by the `SortPolicy` class.
```
>>> policy = '>>>'
>>> media = song.select_media(policy)
>>> if media is None:
...     player.stop()
... else:
...     player.play(media)
```
The SortPolicy class defines six rules; see the documentation of the `rules` variable below.
*class* `feeluown.media.Quality.``SortPolicy`[¶](#feeluown.media.Quality.SortPolicy)
media sort policy
For example, when the quality list is: `[hp, h, s, l]`,
then policy will be interpreted like this:
```
h<<> = h -> hp -> s -> l
h>>> = h -> s -> l -> hp
h><  = h -> s -> hp -> l
h<>  = h -> hp -> s -> l
>>>  = hp -> h -> s -> l
<<<  = l -> s -> h -> hp
```
Code Example:
```
policy = 'hq<>'  # priority: hq shq sq lq
song.select_media(policy)
song.select_media('>>>')  # shq hq sq lq
video_policy = 'sd<>'  # priority: sd hd ld fhd
video.select_media(video_policy)
```
`rules`[¶](#feeluown.media.Quality.SortPolicy.rules)
Policy string rules. Here `rlrl` means right-left-right-left; it corresponds to the rule
`r'(\w+)><'`, where `\w` matches the quality string, `>`
means looking right and `<` means looking left.
For example, the policy `hq><` can be understood as follows: we have a list sorted from high to low,
`[shq, hq, sq, lq]`. Starting at hq, look one position right (sq), then one position left
(shq); repeating the right/left looks yields the priority `hq -> sq -> shq -> lq`.
```
rules = (
('rlrl', r'(\w+)><'),
('lrlr', r'(\w+)<>'),
('llr', r'(\w+)<<>'),
('rrl', r'(\w+)>><'),
('rrr', r'(\w+)?>>>'),
('lll', r'(\w+)?<<<'),
)
```
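The zigzag behavior described above can be reimplemented in a few lines. This is a hypothetical sketch, not feeluown's actual `SortPolicy` code, covering the `><`, `<>`, `>>>` and `<<<` policy shapes against a quality list sorted from high to low:

```python
import re

def resolve_policy(qualities, policy):
    """Resolve a policy string to a quality preference order.

    ``qualities`` must be sorted from high to low, e.g. ['shq', 'hq', 'sq', 'lq'].
    """
    if policy == '>>>':              # best first
        return list(qualities)
    if policy == '<<<':              # worst first
        return list(qualities)[::-1]
    m = re.fullmatch(r'(\w+)(><|<>)', policy)
    if m is None:
        raise ValueError('unsupported policy: %s' % policy)
    anchor, dirs = m.group(1), m.group(2)
    i = qualities.index(anchor)
    order = [qualities[i]]
    left, right = i - 1, i + 1
    go_right = dirs == '><'          # '>' first means look right first
    while left >= 0 or right < len(qualities):
        if go_right and right < len(qualities):
            order.append(qualities[right])
            right += 1
        elif not go_right and left >= 0:
            order.append(qualities[left])
            left -= 1
        go_right = not go_right      # alternate sides on every step
    return order

print(resolve_policy(['shq', 'hq', 'sq', 'lq'], 'hq><'))  # ['hq', 'sq', 'shq', 'lq']
```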
A media object carries the resource file's metadata. For audio files this includes bitrate and format
(new attributes such as size may be added later as needed); the metadata is stored in `media.metadata`,
an instance of `AudioMeta`. For video files, metadata is an instance of `VideoMeta`
(not implemented yet).
*class* `feeluown.media.``AudioProps`(*bitrate=None*, *format=None*)[[source]](_modules/feeluown/media.html#AudioProps)[¶](#feeluown.media.AudioProps)
### Usage Example[¶](#media-assets-management-usage)
Media Resource Management v2[¶](#v2)
---
The code for media resource management mainly lives in the feeluown.library package.
### Classic Topics[¶](#id1)
#### Multiple Audio Qualities[¶](#id2)
The fuo Protocol[¶](#fuo)
---
The **fuo protocol** is the control protocol of the feeluown player; its main purpose is to improve feeluown's extensibility.
The fuo protocol was designed with the following priorities:
1. Good readability
2. Simple requests (easy to send with the netcat tool)
3. Simple parsing code
When feeluown starts with default arguments, it also starts a TCP service listening on port 23333;
through this service we can communicate with the player using the fuo protocol.
Currently (version 3.0), the protocol covers two things:
1. Resource identification: e.g. fuo://netease/songs/289530 identifies a song
2. Command control: play, pause, next, execute code, and other control commands
### Resource Identification[¶](#id1)
feeluown uses design-library to manage the resources of music providers. Every resource plugged into the music library shares one trait:
it has a unique resource identifier, which we call a **fuo uri**.
Most fuo uris consist of several parts: scheme, provider, type, and identifier.
The scheme is always fuo.
```
fuo://{res_provider}/{res_type}/{res_identifier}
        |            |              |
  provider id   resource type   resource id
```
For example:
* `fuo://local/songs/12` identifies the **song** (resource type) with id **12** provided by **local** (the resource provider).
* `fuo://netease/artists/46490` identifies the **artist** with id **46490** provided by **NetEase Cloud Music**.
Resource providers exist as plugins. Known resource plugins include local music (local), QQ Music (qqmusic),
Xiami Music (xiami), and NetEase Cloud Music (netease); they can be found [here](https://github.com/feeluown/).
Implementing a resource provider is also straightforward; if you are interested, the plugins above can serve as references.
Currently supported resource types:
* User: `fuo://{provider}/users/{identifier}`
* Song: `fuo://{provider}/songs/{identifier}`
* Artist: `fuo://{provider}/artists/{identifier}`
* Album: `fuo://{provider}/album/{identifier}`
* Lyric: `fuo://{provider}/songs/{identifier}/lyric`
Note: besides these resource-identifying uris, other uri formats may be introduced in the future.
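A fuo uri of the shape above can be split with a small helper. This is a hypothetical sketch, not part of feeluown's public API; sub-resources such as `/lyric` simply stay attached to the identifier:

```python
def parse_fuo_uri(uri):
    """Split 'fuo://{provider}/{type}/{identifier}' into its three parts."""
    prefix = 'fuo://'
    if not uri.startswith(prefix):
        raise ValueError('not a fuo uri: %s' % uri)
    # Split at most twice, so 'songs/{id}/lyric' keeps '{id}/lyric' together.
    provider, res_type, identifier = uri[len(prefix):].split('/', 2)
    return provider, res_type, identifier

print(parse_fuo_uri('fuo://netease/songs/289530'))  # ('netease', 'songs', '289530')
```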
#### Resource Identifiers with a Short Description[¶](#id3)
A uri identifies a resource, but it often cannot carry enough information: a client that receives a bare uri
cannot tell at a glance what it refers to. So we can attach a short description to the uri. For a **song**:
```
{song uri}{ }# {title}-{artists_name}-{album_name}-{duration_ms}
         |    |        |
  space or \t '#'  description (all fields optional, but in this order)
```
The formats for user, artist, and album are defined as follows:
```
{user uri} # {name}
{artist uri} # {name}
{album uri} # {album_name}-{artist_name}
```
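A one-line identifier with a description can be split on the `#` separator, with the description fields dash-separated in the documented order. A hypothetical, naive sketch that assumes no field itself contains a dash:

```python
def parse_song_line(line):
    """Parse '{uri} # {title}-{artists_name}-{album_name}-{duration_ms}'.

    All description fields are optional but must keep their order.
    Naive: breaks if a title or artist name contains a dash.
    """
    uri, sep, desc = line.partition('#')
    fields = [f.strip() for f in desc.split('-')] if sep else []
    keys = ('title', 'artists_name', 'album_name', 'duration_ms')
    return uri.strip(), dict(zip(keys, fields))

uri, desc = parse_song_line('fuo://local/songs/1 # Hey Jude-The Beatles-Past Masters-431000')
print(uri)   # fuo://local/songs/1
```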
Note: resource identifiers with richer descriptions may be supported later, perhaps looking like this (still to be researched and discussed):
```
fuo://{provider}/songs/{identifier} <<EOF
title: hello world
artist: fuo://{provider}/artists/{identifier} # {name}
album: fuo://{provider}/artists/{identifier} # {album_name}-{artists_name}
duration: 123.01
url: http://xxx/yyy.mp3
EOF
```
### RPC Service[¶](#rpc)
The fuo protocol defines some semantic commands; the client and fuo (the server) communicate in a request/response fashion.
(Note: the fuo protocol does not depend on TCP as its transport, but currently the feeluown daemon only provides a TCP-based transport.)
When a client connects, the server creates a session and sends a greeting like `OK feeluown 1.0.0\n`;
after receiving it, the client can send requests normally. Within one session, the server responds to requests in order.
Note: much of the design below is not implemented yet; treat it as an RFC draft.
#### Message[¶](#message)
Similar to HTTP messages, fuo messages are how the feeluown server and clients exchange data.
There are two kinds of messages:
* Request: sent by the client to trigger an action on the server
* Response: the server's answer to a request
#### Request[¶](#request)
A request can be a single line of text or multiple lines.
A single-line request ends with `\r\n` or `\n`, with the following structure:
```
{cmd} {param} [{options}] #: {req_options} \r\n
   |      |        |           |        |
command param  cmd options  separator  request options
```
This line is called the `request-line`.
The command is mandatory; whether parameters and command options are required depends on the specific command.
Command options are wrapped in `[]`; options are key-value pairs, e.g. `[artist="linkin park",json=true]`.
A few examples:
```
# Check the server status; only the command is needed
status

# Play a song; one parameter is required
play fuo://local/songs/1
play "晴天 - 周杰伦"

# Search for songs with keyword 晴天, artist 周杰伦, source netease
# The search command requires one parameter; command options are optional
# (Note: this feature is not implemented yet; PRs welcome)
search 晴天 [artist=周杰伦,source=netease]
```
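The request-line grammar above can be parsed with a minimal sketch. This is hypothetical helper code, not feeluown's parser, and it ignores shell-style quoting (so `play "晴天 - 周杰伦"` would need extra handling):

```python
import re

def parse_request_line(line):
    """Parse '{cmd} {params} [{options}] #: {req_options}' (naive: no quoting)."""
    line = line.strip()
    req_options = {}
    if '#:' in line:
        line, _, raw = line.partition('#:')
        for pair in raw.strip().split(','):
            key, _, value = pair.partition('=')
            req_options[key.strip()] = value.strip() or True  # bare key means true
    options = {}
    m = re.search(r'\[(.*)\]\s*$', line)
    if m:
        for pair in m.group(1).split(','):
            key, _, value = pair.partition('=')
            options[key.strip()] = value.strip()
        line = line[:m.start()]
    cmd, *params = line.split()
    return {'cmd': cmd, 'params': params,
            'options': options, 'req_options': req_options}

print(parse_request_line('status'))  # {'cmd': 'status', 'params': [], 'options': {}, 'req_options': {}}
```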
Request options are separated from command options by `#:`. Request options use the same
key=value format as command options. In our design, request options may include (none implemented yet; PRs welcome):
* Output format: `format=json`
* Paged output: `less=true`, which can be shortened to `less`
A few examples:
```
# Search for the keyword 纵贯线; results may be returned in several chunks
# (a request option is set); here the 'less' request option is short for less=true
search 纵贯线 #: less

# Return results in JSON format
search 纵贯线 #: format=json,less
```
A request message can also be multi-line text, following the format below (similar to a bash here document):
```
{cmd} [{options}] #: {req_options} <<EOF
document
EOF
```
In a multi-line command, the document is the command's parameter; such a command accepts exactly one parameter.
For example:
```
# Ask the server to execute code
exec <<EOF
print('hello, feeluown')
player.pause()
EOF

# Roughly equivalent to:
# exec "print('hello, feeluown'); player.pause()"
```
#### Response[¶](#response)
A response has two parts: a header (`status-line`) and a body (`body`), with `\r\n` terminating a response.
**Header**: the first line of the response. It tells the client whether the request succeeded, the body length, and the request options.
The client should split responses based on the length information.
```
# Success
ACK ok {length} #: more,json
{body}

# Failure
ACK oops {length}
{err_type}: {err_msg}

# Example
ACK ok 0
```
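Given the status-line format, a client can split the stream by reading the header and then exactly `length` bytes of body. A sketch of hypothetical client-side code, assuming `length` counts the body bytes before the trailing newline:

```python
import io

def read_response(stream):
    """Read one 'ACK {status} {length}' response from a binary stream."""
    header = stream.readline().decode().split()
    assert header[0] == 'ACK'
    status, length = header[1], int(header[2])
    body = stream.read(length)
    stream.readline()            # consume the terminator after the body
    return status, body

stream = io.BytesIO(b'ACK ok 5\r\nhello\r\n')
print(read_response(stream))  # ('ok', b'hello')
```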
Below are all the currently supported commands:
| Command | Meaning | Example |
| --- | --- | --- |
| status | show the player's current status | `status` |
| play | play a song | `play fuo://xiami/songs/1769099772` |
| pause | pause playback | `pause` |
| resume | resume playback | `resume` |
| toggle | pause/resume | `toggle` |
| stop | stop playback | `stop` |
| next | next song | `next` |
| previous | previous song | `previous` |
| search | search | `search "我家门前有大海 - 张震岳"` |
| show | show resource details | `show fuo://xiami/songs/1769099772` |
| list | show the current playlist | `list` |
| clear | clear the current playlist | `clear` |
| remove | remove a song from the playlist | `remove fuo://xiami/songs/1769099772` |
| add | add a song to the playlist | `add fuo://xiami/songs/1769099772` |
| exec | execute Python code | `exec <<EOF\n print('hello world') \nEOF` |
### Message Service[¶](#id4)
Besides the RPC service, FeelUOwn by default also starts a message service listening on port 23334, through which FeelUOwn pushes real-time messages.
Version `1.0` of the service's protocol is very simple and focuses on human readability; it is only used by the "live lyrics"
feature: after connecting to port 23334, the client sends `sub topic.live_lyric` plus a newline to the server,
and then receives the live lyric text stream.
Since FeelUOwn v3.8.3, a `2.0` protocol is available. In version 2.0,
each message has a fixed structure, which the client can use to split the stream:
```
MSG {topic} {body_length}\r\n
{body}
\r\n
```
After the connection is established, the client can switch protocols with the `set` command. The 2.0 protocol focuses more on machine "readability" and reuses the format design of the RPC service's 2.0 protocol.
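The 2.0 frame structure can be consumed with a reader like the following sketch (hypothetical client-side code, not shipped with feeluown):

```python
import io

def read_message(stream):
    """Read one 'MSG {topic} {body_length}' frame from a binary stream."""
    head = stream.readline().decode().split()
    assert head[0] == 'MSG'
    topic, length = head[1], int(head[2])
    body = stream.read(length)
    stream.read(2)               # consume the trailing \r\n after the body
    return topic, body.decode()

frame = io.BytesIO(b'MSG topic.live_lyric 5\r\nhello\r\n')
print(read_message(frame))  # ('topic.live_lyric', 'hello')
```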
Glossary[¶](#id1)
---
Below are some terms used in the feeluown project; they show up in documentation, code comments,
variable names, and elsewhere.
library: the music library
provider: a resource provider, i.e. a platform offering media resources,
such as YouTube, NetEase Cloud Music, or Xiami.
source: a resource provider identifier.
In feeluown, resources such as songs and artists carry a source attribute marking where the resource comes from.
media: a media entity resource.
Media focuses on the "playable" entity. In feeluown there are two media types,
Audio and Video (Image may be added later); a media object stores the resource's metadata:
for Audio, bitrate, channel count, etc.; for Video, framerate, codec, etc.
Unlike Media, Song emphasizes a song's metadata, such as title and artist.
In theory, part of a Song's information can be parsed out of a Media (Audio).
FeelUOwn: the project name, referring to the project as a whole. From the GitHub repo point of view, it covers all repositories under github.com/feeluown.
feeluown: the (Python) package name and program name. From the GitHub repo point of view,
it refers specifically to the github.com/feeluown/feeluown repository.
fuo: the program name
Research and Reflections[¶](#id1)
---
### How to abstract different resource providers well?[¶](#research-model)
One of feeluown's main goals is to abstract the various resources so that upper layers consume them in a uniform way.
But providers' APIs differ a lot in shape and capability. For example, NetEase Cloud Music offers batch endpoints (fetch song details for a list of song ids) while Xiami and QQ Music have nothing similar, which challenges feeluown's implementation and design.
#### Problem 1: within one platform, some endpoints return complete information while others do not[¶](#id3)
Take NetEase Cloud Music's album-detail endpoint and search endpoint as examples.
The album information returned by its search endpoint looks roughly like this:
```
"album": {
"artist": {
"id": 0,
"alias": [],
"img1v1": 0,
"name": "",
"picUrl": null,
"picId": 0,
},
"id": 2960228,
"name": "\u5218\u5fb7\u534e Unforgettable Concert 2010",
"picId": 2540971374328644,
...
}
```
It has no album cover URL and no album songs, and the artist information is incomplete.
The album-detail endpoint, on the other hand, does include songs, artists, picUrl, and so on.
There are currently two ways to deal with this problem:
1. Define the album returned by the search endpoint as BriefAlbumModel and the one returned by the detail endpoint as AlbumModel
   * pros: clear and explicit; the two are clearly distinguished
   * cons: one more Model means one more concept; upper layers must tell the two apart, so the code gets more complex
2. Define a single AlbumModel and do not require every field to have a value when the model instance is created;
   some field values are fetched automatically later, when they are actually used
   * cons: rather implicit
   * cons: upper layers cannot easily tell which fields already have values and which will be fetched on access
Note
**UPDATE 2019-05-04**: after using the second approach for half a year, we found a headache:
code like `model.xxx` may block the whole thread, and for a GUI program,
blocking the (main) thread is unacceptable. To avoid blocking, we run
`model.xxx` in another thread; such code is currently common in `songs_table_container.py`.
But it looks ugly and performs poorly (see `research/bench_getattr.py`).
Besides, even though we can take extra care while developing feeluown itself so the program does not freeze,
other plugin developers may not fully understand this mechanism and can easily write "bad" code.
To keep the overall code simpler, the second approach is what we currently use. Upper layers assume fields such as identifier/name are available from the start, while fields such as url/artists only get values after a later API call.
(Although this approach has obvious drawbacks, it looks acceptable for now and we have not found anything better. New proposals are welcome.)
#### Problem 2: across platforms, the same endpoint returns complete information on some and incomplete on others[¶](#id4)
On Xiami, fetching a song's details also yields the song's playback URL,
while on NetEase Cloud Music a separate endpoint must be called to get it.
With Xiami's API, getting all of an artist's information requires calling several of its endpoints:
the artist-detail endpoint, the artist-songs endpoint, the artist-albums endpoint, and so on.
#### Problem 3: platform capabilities and the development experience[¶](#id5)
On the other hand, even if every music platform offered the same API, developers writing the plugins
will not necessarily implement every feature at once. Then there is still a problem:
plugin A has some feature that plugin B lacks.
So when plugin B lacks the feature, how should the system handle it internally? And how should the problem be presented to the user?
For example, for NetEase Cloud Music, batch song fetching could be implemented like this:
```
NeteaseSongModel.list(song_ids): -> list<SongModel>
```
But we cannot implement the same feature for Xiami and QQ Music. What then?
Currently there are the following options:
```
1. XiamiSongModel.list(song_ids) -> raise NotSupportedError
2. XiamiSongModel.list -> AttributeError  # not very elegant
3. XiamiSongModel.allow_batch -> add a flag field
```
The third option is what we currently use: flag fields such as `allow_get` and `allow_batch`.
Reflections on Various Philosophies[¶](#id1)
---
Since one's understanding of the same problem usually differs a bit at different stages of life,
I attach a timestamp to my thoughts.
### The Unix Philosophy[¶](#unix)
2018-7-15 @cosven:
> 1. Do one thing and do it well
> 2. Compose well with other programs
### EAFP or LBYL[¶](#eafp-or-lbyl)
2020-12-26 @cosven:
In FeelUOwn, when abstracting the capabilities each [provider](index.html#term-provider) offers, there are two approaches:
1. Assume the provider offers every capability we need, and raise an error when it does not
2. Require the provider to declare which capabilities it has, and have the [library](index.html#term-library) check before calling
FeelUOwn mostly chooses approach 2. For example, if FeelUOwn knows a provider lacks feature A,
it can gray out that feature's button in the UI. When a feature barely affects what is shown in the UI,
approach 1 is considered.
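The two styles can be contrasted in a toy example. The names here (`ProviderA`, `allow_batch`, `fetch_songs`) are hypothetical illustrations of the pattern, not feeluown's real interfaces:

```python
class ProviderA:
    allow_batch = True           # declares the capability (approach 2, LBYL)

    def list_songs(self, ids):
        return ['song-%s' % i for i in ids]

class ProviderB:
    allow_batch = False          # capability missing

def fetch_songs(provider, ids):
    # Approach 2 (LBYL): check the declared capability before calling, so the
    # UI can e.g. gray out the feature instead of hitting a runtime error.
    if getattr(provider, 'allow_batch', False):
        return provider.list_songs(ids)
    # Feature unavailable: fall back or disable it in the UI.
    return None

print(fetch_songs(ProviderA(), [1, 2]))  # ['song-1', 'song-2']
print(fetch_songs(ProviderB(), [1, 2]))  # None
```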
Coding Style[¶](#id1)
---
### Comments[¶](#id2)
* Comments should be in English (old Chinese comments should be migrated gradually)
* Docstrings use the [Sphinx docstring format](https://sphinx-rtd-tutorial.readthedocs.io/en/latest/docstrings.html#the-sphinx-docstring-format)
* FIXME, TODO, and HELP
- FIXME: the developer knows this piece of code is inelegant
- HELP: the developer wrote code that works correctly but that they do not fully understand
- TODO: something that can be done later
- the NOTE marker is currently discouraged
### Naming[¶](#id3)
### Testing[¶](#id4)
* Code in the feeluown package should come with corresponding tests
### Error Handling[¶](#id5)
### Logging and Messages[¶](#id6)
### Special Conventions[¶](#id7)
* Functions and classes marked alpha have unsettled designs; external code should depend on them as little as possible
### Classes[¶](#id8)
* UI-related setup for Qt Widget subclasses should live in a `_setup_ui` method
* Signal slot methods should be protected methods
* A class's public methods come first, with protected and private methods at the end,
except for `_setup_ui`
* QWidget subclasses should avoid async methods, since there is currently no good way to unit test them
Contributing[¶](#id1)
---
Hello everyone, we are very glad you opened this document!
Before talking about how to contribute code, let us say a few words about the **main goals** of the FeelUOwn project.
### Project Background and Goals[¶](#id2)
FeelUOwn has been in development since early 2015, more than four years now.
Feature-wise, it covers the basics: local music scanning, song playback, MV playback, and lyrics.
The core modules are also fairly complete: configuration, plugins, the music library, remote control, and so on.
Originally, FeelUOwn addressed the difficulty of listening to online music on Linux. Unlike today's flourishing ecosystem,
back then (=。=) most Linux players only supported local music, and mojibake was common.
The players I knew of that could play online music included deepin-music, the Xiami radio player, and a few others. So
I chose to develop this software myself; for more history see these blog posts: [post 1](http://cosven.me/blogs/57) and [post 2](http://cosven.me/blogs/58).
Today the original problem no longer exists; one of FeelUOwn's main goals is to improve the experience of listening to music,
specifically in the following ways:
1. Let users listen to any music they want
2. Help users discover and enjoy music
Besides that, FeelUOwn strives to be the most programmer-friendly music player, mainly reflected in:
1. Strong extensibility
2. Good engineering practice
3. Respect for Unix conventions
### What contributions can you make?[¶](#id3)
Contributions to FeelUOwn revolve around the goals above.
First, if you feel the player lacks a feature or characteristic, you can open an Issue; good ideas
are welcome too, and we really look forward to your suggestions. If you have a need yourself and the ability to implement it,
we suggest a brief discussion in an Issue or the chat group first, followed by the implementation.
Needs and ideas that users raise through Issues or the chat group are collected and
recorded in the [FeelUOwn GitHub project](https://github.com/cosven/FeelUOwn/projects/5), which holds TODOs large and small.
If you are interested in one of the TODOs, you can start developing it;
before starting, it is best to shout in the Telegram group, or tell the admin (currently @cosven),
so the TODO gets moved to the "In progress" column and duplicate work is avoided.
Beyond features, we especially welcome improvements to the project's code and engineering.
FeelUOwn currently has some inelegantly designed or poorly performing code. Part of it we can spot ourselves,
and we have marked it FIXME / TODO / DOUBT; other design problems
are not yet clearly identified (for example, how PyQt is used in FeelUOwn). If these problems interest you,
PRs are welcome! On the engineering side, FeelUOwn's documentation, unit tests, packaging, and more can all be improved;
interested friends are welcome to pick whatever they like.
### How to contribute?[¶](#id4)
For small bugfixes, feature development, documentation completion, and so on, just do it and open a PR.
For big features or changes, we suggest writing your idea up and posting it in an Issue first,
syncing and discussing with everyone, and then starting to code.
If you need to change code (including documentation), see the [Local Development Quickstart](index.html#document-dev_quickstart);
for code style, see [Coding Style](index.html#document-coding_style); for decisions related to FeelUOwn's architecture, see the [Architecture](index.html#document-arch) and [API Reference](index.html#document-api) documents.
Finally, it is worth mentioning that we have a developer/user chat group (invite link in the [README](https://github.com/cosven/feeluown)). Feel free to join;
any *relevant* or [well-asked question](https://zh.wikipedia.org/wiki/%E6%8F%90%E9%97%AE%E7%9A%84%E6%99%BA%E6%85%A7) can be discussed there, and any doubts
can be raised in the group too. Thank you for contributing to FeelUOwn!
Machine Learning 101
Aug 19, 2019
Contents
1.1 Supervised Learning
1.2 Unsupervised Learning
1.3 Semi-Supervised Learning
1.4 Key Terms
2.1 Regression Algorithms
2.2 Instance-based Algorithms
2.3 Regularization Algorithms
2.4 Decision Tree Algorithms
2.5 Bayesian Algorithms
2.6 Clustering Algorithms
2.7 Association Rule Learning Algorithms
2.8 Dimensionality Reduction Algorithms
2.9 Ensemble Algorithms
3.1 Bias
3.2 Variance
3.3 Differences
3.4 Mathematical Representation
3.5 Bias-Variance Tradeoff
4.1 Covariance
4.2 Correlation
4.3 Differences
5.1 Mean Absolute Error
5.2 Mean Squared Error
5.3 Log Loss
5.4 Confusion Matrix
5.5 Classification Accuracy
5.6 Precision
5.7 Recall or Sensitivity
5.8 F1 Score
5.9 Receiver operating characteristic (ROC)
5.10 AUC: Area Under the ROC Curve
6.1 Overfitting
6.2 Underfitting
6.3 Example
7.1 Data Splitting
7.2 Validation
7.3 Cost function
7.4 High Bias and High Variance
7.5 Regularization
7.6 Early Stopping
7.7 Hyperparameter Optimization
8.1 Gradient
8.2 Cost Function
8.3 Method
8.4 Algorithm
8.5 Learning Rate
8.6 Convergence
8.7 Types of Gradient Descent
9.1 Basic Models
9.2 Selecting Model
10.1 Ordinary Least Square
11.1 Ordinary Least Square
11.2 Gradient Descent
12.1 Least Squared Residual
13.1 Ordinary Least Square
CHAPTER 1
Introduction
Machine learning is a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to learn (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed. Machine learning is the design and study of software artifacts that use past experience to make future decisions. It is the study of programs that learn from data. The fundamental goal of machine learning is to generalize, or to induce an unknown rule from examples of the rule's application. Machine learning systems are often described as learning from experience either with or without supervision from humans.
The Basic three different learning styles in machine learning algorithms:
1.1 Supervised Learning
A model is prepared through a training process in which it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data. A program predicts an output for an input by learning from pairs of labeled inputs and outputs, the program learns from examples of the right answers.
1. Input data is called training data and has a known label or result such as spam/not-spam or a stock price at a
time.
2. Example problems are classification and regression.
3. Example algorithms include Logistic Regression and the Back Propagation Neural Network.
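As a toy illustration of learning from labeled input/output pairs, the sketch below fits a one-variable linear model y ≈ a·x + b by least squares and uses it to predict an unseen input (plain Python, no ML library; the data is made up):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b from labeled (x, y) training pairs."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training data: inputs with known labels (here y = 2x + 1 exactly).
xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]
a, b = fit_line(xs, ys)
print(a * 10 + b)  # predicts 21.0 for the unseen input x=10
```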
1.2 Unsupervised Learning
A model is prepared by deducing structures present in the input data. This may be to extract general rules. It may be through a mathematical process to systematically reduce redundancy, or it may be to organize data by similarity. A program does not learn from labeled data. Instead, it attempts to discover patterns in the data.
1. Input data is not labeled and does not have a known result.
2. Example problems are clustering, dimensionality reduction and association rule learning.
3. Example algorithms include: the Apriori algorithm and k-Means.
1.3 Semi-Supervised Learning
There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions. Semi-supervised learning problems make use of both supervised and unsupervised data; these problems are located on the spectrum between supervised and unsupervised learning.
1. Input data is a mixture of labeled and unlabeled examples.
2. Example problems are classification and regression.
3. Example algorithms are extensions to other flexible methods that make assumptions about how to model the
unlabeled data.
1.4 Key Terms
1.4.1 Model
A machine learning model can be a mathematical representation of a real-world process. To generate a
machine learning model you will need to provide training data to a machine learning algorithm to learn
from.
1.4.2 Algorithm
Machine Learning algorithm is the hypothesis set that is taken at the beginning before the training starts
with real-world data. When we say Linear Regression algorithm, it means a set of functions that define
similar characteristics as defined by Linear Regression and from those set of functions we will choose one
function that fits the most by the training data.
1.4.3 Training
While training for machine learning, you pass an algorithm with training data. The learning algorithm
finds patterns in the training data such that the input parameters correspond to the target. The output of the
training process is a machine learning model which you can then use to make predictions. This process is
also called “learning”.
1.4.4 Regression
Regression techniques are used when the output is real-valued based on continuous variables. For exam-
ple, any time series data. This technique involves fitting a line.
1.4.5 Classification
In classification, you will need to categorize data into predefined classes. For example, an email can either
be ‘spam’ or ‘not spam’.
1.4.6 Target
The target is the output for the given input variables: the individual classes that the input
variables may be mapped to in a classification problem, or the output value range in a regression
problem. For a training set, the target is the set of training output values.
1.4.7 Feature
Features are individual independent variables that act as the input in your system. Prediction models use
features to make predictions. New features can also be obtained from old features using a method known
as ‘feature engineering’. More simply, you can consider one column of your data set to be one feature.
Sometimes these are also called attributes, and the number of features is called the dimensionality.
1.4.8 Label
Labels are the final output. You can also consider the output classes to be the labels. When data scientists
speak of labeled data, they mean groups of samples that have been tagged to one or more labels.
1.4.9 Overfitting
An important consideration in machine learning is how well the approximation of the target function that
has been trained using training data, generalizes to new data. Generalization works best if the signal or the
sample that is used as the training data has a high signal to noise ratio. If that is not the case, generalization
would be poor and we will not get good predictions. A model is overfitting if it fits the training data too
well and there is a poor generalization of new data.
1.4.10 Regularization
Regularization is the method to estimate a preferred complexity of the machine learning model so that the
model generalizes and the over-fit/under-fit problem is avoided. This is done by adding a penalty on the
different parameters of the model thereby reducing the freedom of the model.
1.4.11 Parameter and Hyper-Parameter
Parameters are configuration variables that can be thought to be internal to the model as they can be
estimated from the training data. Algorithms have mechanisms to optimize parameters. On the other
hand, hyperparameters cannot be estimated from the training data. Hyperparameters of a model are set
and tuned depending on a combination of some heuristics.
References
1. Machine Learning Types
CHAPTER 2
Learning Models
The process of training an ML model involves providing an ML algorithm (that is, the learning algorithm) with training data to learn from. The term ML model refers to the model artifact that is created by the training process.
The training data must contain the correct answer, which is known as a target or target attribute. The learning algorithm finds patterns in the training data that map the input data attributes to the target (the answer that you want to predict),
and it outputs an ML model that captures these patterns. You can then use the ML model to get predictions on new data for which you do not know the target.
Organizing machine learning algorithms is useful because it forces you to think about the roles of the input data and the model preparation process and select one that is the most appropriate for your problem in order to get the best result. Algorithms are often grouped by similarity in terms of their function.
2.1 Regression Algorithms
Regression is concerned with modeling the relationship between variables that is iteratively refined using a measure of error in the predictions made by the model. Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use regression to refer to the class of problem and the class of algorithm. Really, regression is a process.
1. Ordinary Least Squares Regression (OLSR)
2. Linear Regression
3. Logistic Regression
4. Stepwise Regression
5. Multivariate Adaptive Regression Splines (MARS)
6. Locally Estimated Scatterplot Smoothing (LOESS)
2.2 Instance-based Algorithms
Instance-based learning model is a decision problem with instances or examples of training data that are deemed important or required to the model. Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on the representation of the stored instances and similarity measures used between instances.
1. k-Nearest Neighbor (kNN)
2. Learning Vector Quantization (LVQ)
3. Self-Organizing Map (SOM)
4. Locally Weighted Learning (LWL)
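A minimal nearest-neighbor classifier shows the idea of comparing new data against the stored database of examples (a plain-Python sketch with made-up data):

```python
def nearest_neighbor(train, query):
    """1-NN: return the label of the stored point closest to the query.

    ``train`` is a list of (point, label) pairs; points are equal-length tuples.
    """
    def dist2(p, q):
        # Squared Euclidean distance as the similarity measure.
        return sum((a - b) ** 2 for a, b in zip(p, q))
    point, label = min(train, key=lambda pl: dist2(pl[0], query))
    return label

train = [((0, 0), 'blue'), ((0, 1), 'blue'), ((5, 5), 'red'), ((6, 5), 'red')]
print(nearest_neighbor(train, (1, 1)))  # blue
```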
2.3 Regularization Algorithms
An extension made to another method (typically regression methods) that penalizes models based on their complexity, favoring simpler models that are also better at generalizing.
1. Ridge Regression
2. Least Absolute Shrinkage and Selection Operator (LASSO)
3. Elastic Net
4. Least-Angle Regression (LARS)
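The first two algorithms on this list differ only in the penalty term added to the least-squares objective: ridge regression uses an L2 penalty and LASSO an L1 penalty (standard formulations, shown here for reference):

```latex
J_{\text{ridge}}(\beta) = \sum_{i=1}^{n} \left(y_i - x_i^{\top}\beta\right)^2 + \lambda \sum_{j=1}^{p} \beta_j^2
\qquad
J_{\text{lasso}}(\beta) = \sum_{i=1}^{n} \left(y_i - x_i^{\top}\beta\right)^2 + \lambda \sum_{j=1}^{p} |\beta_j|
```

Larger values of the hyperparameter λ shrink the coefficients more, trading variance for bias.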
2.4 Decision Tree Algorithms
Decision tree methods construct a model of decisions made based on actual values of attributes in the data. Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate and a big favorite in machine learning.
1. Classification and Regression Tree (CART)
2. Iterative Dichotomiser 3 (ID3)
3. C4.5 and C5.0 (different versions of a powerful approach)
4. Chi-squared Automatic Interaction Detection (CHAID)
5. Decision Stump
6. M5
7. Conditional Decision Trees
2.5 Bayesian Algorithms
Bayesian methods are those that explicitly apply Bayes' Theorem for problems such as classification and regression. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. Even if these features depend on each other or upon the existence of the other features, all of these properties independently contribute to the probability of the response variable belonging to a particular value.
1. Naive Bayes
2. Gaussian Naive Bayes
3. Multinomial Naive Bayes
4. Averaged One-Dependence Estimators (AODE)
5. Bayesian Belief Network (BBN)
6. Bayesian Network (BN)
2.6 Clustering Algorithms
Clustering, like regression, describes the class of problem and the class of methods. Clustering methods are typically organized by the modeling approaches such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize the data into groups of maximum commonality.
1. k-Means
2. k-Medians
3. Expectation Maximisation (EM)
4. Hierarchical Clustering
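The centroid-based idea behind k-Means can be sketched in one dimension: assign each point to its nearest centroid, recompute each centroid as the mean of its cluster, and repeat (toy data; real implementations handle initialization and convergence more carefully):

```python
def kmeans_1d(points, centroids, iterations=10):
    """Tiny 1-D k-Means: returns the final centroid positions."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            # Assignment step: attach each point to its nearest centroid.
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

print(kmeans_1d([1, 2, 3, 10, 11, 12], centroids=[0.0, 5.0]))  # [2.0, 11.0]
```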
Machine Learning 101 2.7 Association Rule Learning Algorithms Association rule learning methods extract rules that best explain observed relationships between variables in data.
These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organization.
1. Apriori algorithm
2. Eclat algorithm
2.8 Dimensionality Reduction Algorithms
Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarize or describe data using less information. This can be useful to visualize high-dimensional data or to simplify data which can then be used in a supervised learning method. Many of these methods can be adapted for use in classification and regression.
1. Principal Component Analysis (PCA)
2. Principal Component Regression (PCR)
3. Partial Least Squares Regression (PLSR)
4. Sammon Mapping
5. Multidimensional Scaling (MDS)
6. Projection Pursuit
7. Linear Discriminant Analysis (LDA)
8. Mixture Discriminant Analysis (MDA)
9. Quadratic Discriminant Analysis (QDA)
10. Flexible Discriminant Analysis (FDA)
2.9 Ensemble Algorithms
Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction. Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and as such is very popular.
1. Boosting
2. Bootstrapped Aggregation (Bagging)
3. AdaBoost
4. Stacked Generalization (blending)
5. Gradient Boosting Machines (GBM)
6. Gradient Boosted Regression Trees (GBRT)
7. Random Forest
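A minimal sketch of bootstrapped aggregation (bagging): train weak decision stumps on bootstrap resamples and combine them by majority vote (the dataset and the stump rule are made up for illustration):

```python
import random

def bootstrap_sample(data, rng):
    """Resample the dataset with replacement (same size as the original)."""
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    """Weak learner: threshold x at the midpoint between the class means."""
    xs0 = [x for x, y in sample if y == 0]
    xs1 = [x for x, y in sample if y == 1]
    if not xs0 or not xs1:                    # degenerate one-class resample
        label = 0 if len(xs0) >= len(xs1) else 1
        return lambda x: label
    t = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
    return lambda x: 1 if x > t else 0

def bagging_predict(stumps, x):
    """Combine the weak learners by majority vote."""
    votes = sum(s(x) for s in stumps)
    return 1 if 2 * votes >= len(stumps) else 0

rng = random.Random(42)
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.95, 1)]
stumps = [train_stump(bootstrap_sample(data, rng)) for _ in range(25)]
```

Even though any single stump can be fooled by an unlucky resample, the majority vote over 25 of them classifies both regions of this toy dataset correctly.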
References
1. Machine Learning Algorithms
CHAPTER 3
Bias and Variance

3.1 Bias

Bias is the difference between the average prediction of our model and the correct value which we are trying to predict.
Model with high bias pays very little attention to the training data and oversimplifies the model. It always leads to high error on training and test data.
3.2 Variance

Variance is the variability of model prediction for a given data point, which tells us the spread of our predictions. A model with high variance pays a lot of attention to training data and does not generalize to data it hasn't seen before. As a result, such models perform very well on training data but have high error rates on test data.
3.3 Differences
1. Bias is the algorithm's tendency to consistently learn the wrong thing by not taking into account all the information in the data (underfitting). Variance is the algorithm's tendency to learn random things irrespective of the real signal by fitting highly flexible models that follow the error/noise in the data too closely (overfitting).

2. Bias is also used to denote by how much the average accuracy of the algorithm changes as the input/training data changes. Similarly, variance is used to denote how sensitive the algorithm is to the chosen input data. In everyday language, bias is prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair, while variance is the state or fact of disagreeing or quarreling.
3.4 Mathematical Representation

Let the variable we are trying to predict be 𝑦 and the other covariates be 𝑥. We assume there is a relationship between the two such that

𝑦 = 𝑓(𝑥) + 𝑒

The expected squared error at a point 𝑥 is:

𝐸𝑟𝑟𝑜𝑟(𝑥) = 𝐸[(𝑦 − 𝑓ˆ(𝑥))²]

The 𝐸𝑟𝑟𝑜𝑟(𝑥) can be further decomposed as:

𝐸𝑟𝑟𝑜𝑟(𝑥) = Bias² + Variance + Irreducible Error = (𝐸[𝑓ˆ(𝑥)] − 𝑓(𝑥))² + 𝐸[(𝑓ˆ(𝑥) − 𝐸[𝑓ˆ(𝑥)])²] + 𝜎𝑒²

where

• 𝑒 is the error term; it is normally distributed with a mean of 0.
• 𝑓 = target function
• 𝑓ˆ = estimate of the target function

3.4.1 Irreducible error

𝐼𝑟𝑟𝑒𝑑𝑢𝑐𝑖𝑏𝑙𝑒𝐸𝑟𝑟𝑜𝑟 = 𝜎𝑒² (3.2)
The above error can't be reduced by creating good models. It is a measure of the amount of noise in our data. Here it is important to understand that no matter how good we make our model, our data will have a certain amount of noise, or irreducible error, that cannot be removed.
3.4.2 Bias error
𝐵𝑖𝑎𝑠𝐸𝑟𝑟𝑜𝑟 = 𝐸[𝑓ˆ(𝑥)] − 𝑓(𝑥) (3.3)
The above equation is a little confusing because we can learn only one estimate 𝑓ˆ of the target function using the data we sampled, yet the equation takes an expectation over 𝑓ˆ. Assume that we sampled data 𝑛 times and built a model for each sample. We can't expect the same data every time due to the influence of the irreducible error in the target function. As the data changes every time, our estimate of the target function also changes every time.
Note: Bias will be zero if, 𝐸[𝑓ˆ(𝑥)] = 𝑓 (𝑥). This is not possible if we make assumptions to learn the target function.
Most of the parametric methods make assumption(s) to learn a target function. Methods which make more assumptions to learn a target function are high-bias methods. Similarly, methods which make very few assumptions to learn a target function are low-bias methods.
• Examples of low-bias machine learning algorithms: Decision Trees, k-Nearest Neighbors and Support Vector Machines.
• Examples of high-bias machine learning algorithms: Linear Regression, Linear Discriminant Analysis and Logistic Regression.
3.4.3 Variance error
𝑉𝑎𝑟𝑖𝑎𝑛𝑐𝑒𝐸𝑟𝑟𝑜𝑟 = 𝐸[(𝑓ˆ(𝑥) − 𝐸[𝑓ˆ(𝑥)])²] (3.4)
As mentioned before, for different datasets we will get different estimates of the target function. The variance error measures how much our estimate 𝑓ˆ would differ if new training data were used. For example, let the target function be given as 𝑓 = 𝛽0 + 𝛽1 𝑋; if we use a regression method to learn the given target function and assume the same functional form for the estimate, then the number of possible estimated functions is limited. Even though we get a different 𝑓ˆ for different training data, our search space is limited by the functional form.
Note: If we instead use the K-Nearest Neighbor algorithm, KNN searches for the estimate of the target function in a much larger space, since it assumes no functional form.
• If we sample different training data for the same variables and the estimated function suggests small changes from the previous 𝑓ˆ, then our model is a low-variance one.

• If we sample different training data for the same variables and the estimated function suggests large changes from the previous 𝑓ˆ, then our model is a high-variance one.
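The bias and variance terms defined above can be estimated empirically by refitting a model on many independently sampled datasets. The sketch below does this for a deliberately high-bias constant predictor; the target function, noise level, and query point are illustrative choices:

```python
import random

def bias_variance_at(true_f, fit, x0, n_datasets=200, n_points=30, seed=1):
    """Monte-Carlo sketch of the decomposition: refit on many noisy
    datasets and measure bias^2 and variance at a single point x0."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_datasets):
        xs = [rng.random() for _ in range(n_points)]
        ys = [true_f(x) + rng.gauss(0, 1) for x in xs]   # y = f(x) + e
        preds.append(fit(xs, ys)(x0))
    mean_pred = sum(preds) / n_datasets                  # E[f^(x0)]
    bias_sq = (mean_pred - true_f(x0)) ** 2
    variance = sum((p - mean_pred) ** 2 for p in preds) / n_datasets
    return bias_sq, variance

# A deliberately high-bias learner: ignore x and always predict the mean of y.
def fit_constant(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

bias_sq, variance = bias_variance_at(lambda x: 3 * x, fit_constant, x0=0.9)
```

For this constant predictor, the squared bias at x0 = 0.9 is large (around 1.4) while the variance is small (well under 0.1), matching the "oversimplified model" description of high bias.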
3.5 Bias-Variance Tradeoff

If our model is too simple and has very few parameters then it may have high bias and low variance. On the other hand, if our model has a large number of parameters then it is going to have high variance and low bias. So we need to find the right balance without overfitting or underfitting the data. This tradeoff in complexity is why there is a tradeoff between bias and variance: an algorithm can't be more complex and less complex at the same time.
At its root, dealing with bias and variance is really about dealing with over- and under-fitting. Bias is reduced and variance is increased in relation to model complexity. As more and more parameters are added to a model, the complexity of the model rises and variance becomes our primary concern while bias steadily falls. For example, as more polynomial terms are added to a linear regression, the greater the resulting model’s complexity will be. In other words, bias has a negative first-order derivative in response to model complexity while variance has a positive slope.
Understanding bias and variance is critical for understanding the behavior of prediction models, but in general what you really care about is overall error, not the specific decomposition. The sweet spot for any model is the level of complexity at which the increase in bias is equivalent to the reduction in variance:

𝑑Bias/𝑑Complexity = −𝑑Variance/𝑑Complexity

If our model complexity exceeds this sweet spot, we are in effect over-fitting our model; while if our complexity falls short of the sweet spot, we are under-fitting the model. In practice, there is no analytical way to find this location. Instead we must use an accurate measure of prediction error, explore differing levels of model complexity, and then choose the complexity level that minimizes the overall error. A key to this process is the selection of an accurate error measure, as grossly inaccurate measures are often used, which can be deceptive.
References

1. Bias Variance
CHAPTER 4
Covariance and Correlation

Covariance and correlation describe how two variables are related.
• Variables are positively related if they move in the same direction.
• Variables are inversely related if they move in opposite directions.
Both covariance and correlation indicate whether variables are positively or inversely related. Correlation also tells you the degree to which the variables tend to move together.
4.1 Covariance

Covariance measures how two variables move with respect to each other and is an extension of the concept of variance (which tells how a single variable varies). It can take any value from −∞ to +∞.

• The higher this value, the more dependent the relationship. A positive number signifies positive covariance and denotes a direct relationship: an increase in one variable would lead to a corresponding increase in the other variable, provided other conditions remain constant.

• On the other hand, a negative number signifies negative covariance, which denotes an inverse relationship between the two variables. Though covariance is perfect for defining the type of relationship, it is bad for interpreting its magnitude.

COV(𝑥, 𝑦) = Σ𝑖=1..𝑛 (𝑥𝑖 − 𝑥¯)(𝑦𝑖 − 𝑦¯) / (𝑛 − 1)
where,

• x = the independent variable
• y = the dependent variable
• n = number of data points in the sample
• 𝑥¯ = the mean of the independent variable x
• 𝑦¯ = the mean of the dependent variable y

4.2 Correlation

Correlation is another way to determine how two variables are related. In addition to telling you whether variables are positively or inversely related, correlation also tells you the degree to which the variables tend to move together.
Correlation standardizes the measure of interdependence between two variables and, consequently, tells you how closely the two variables move. The correlation measurement, called a correlation coefficient, will always take on a value between −1 and +1.
• If the correlation coefficient is +1, the variables have a perfect positive correlation. This means that if one variable moves a given amount, the second moves proportionally in the same direction. A positive correlation coefficient less than one indicates a less than perfect positive correlation, with the strength of the correlation growing as the number approaches one.

• If the correlation coefficient is 0, no relationship exists between the variables. If one variable moves, you can make no predictions about the movement of the other variable; they are uncorrelated.

• If the correlation coefficient is −1, the variables are perfectly negatively correlated (or inversely correlated) and move in opposition to each other. If one variable increases, the other variable decreases proportionally. A negative correlation coefficient greater than −1 indicates a less than perfect negative correlation, with the strength of the correlation growing as the number approaches −1.
Correlation(𝑥, 𝑦) = COV(𝑥, 𝑦) / (𝜎𝑥 𝜎𝑦)

where,

• 𝜎𝑥 = sample standard deviation of the random variable x
• 𝜎𝑦 = sample standard deviation of the random variable y

4.3 Difference
1. Meaning
• Covariance is an indicator of the extent to which two random variables are dependent on each other. A higher number denotes higher dependency.
• Correlation is an indicator of how strongly these two variables are related, provided other conditions are constant. Its maximum value of +1 denotes a perfectly dependent relationship.
2. Relationship
• Correlation can be deduced from covariance.
• Correlation provides a measure of covariance on a standard scale. It is deduced by dividing the covariance by the product of the standard deviations of the two variables.
3. Value
• The value of covariance lies in the range of −∞ and +∞.
• Correlation is limited to values between the range −1 and +1.
4. Scalability
• Correlation is not affected by a change in scales or multiplication by a constant.
• Covariance, by contrast, is affected by a change in scale.
5. Units
• Covariance has a definite unit as it is deduced by the multiplication of two numbers and their units.
• Correlation is a unitless absolute number between −1 and +1 including decimal values.
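Both quantities can be computed directly from the definitions above (the data is a made-up toy example):

```python
def covariance(xs, ys):
    """Sample covariance, straight from the definition above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def correlation(xs, ys):
    """Pearson correlation: covariance rescaled by both standard deviations."""
    sx = covariance(xs, xs) ** 0.5   # sample standard deviation of x
    sy = covariance(ys, ys) ** 0.5
    return covariance(xs, ys) / (sx * sy)

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]   # ys = 2 * xs: a perfect positive relationship
```

Here the covariance is 5.0, a unit-dependent number that is hard to interpret on its own, while the correlation is exactly 1.0, reflecting the perfect positive relationship on a standard scale.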
References

1. Covariance Correlation
CHAPTER 5
Model Metrics

Evaluating your machine learning algorithm is essential. Your model may give satisfying results when evaluated using one metric, say accuracy_score, but poor results when evaluated against other metrics, such as logarithmic_loss. Most of the time we use classification accuracy to measure the performance of our model; however, it is not enough to truly judge a model. Different performance metrics are used to evaluate different machine learning algorithms. We can use classification performance metrics such as Log-Loss, Accuracy, AUC (Area Under Curve), etc. Other examples of metrics for the evaluation of machine learning algorithms are precision and recall, which can be used for ranking algorithms primarily used by search engines.
The metric that you choose to evaluate your machine learning model is very important. The choice of metric influences how the performance of machine learning algorithms is measured and compared.
5.1 Mean Absolute Error

Mean Absolute Error (MAE) is the average of the absolute differences between the original values and the predicted values. It gives us a measure of how far the predictions were from the actual output. However, it doesn't give us any idea of the direction of the error, i.e. whether we are under-predicting or over-predicting the data.

If 𝑦ˆ𝑖 is the predicted value of the 𝑖-th sample, and 𝑦𝑖 is the corresponding true value, then the mean absolute error (MAE) estimated over 𝑛 samples is defined as:

MAE(𝑦, 𝑦ˆ) = (1/𝑛) Σ𝑖 |𝑦𝑖 − 𝑦ˆ𝑖| (5.1)
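A direct implementation of this definition (the sample values are made up):

```python
def mean_absolute_error(y_true, y_pred):
    """MAE: average absolute difference between true and predicted values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, mean_absolute_error([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]) evaluates to 0.5.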
5.2 Mean Squared Error

Mean Squared Error (MSE) is quite similar to Mean Absolute Error, the only difference being that MSE takes the average of the squares of the differences between the original values and the predicted values. The advantage of MSE is that it is easier to compute the gradient, whereas Mean Absolute Error requires complicated linear programming tools to compute the gradient. As we take the square of the error, the effect of larger errors becomes more pronounced than that of smaller errors, hence the model can now focus more on the larger errors.

If 𝑦ˆ𝑖 is the predicted value of the 𝑖-th sample, and 𝑦𝑖 is the corresponding true value, then the mean squared error (MSE) estimated over 𝑛 samples is defined as:

MSE(𝑦, 𝑦ˆ) = (1/𝑛) Σ𝑖 (𝑦𝑖 − 𝑦ˆ𝑖)² (5.2)
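The squared variant differs by a single line (same made-up sample values as above):

```python
def mean_squared_error(y_true, y_pred):
    """MSE: average squared difference between true and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```

On the same data, mean_squared_error([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]) evaluates to 0.375; note how the single error of size 1 dominates the result more than it did under MAE.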
5.3 Log Loss

Log loss, also called logistic regression loss or cross-entropy loss, is defined on probability estimates. It is commonly used in (multinomial) logistic regression and neural networks, as well as in some variants of expectation-maximization, and can be used to evaluate the probability outputs of a model instead of its discrete predictions.
5.3.1 Binary Classification

For binary classification with a true label 𝑦 ∈ {0, 1} and a probability estimate 𝑝 = Pr(𝑦 = 1), the log loss per sample is the negative log-likelihood of the classifier given the true label:
𝐿log (𝑦, 𝑝) = − log Pr(𝑦|𝑝) = −(𝑦 log(𝑝) + (1 − 𝑦) log(1 − 𝑝)) (5.3)
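A sketch of the binary log loss averaged over samples; the probabilities are clipped away from 0 and 1 so the logarithm stays finite (the clipping constant is a common implementation choice, not part of the definition):

```python
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Average binary cross-entropy; eps-clipping keeps log() finite."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(y_true)
```

For a confident, correct classifier, log_loss([1, 0], [0.9, 0.1]) is about 0.105; pushing the same two probabilities toward 0.5 would raise the loss.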
5.3.2 Multiclass Classification

Let the true labels for a set of samples be encoded as a 1-of-K binary indicator matrix 𝑌, i.e., 𝑦𝑖,𝑘 = 1 if sample 𝑖 has label 𝑘 taken from a set of 𝐾 labels. Let 𝑃 be a matrix of probability estimates, with 𝑝𝑖,𝑘 = Pr(𝑦𝑖,𝑘 = 1). Then the log loss of the whole set is:

𝐿log(𝑌, 𝑃) = −log Pr(𝑌|𝑃) = −(1/𝑁) Σ𝑖 Σ𝑘 𝑦𝑖,𝑘 log 𝑝𝑖,𝑘 (5.4)
where,
• 𝑦𝑖,𝑘 indicates whether sample 𝑖 belongs to class 𝑘 or not
• 𝑝𝑖,𝑘 indicates the probability of sample 𝑖 belonging to class 𝑘

Note: To see how this generalizes the binary log loss given above, note that in the binary case, 𝑝𝑖,0 = 1 − 𝑝𝑖,1 and 𝑦𝑖,0 = 1 − 𝑦𝑖,1, so expanding the inner sum over 𝑦𝑖,𝑘 ∈ {0, 1} gives the binary log loss.

Log Loss has no upper bound and exists on the range [0, ∞). Log Loss nearer to 0 indicates higher accuracy, whereas Log Loss away from 0 indicates lower accuracy. In general, minimizing Log Loss gives greater accuracy for the classifier.
In simpler terms, Log Loss works by penalizing the false classifications. It works well for multi-class classification.
5.4 Confusion Matrix

The confusion matrix is one of the most intuitive and easiest metrics used for finding the correctness and accuracy of a model. It is used for classification problems where the output can be of two or more classes. It is not a performance measure in itself, but almost all performance metrics are based on the confusion matrix.

Let's say we have a classification problem where we are predicting if a person has cancer or not.
• P: a person tests positive for cancer
• N: a person tests negative for cancer

The confusion matrix is a table with two dimensions ("Actual" and "Predicted"), with a set of "classes" in each dimension. Our actual classifications are rows and the predicted ones are columns.
5.4.1 Key Terms
1. True Positives (TP): cases when the actual class of the data point was Positive and the predicted class is also Positive. Ex: a person actually having cancer (Positive) and the model classifying his case as cancer (Positive) comes under True Positives.

2. True Negatives (TN): cases when the actual class of the data point was Negative and the predicted class is also Negative. Ex: a person NOT having cancer and the model classifying his case as Not cancer comes under True Negatives.

3. False Positives (FP): cases when the actual class of the data point was Negative and the predicted class is Positive. "False" because the model has predicted incorrectly, and "Positive" because the class predicted was the positive one. Ex: a person NOT having cancer and the model classifying his case as cancer comes under False Positives.

4. False Negatives (FN): cases when the actual class of the data point was Positive and the predicted class is Negative. "False" because the model has predicted incorrectly, and "Negative" because the class predicted was the negative one. Ex: a person having cancer and the model classifying his case as No-cancer comes under False Negatives.
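Counting the four cells for binary labels takes one pass over the data (the toy labels below are made up):

```python
def confusion_matrix(y_true, y_pred):
    """Counts of (TP, TN, FP, FN) for binary labels, with 1 = positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
```

For these labels the four counts are (2, 2, 1, 1); every metric in the rest of this chapter can be derived from this tuple.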
5.5 Classification Accuracy

Classification accuracy is what we usually mean when we use the term accuracy. If 𝑦ˆ𝑖 is the predicted value of the 𝑖-th sample and 𝑦𝑖 is the corresponding true value, then the fraction of correct predictions over 𝑛 samples is defined as:

accuracy(𝑦, 𝑦ˆ) = (1/𝑛) Σ𝑖 1(𝑦ˆ𝑖 = 𝑦𝑖) (5.5)

In short, it is the ratio of the number of correct predictions to the total number of input samples:

accuracy = Number of Correct Predictions / Total Number of Predictions Made = (TP + TN) / (TP + TN + FP + FN) (5.6)
1. Accuracy is a good measure when the target variable classes in the data are nearly balanced. Ex: 60% of the images in our fruit data are apples and 40% are oranges. A model which correctly predicts whether a new image is an apple or an orange 97% of the time is a very good model in this example.

2. Accuracy should NEVER be used as a measure when the target variable classes in the data are dominated by one class. Ex: in a cancer detection example with 100 people, only 5 people have cancer. Let's say our model is very bad and predicts every case as No Cancer. In doing so, it classifies the 95 non-cancer patients correctly and the 5 cancerous patients as non-cancerous. Even though the model is terrible at predicting cancer, its accuracy is still 95%.
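The cancer example above, computed directly:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# 100 people, 5 with cancer (label 1), and a model that says
# "no cancer" (label 0) for everyone.
y_true = [1] * 5 + [0] * 95
y_pred = [0] * 100
```

Despite never detecting a single cancer case, this model scores accuracy(y_true, y_pred) = 0.95, which is exactly why accuracy alone is misleading on imbalanced classes.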
5.6 Precision

Using our cancer detection example, precision is a measure that tells us what proportion of the patients that we diagnosed as having cancer actually had cancer. The predicted positives (people predicted as cancerous) are TP and FP, and the people actually having cancer among them are TP.

Precision = TP / (TP + FP)

Ex: in our cancer example with 100 people, only 5 people have cancer. Let's say our model is very bad and predicts every case as cancer. Since we are predicting everyone as having cancer, our denominator (true positives and false positives) is 100 and the numerator, people having cancer that the model predicted as cancer, is 5. So in this example, we can say that the precision of such a model is 5%.
5.7 Recall or Sensitivity

Recall is a measure that tells us what proportion of the patients that actually had cancer were diagnosed by the algorithm as having cancer. The actual positives (people having cancer) are TP and FN, and the people diagnosed by the model as having cancer among them are TP. (Note: FN is included because the person actually had cancer even though the model predicted otherwise.)

Recall = TP / (TP + FN)

Ex: in our cancer example with 100 people, 5 people actually have cancer. Let's say the model predicts every case as cancer. So our denominator (true positives and false negatives) is 5 and the numerator, people having cancer that the model predicted as cancer, is also 5 (since we predicted all 5 cancer cases correctly). So in this example, we can say that the recall of such a model is 100%, while its precision (as we saw above) is 5%.
1. Precision is about being precise. So even if we managed to capture only one cancer case, and we captured it correctly, then we are 100% precise.

2. Recall is not so much about capturing cases correctly but more about capturing all cases that are "cancer". So if we simply label every case as "cancer", we have 100% recall.
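Precision and recall from the confusion-matrix counts, using the predict-everything-as-cancer example above (5 true positives, 95 false positives, 0 false negatives):

```python
def precision(tp, fp):
    """Of everything flagged positive, how much really is positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of everything really positive, how much was flagged positive."""
    return tp / (tp + fn)
```

Here precision(5, 95) = 0.05 and recall(5, 0) = 1.0: perfect recall, terrible precision, which is the tension the F1 score below is designed to summarize.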
5.8 F1 Score

F1 Score, also known as the Sørensen–Dice coefficient or Dice similarity coefficient (DSC), is the harmonic mean of precision and recall. The range of the F1 Score is [0, 1]. It tells you how precise your classifier is (how many of the flagged instances are classified correctly), as well as how robust it is (whether it misses a significant number of instances). A model with high precision but low recall is extremely accurate on the cases it flags, but misses a large number of instances that are difficult to classify. The greater the F1 Score, the better the performance of our model.

𝐹1 = 2 / (recall⁻¹ + precision⁻¹) = 2 · (precision · recall) / (precision + recall) (5.9)
Two other commonly used F measures are the 𝐹2 measure, which weighs recall higher than precision (by placing more emphasis on false negatives), and the 𝐹0.5 measure, which weighs recall lower than precision (by attenuating the influence of false negatives).
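The harmonic mean above, as a function of precision and recall:

```python
def f1_score(p, r):
    """Harmonic mean of precision p and recall r."""
    return 2 * p * r / (p + r)
```

For the all-cancer model of the previous sections, f1_score(0.05, 1.0) is about 0.095: the harmonic mean punishes the dismal precision despite the perfect recall, whereas an arithmetic mean would have reported a misleading 0.525.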
5.9 Receiver operating characteristic (ROC)
Receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot which illustrates the performance of a binary classifier system as its discrimination threshold is varied. It is created by plotting the fraction of true positives out of the positives (TPR = true positive rate) vs. the fraction of false positives out of the negatives (FPR = false positive rate), at various threshold settings. TPR is also known as sensitivity, and FPR is one minus the specificity or true negative rate.
5.9.1 True Positive Rate (Sensitivity)
True Positive Rate corresponds to the proportion of positive data points that are correctly considered as positive, with respect to all positive data points.
TPR = True Positives / (True Positives + False Negatives)

5.9.2 False Positive Rate (1 − Specificity)
False Positive Rate corresponds to the proportion of negative data points that are mistakenly considered as positive, with respect to all negative data points.
FPR = False Positives / (False Positives + True Negatives)

Computing an ROC curve requires the true binary labels and the target scores, which can either be probability estimates of the positive class, confidence values, or binary decisions. False Positive Rate and True Positive Rate both take values in the range [0, 1]. FPR and TPR are both computed at threshold values such as (0.00, 0.02, 0.04, . . . , 1.00) and a graph is drawn.
Lowering the classification threshold classifies more items as positive, thus increasing both False Positives and True Positives. To compute the points in an ROC curve, we could evaluate a logistic regression model many times with different classification thresholds.
5.10 AUC: Area Under the ROC Curve

Area Under the ROC Curve is the measure of the entire two-dimensional area underneath the ROC curve (think integral calculus) from (0,0) to (1,1). AUC provides an aggregate measure of performance across all possible classification thresholds. AUC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0.
One way of interpreting AUC is as the probability that the model ranks a random positive example more highly than a random negative example. Ex: given the following examples, which are arranged from left to right in ascending order of logistic regression predictions; AUC represents the probability that a random positive (green) example is positioned to the right of a random negative (red).
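This ranking interpretation gives a simple (if O(P·N)) way to compute AUC directly, counting ties as one half; the labels and scores below are made up:

```python
def auc_from_scores(y_true, scores):
    """AUC via its ranking interpretation: the probability that a random
    positive example scores higher than a random negative one."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
```

Here 3 of the 4 positive-negative pairs are ranked correctly, so the AUC is 0.75; note that only the ordering of the scores matters, which is exactly the scale invariance discussed below.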
5.10.1 Properties

1. AUC is scale-invariant. It measures how well predictions are ranked, rather than their absolute values.

2. AUC is classification-threshold-invariant. It measures the quality of the model's predictions irrespective of what classification threshold is chosen.

3. Scale invariance is not always desirable. For example, sometimes we really do need well-calibrated probability outputs, and AUC won't tell us about that.

4. Classification-threshold invariance is not always desirable. In cases where there are wide disparities in the cost of false negatives vs. false positives, it may be critical to minimize one type of classification error. AUC isn't a useful metric for this type of optimization.
References
1. Metrics Evaluation
2. Metrics Performance
CHAPTER 6
Underfitting and Overfitting

In machine learning we describe the learning of the target function from training data as inductive learning. Induction refers to learning general concepts from specific examples, which is exactly the problem that supervised machine learning aims to solve. This is different from deduction, which works the other way around and seeks to learn specific concepts from general rules.
Generalization refers to how well the concepts learned by a machine learning model apply to specific examples not seen by the model when it was learning. The goal of a good machine learning model is to generalize well from the training data to any data from the problem domain. This allows us to make predictions in the future on data the model has never seen.
There is a terminology used in machine learning when we talk about how well a machine learning model learns and generalizes to new data, namely overfitting and underfitting. Overfitting and underfitting are the two biggest causes for poor performance of machine learning algorithms.
6.1 Overfitting

Overfitting refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data are picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the model's ability to generalize.
Overfitting is more likely with nonparametric and nonlinear models that have more flexibility when learning a target function. As such, many nonparametric machine learning algorithms also include parameters or techniques to limit and constrain how much detail the model learns.
Overfitting occurs when a statistical model or machine learning algorithm captures the noise of the data. Intuitively,
overfitting occurs when the model or the algorithm fits the data too well. Specifically, overfitting occurs if the model or algorithm shows low bias but high variance. Overfitting is often a result of an excessively complicated model,
and it can be prevented by fitting multiple models and using validation or cross-validation to compare their predictive accuracies on test data.
Machine Learning 101 6.2 Underfitting Underfitting refers to a model that can neither model the training data nor generalize to new data. An underfit machine learning model is not a suitable model and will be obvious as it will have poor performance on the training data.
Underfitting is often not discussed as it is easy to detect given a good performance metric. The remedy is to move on and try alternate machine learning algorithms. Nevertheless, it does provide a good contrast to the problem of overfitting.
Underfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data. Intuitively, underfitting occurs when the model or the algorithm does not fit the data well enough. Specifically,
underfitting occurs if the model or algorithm shows low variance but high bias. Underfitting is often a result of an excessively simple model.
6.3 Example

This example demonstrates the problems of underfitting and overfitting and how we can use linear regression with polynomial features to approximate nonlinear functions. The plot shows the function that we want to approximate, which is a part of the cosine function. In addition, the samples from the real function and the approximations of different models are displayed. The models have polynomial features of different degrees. We can see that a linear function (polynomial of degree 1) is not sufficient to fit the training samples. This is called underfitting. A polynomial of degree 4 approximates the true function almost perfectly. However, for higher degrees the model will overfit the training data, i.e. it learns the noise of the training data. We evaluate overfitting/underfitting quantitatively using cross-validation: we calculate the mean squared error (MSE) on the validation set; the higher it is, the less likely the model is to generalize correctly from the training data.
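A sketch of this experiment with NumPy's polynomial fitting; the noisy-cosine data and the degrees mirror the description above, while the sample sizes, noise level, and seed are illustrative choices whose exact error values will vary:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 30))
y = np.cos(1.5 * np.pi * x) + rng.normal(0, 0.1, 30)   # noisy cosine samples

# Noise-free held-out points to measure generalization.
x_val = np.linspace(0.05, 0.95, 50)
y_val = np.cos(1.5 * np.pi * x_val)

def validation_mse(degree):
    coeffs = np.polyfit(x, y, degree)          # least-squares polynomial fit
    pred = np.polyval(coeffs, x_val)
    return float(np.mean((pred - y_val) ** 2))

mse_underfit = validation_mse(1)   # straight line: too simple
mse_good = validation_mse(4)       # the degree the text calls almost perfect
```

The degree-1 model leaves a large validation MSE (a straight line simply cannot follow the cosine), while the degree-4 model reduces it by more than an order of magnitude.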
References

1. Underfitting vs Overfitting
CHAPTER 7
Model Performance

When training ML models, it is important to avoid overfitting the training data. Overfitting occurs when the ML model learns the noise in the training data and thus does not generalize well to data it has not been trained on. One hyperparameter that affects whether the ML model will overfit or not is the number of epochs, or complete passes through the training split. If we use too many epochs, the ML model is likely to overfit. On the other hand, if we use too few epochs, the ML model might not have the chance to learn fully from the training data.
Important terms
• Training data is the data given to the model, from which the model learns.
• Training accuracy tells how well your model has learned to map inputs to outputs.
• Validation data is the data with which the training process is validated.
• Validation accuracy tells about the generalizing power of the model.
• Validation accuracy decreasing means lower generalization beyond the training data. Validation accuracy is evaluated during training.
• Test data assesses the performance of a trained model.
• Test accuracy gives the final generalization power. Test accuracy is evaluated after training.
• If training and validation accuracy are both low, you are probably underfitting, and you can increase the capacity of your model and train more or longer (increase the number of epochs).
7.1 Data Splitting

In practice, detecting that our model is overfitting is difficult. It's not uncommon that our trained model is already in production when we start to realize that something is wrong. In fact, it is only by confronting new data that you can make sure that everything is working properly. However, during training we should try to reproduce the real conditions as much as possible. For this reason, it is good practice to divide our dataset into three parts: a training set, a dev set (also known as cross-validation or hold-out) and a test set. Our model learns by seeing only the first of these parts. The hold-out set is used to track our progress and draw conclusions to optimize the model. Finally, we use the test set at the end of the training process to evaluate the performance of our model.
It is very important to make sure that your cross-validation and test set come from the same distribution as well as that they accurately reflect data that we expect to receive in the future. Only then we can be sure that the decisions we make during the learning process bring us closer to a better solution. Our dev and test sets should be simply large enough to give us high confidence in the performance of our model.
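A common sketch of the three-way split; the 80/10/10 fractions and the seed are illustrative, not prescriptive:

```python
import random

def train_dev_test_split(data, dev_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle once, then carve off dev and test sets from the front."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_dev, n_test = int(n * dev_frac), int(n * test_frac)
    dev = shuffled[:n_dev]
    test = shuffled[n_dev:n_dev + n_test]
    train = shuffled[n_dev + n_test:]
    return train, dev, test

train, dev, test = train_dev_test_split(list(range(100)))
```

Shuffling before splitting is what keeps the three parts drawn from the same distribution, which is exactly the requirement stated above.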
7.2 Validation

We need to create a model with the best settings (the degree), but we don't want to have to keep going through training and testing. There are no consequences in our example from poor test performance, but in a real application where we might be performing a critical task such as diagnosing cancer, there would be serious downsides to deploying a faulty model. We need some sort of pre-test to use for model optimization and evaluation. This pre-test is known as a validation set. A basic approach would be to use a validation set in addition to the training and testing sets. This presents a few problems though: we could just end up overfitting to the validation set, and we would have less training data. A smarter implementation of the validation concept is k-fold cross-validation.
The idea is straightforward: rather than using a separate validation set, we split the training set into a number of subsets, called folds. Let’s use five folds as an example. We perform a series of train and evaluate cycles where each time we train on 4 of the folds and test on the 5th, called the hold-out set. We repeat this cycle 5 times, each time using a different fold for evaluation. At the end, we average the scores for each of the folds to determine the
overall performance of a given model. This allows us to optimize the model before deployment without having to use additional data.
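The five-fold cycle described above can be sketched directly with NumPy; the mean-predictor "model" at the end is just a toy stand-in to show the interface:

```python
import numpy as np

def k_fold_score(X, y, fit, score, k=5):
    """Average the hold-out score over k train/evaluate cycles."""
    folds = np.array_split(np.arange(len(X)), k)
    scores = []
    for i in range(k):
        holdout = folds[i]                                      # the i-th fold is held out
        train = np.hstack([folds[j] for j in range(k) if j != i])  # train on the other k-1 folds
        model = fit(X[train], y[train])
        scores.append(score(model, X[holdout], y[holdout]))
    return float(np.mean(scores))

# Toy check with a "model" that just predicts the training mean.
X = np.zeros((10, 1))
y = np.arange(10.0)
mse = k_fold_score(X, y,
                   fit=lambda X, y: y.mean(),
                   score=lambda m, X, y: float(np.mean((y - m) ** 2)))
```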
7.3 Cost function

Whenever a model is trained on training data and used to predict values on a testing set, there exists a difference between the true and predicted values. The closer the predicted values are to their corresponding real values, the better the model. That is, a cost function is used to measure how close the predicted values are to their corresponding real values. The function can be minimized or maximized, depending on the situation/problem.
For example, in the case of ordinary least squares (OLS), the cost function (to be minimized) would be:
J(𝜃) = ∑_{i=1}^{m} (h_𝜃(x_i) − y_i)²   (7.1)
where,
• 𝐽 denotes the cost function,
• 𝑚 is the number of observations in the dataset,
• ℎ(𝑥) is the predicted value of the response
• 𝑦 is the true value of the response

7.4 High Bias and High Variance

If a model is under-performing (e.g. if the test or training error is too high), there are several ways to improve performance. To find out which of these many techniques is the right one for the situation, the first step is to determine the root of the problem.
The graph above plots the training error and the test error and can be divided into two overarching regimes. In the first regime (on the left side of the graph), training error is below the desired error threshold (denoted by ε), but test error is significantly higher. In the second regime (on the right side of the graph), test error is remarkably close to training error, but both are above the desired tolerance of ε.
7.4.1 Regime 1 (High Variance)
In the first regime, the cause of the poor performance is high variance.
• Symptoms
1. Training error is much lower than test error
2. Training error is lower than ε
3. Test error is above ε
• Remedies
1. Add more training data
2. Reduce model complexity – complex models are prone to high variance
3. Bagging (will be covered later in the course)
7.4.2 Regime 2 (High Bias)
Unlike the first regime, the second regime indicates high bias: the model being used is not robust enough to produce an accurate prediction.
• Symptoms
1. Training error is higher than ε
• Remedies
1. Use more complex model (e.g. kernelize, use non-linear models)
2. Add features
3. Boosting

7.5 Regularizations

One of the first methods we should try when we need to reduce overfitting is regularization. It involves adding an extra element to the loss function, which punishes our model for being too complex or, in simple words, for using excessively large values in the weight matrix. This way we try to limit the model's flexibility, while also encouraging it to build solutions based on multiple features. Two popular versions of this method are:
1. L1 regularization (Lasso Regression): LASSO (Least Absolute Shrinkage and Selection Operator) adds the absolute value of the magnitude of the coefficients as a penalty term to the loss function.

J(𝜃)_{L1} = J(𝜃)_{OLS} + 𝜆 ∑_{j=1}^{n} |𝜃_j|   (7.2)
2. L2 regularization (Ridge Regression): adds the squared magnitude of the coefficients as a penalty term to the loss function.

J(𝜃)_{L2} = J(𝜃)_{OLS} + 𝜆 ∑_{j=1}^{n} 𝜃_j²   (7.3)
In addition to the cost function we had in the case of OLS, there is an extra term: the regularization term, which is 𝜆 (the regularization parameter) multiplied by the norm of the coefficients (|𝜃_j| for L1, 𝜃_j² for L2). This term penalizes big coefficients and shrinks them towards zero, although L2 does not make them exactly zero. In other words, if the 𝜃's take on large values, the optimization function is penalized. We would prefer 𝜃's that are smaller, or close to zero, to keep the penalty term small.
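The shrinking effect of the L2 penalty can be illustrated with the closed-form ridge solution 𝜃 = (XᵀX + 𝜆I)⁻¹Xᵀy; the data below is synthetic and the function name is illustrative:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: theta = (X'X + lam*I)^-1 X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Synthetic data with known coefficients (2, -1, 0.5) plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

# As lambda grows, the penalty pulls the coefficients towards zero.
small = ridge_fit(X, y, lam=0.01)
large = ridge_fit(X, y, lam=100.0)
```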
7.5.1 Key points
1. Lasso shrinks the less important features' coefficients to zero, thus removing some features altogether. So this works well for feature selection when we have a huge number of features.
2. Built-in feature selection is frequently mentioned as a useful property of the L1-norm, which the L2-norm does not have. This is a result of the L1-norm tending to produce sparse coefficients.
3. Computational efficiency: the L1-norm does not have an analytical solution, but the L2-norm does. This allows L2-norm solutions to be calculated efficiently. However, L1-norm solutions have the sparsity property, which allows them to be used with sparse algorithms, making the computation more efficient.
4. When there are many predictors (with some collinearity among them) in the dataset and not all of them have the same predictive power, L2 regression can be used to estimate predictor importance and penalize predictors that are not important. One issue with collinearity is that the variance of the parameter estimates is huge. In cases where the number of features is greater than the number of observations, the matrix used in OLS may not be invertible, but Ridge Regression enables this matrix to be inverted. It seeks to reduce the MSE by adding some bias and, at the same time, reducing the variance. Remember that high variance correlates with an over-fitting model.
5. One thing Ridge cannot be used for is variable selection, since it retains all the predictors. Lasso, on the other hand, overcomes this problem by forcing some of the predictors to zero.
6. As 𝜆 is increased, variance is reduced and bias is added to the model, so getting the right value of lambda is essential. Cross-validation is generally used to estimate the value of lambda.
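A minimal sketch of picking 𝜆 by cross-validation, reusing the closed-form ridge solution on synthetic data; the candidate 𝜆 grid is arbitrary:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: theta = (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(X, y, lam, k=5):
    """Mean held-out squared error for one value of lambda."""
    folds = np.array_split(np.arange(len(X)), k)
    errs = []
    for i in range(k):
        hold = folds[i]
        train = np.hstack([folds[j] for j in range(k) if j != i])
        theta = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[hold] @ theta - y[hold]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))
y = X @ np.array([1.0, 0.5, -0.5, 0.0]) + rng.normal(scale=0.5, size=60)

lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lambdas, key=lambda lam: cv_mse(X, y, lam))   # pick the lambda with lowest CV error
```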
7.6 Early Stopping

When you're training a learning algorithm iteratively, you can measure how well each iteration of the model performs. Up until a certain number of iterations, new iterations improve the model. After that point, however, the model's ability to generalize can weaken as it begins to overfit the training data. Early stopping refers to stopping the training process before the learner passes that point.
Today, this technique is mostly used in deep learning while other techniques (e.g. regularization) are preferred for classical machine learning.
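A minimal sketch of the early-stopping loop, with a `patience` counter deciding how long to wait for the validation loss to improve; the simulated losses are illustrative:

```python
def train_with_early_stopping(step, val_loss, max_iters=1000, patience=5):
    """Stop once the validation loss has not improved for `patience` iterations."""
    best, since_best = float("inf"), 0
    iters = 0
    for _ in range(max_iters):
        step()                 # one training iteration (e.g. one epoch)
        loss = val_loss()      # loss on the dev / validation set
        iters += 1
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                break          # no improvement for `patience` iterations: stop
    return iters, best

# Simulated validation losses: improve for a while, then start overfitting.
losses = iter([5.0, 4.0, 3.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
n_iters, best = train_with_early_stopping(lambda: None, lambda: next(losses), patience=3)
```

In a real training loop one would also keep a snapshot of the model parameters at the best iteration and restore them after stopping.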
7.7 Hyperparameter Optimization

In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are learned.
The same kind of machine learning model can require different constraints, weights or learning rates to generalize different data patterns. These measures are called hyperparameters, and have to be tuned so that the model can optimally solve the machine learning problem. Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given independent data.[1] The objective function takes a tuple of hyperparameters and returns the associated loss. Cross-validation is often used to estimate this generalization performance.
7.7.1 Grid search

The traditional way of performing hyperparameter optimization has been grid search, or a parameter sweep, which is simply an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set or evaluation on a held-out validation set.
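A bare-bones grid search over a manually specified grid can be sketched with `itertools.product`; the objective below is a hypothetical stand-in for a validation loss:

```python
import itertools

def grid_search(param_grid, evaluate):
    """Exhaustively score every combination in the grid and keep the best one."""
    names = list(param_grid)
    best_params, best_score = None, float("inf")
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)            # e.g. cross-validated loss
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical objective: pretend the validation loss is minimised at lr=0.1, depth=3.
loss = lambda p: (p["lr"] - 0.1) ** 2 + (p["depth"] - 3) ** 2
best, score = grid_search({"lr": [0.01, 0.1, 1.0], "depth": [1, 3, 5]}, loss)
```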
7.7.2 Random search

Random search replaces the exhaustive enumeration of all combinations by selecting them randomly. This can be applied to the discrete setting described above, but also generalizes to continuous and mixed spaces. It can outperform grid search, especially when only a small number of hyperparameters affects the final performance of the machine learning algorithm.

7.7.3 Bayesian optimization

Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, it builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model and then updating the model, Bayesian optimization aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum. It tries to balance exploration (hyperparameters for which the outcome is most uncertain) and exploitation (hyperparameters expected to be close to the optimum).
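The random-search alternative to the exhaustive sweep (section 7.7.2) can be sketched in the same style; note that it handles a continuous learning-rate space that a grid can only cover coarsely. The objective is again a hypothetical stand-in for a validation loss:

```python
import random

def random_search(sample_params, evaluate, n_trials=50, seed=0):
    """Evaluate randomly drawn configurations instead of a full sweep."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = sample_params(rng)          # draw one configuration
        score = evaluate(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Continuous space: lr sampled log-uniformly between 1e-4 and 1.
sample = lambda rng: {"lr": 10 ** rng.uniform(-4, 0)}
best, score = random_search(sample, lambda p: (p["lr"] - 0.1) ** 2)
```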
7.7.4 Gradient-based optimization

For specific learning algorithms, it is possible to compute the gradient with respect to the hyperparameters and then optimize them using gradient descent. The first usage of these techniques focused on neural networks. Since then, these methods have been extended to other models such as support vector machines and logistic regression.
References
1. Model Performance
2. Model Evaluation
CHAPTER 8
Gradient descent

Gradient Descent is a method used while training a machine learning model. It is an optimization algorithm, based on a convex function, that tweaks its parameters iteratively to minimize a given function to its local minimum. It is used to find the values of a function's parameters (coefficients) that minimize a cost function as far as possible. You start by defining initial parameter values, and from there Gradient Descent iteratively adjusts those values, using calculus, so that they minimize the given cost function.

8.1 Gradient

A gradient measures how much the output of a function changes if you change the inputs a little bit. It simply measures the change in all weights with regard to the change in error. You can also think of a gradient as the slope of a function.
The higher the gradient, the steeper the slope and the faster a model can learn. But if the slope is zero, the model stops learning. Said more mathematically, a gradient is a partial derivative with respect to its inputs.
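A quick way to see the gradient-as-slope idea is a central-difference estimate of the derivative; for J(w) = w², the slope at w = 3 should be 2w = 6:

```python
def numerical_gradient(f, x, eps=1e-6):
    """Central-difference estimate of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Derivative of w^2 is 2w, so at w = 3 the slope should be about 6.
slope = numerical_gradient(lambda w: w ** 2, 3.0)
```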
8.2 Cost Function

The cost function is a way to determine how well the machine learning model has performed given different values of its parameters. For the linear regression model, the parameters are the two coefficients, 𝛽 and 𝑚. The cost function here is the sum of squared errors. Since the cost function is a function of the parameters 𝛽 and 𝑚, we can plot the cost function against each value of the coefficients (i.e. given the value of each coefficient, we can refer to the cost function to know how well the machine learning model has performed). The cost function looks like:
1. during the training phase, we are focused on selecting the ‘best’ value for the parameters (i.e. the coefficients),
𝑥’s will remain the same throughout the training phase
2. for the case of linear regression, we are finding the value of the coefficients that will reduce the cost to the
minimum a.k.a the lowest point in the mountainous region.
8.3 Method

The cost function will take in a (𝑚, 𝑏) pair and return an error value based on how well the line fits our data. To compute this error for a given line, we iterate through each (𝑥, 𝑦) point in our data set and sum the squared distances between each point's 𝑦 value and the candidate line's 𝑦 value (computed at 𝑚𝑥 + 𝑏). It's conventional to square this distance to ensure that it is positive and to make our error function differentiable.
• Lines that fit our data better (where better is defined by our error function) will result in lower error values. If we
minimize this function, we will get the best line for our data. Since our error function consists of two parameters
(m and b) we can visualize it as a two-dimensional surface.
• Each point in this two-dimensional space represents a line. The height of the function at each point is the error
value for that line. Some lines yield smaller error values than others (i.e., fit our data better). When we run
gradient descent search, we will start from some location on this surface and move downhill to find the line with
the lowest error.
• The horizontal axes represent the parameters (𝑤 and 𝛽) and the cost function 𝐽(𝑤, 𝛽) is represented on the
vertical axes. You can also see in the image that gradient descent is a convex function.
• we want to find the values of 𝑤 and 𝛽 that correspond to the minimum of the cost function (marked with the red
arrow). To start with finding the right values we initialize the values of 𝑤 and 𝛽 with some random numbers and
Gradient Descent then starts at that point.
• Then it takes one step after another in the steepest downside direction till it reaches the point where the cost
function is as small as possible.
8.4 Algorithm

Moving forward, to find the lowest error (deepest valley) in the cost function (with respect to one weight) we need to tweak the parameters of the model. Using calculus, we know that the slope of a function is the derivative of the function with respect to a value. This slope always points to the nearest valley.
Error(m, 𝛽) = ∑_{i=1}^{N} (y_i − (m x_i + 𝛽))²   (8.1)
We can see the graph of the cost function (named Error, with symbol 𝐽) against just one weight. Now if we calculate the slope (call it 𝑑𝐽/𝑑𝑤) of the cost function with respect to this one weight, we get the direction we need to move in, in order to reach the local minimum (the nearest deepest valley).
Note: The gradient (or derivative) tells us the incline or slope of the cost function. Hence, to minimize the cost function, we move in the direction opposite to the gradient.
8.4.1 Steps
1. Initialize the weights 𝑤 randomly.
2. Calculate the gradients 𝐺 of the cost function w.r.t. the parameters. This is done using partial differentiation: 𝐺 = 𝜕𝐽(𝑤)/𝜕𝑤. The value of the gradient 𝐺 depends on the inputs, the current values of the model parameters, and the cost function.
3. Update the weights by an amount proportional to 𝐺, i.e. 𝑤 = 𝑤 − 𝜂𝐺
4. Repeat until the cost 𝐽(𝑤) stops reducing, or some other pre-defined termination criteria is met.
Listing 1: gradient descent / batch gradient descent

    import numpy as np

    def calculate_cost(theta, X, y):
        """
        calculates the cost for the given X and y
        """
        m = len(y)
        predictions = X.dot(theta)
        cost = np.sum(np.square(predictions - y)) / (2 * m)
        return cost

    def gradient_descent(X, y, theta, learning_rate=0.01, iterations=1000):
        """
        returns the final theta vector and the arrays of the cost and theta history
        """
        m = len(y)
        cost_history = np.zeros(iterations)
        theta_history = np.zeros((iterations, 2))
        for it in range(iterations):
            prediction = np.dot(X, theta)
            theta -= (1 / m) * learning_rate * (X.T.dot(prediction - y))
            theta_history[it, :] = theta.T
            cost_history[it] = calculate_cost(theta, X, y)
        return theta, cost_history, theta_history
Important: In step 3, 𝜂 is the learning rate which determines the size of the steps we take to reach a minimum. We
need to be very careful about this parameter. High values of 𝜂 may overshoot the minimum, and very low values will
reach the minimum very slowly.
8.5 Learning Rate

How big the steps are that Gradient Descent takes in the direction of the local minimum is determined by the so-called learning rate. It determines how fast or slow we move towards the optimal weights. In order for Gradient Descent to reach the local minimum, we have to set the learning rate to an appropriate value, which is neither too low nor too high. If the steps it takes are too big, it may never reach the local minimum because it just bounces back and forth between the sides of the convex function, as you can see on the left side of the image below. If you set the learning rate to a very small value, gradient descent will eventually reach the local minimum, but it may take too much time, as you can see on the right side of the image.

Note: When you're starting out with gradient descent on a given problem, simply try 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, etc. as learning rates and see which one performs best.
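The effect of such a sweep can be seen on the toy cost J(w) = w², whose gradient is 2w; the rates and step count below are illustrative:

```python
def gd_final_cost(lr, steps=50):
    """Run gradient descent on J(w) = w^2 from w = 5 and report the final cost."""
    w = 5.0
    for _ in range(steps):
        w -= lr * 2 * w          # gradient of w^2 is 2w
    return w ** 2

# Too small a rate barely moves; a moderate rate converges; too large a rate diverges.
results = {lr: gd_final_cost(lr) for lr in [0.001, 0.01, 0.1, 0.3, 1.1]}
```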
8.6 Convergence

Once the algorithm, after many steps, realizes that the cost is no longer improving by much and it is stuck very near a particular point (a minimum), this is technically known as convergence. The values of the parameters at that very last step are known as the 'best' set of parameters, and we have a trained model.
8.7 Types of Gradient Descent

There are three popular types of Gradient Descent, which mainly differ in the amount of data they use.

8.7.1 Batch Gradient Descent

Also called vanilla gradient descent, it calculates the error for each example within the training dataset, but the model is updated only after all training examples have been evaluated. This whole process is like a cycle and is called a training epoch.
1. Advantages of it are that it is computationally efficient, and that it produces a stable error gradient and a stable convergence.
2. Batch Gradient Descent has the disadvantage that the stable error gradient can sometimes result in a state of
convergence that isn’t the best the model can achieve. It also requires that the entire training dataset is in
memory and available to the algorithm.
8.7.2 Stochastic gradient descent (SGD)

In vanilla (batch) gradient descent, we calculate the gradient using the entire training set; in stochastic gradient descent, we calculate it on one randomly chosen observation at a time. It is called stochastic because samples are selected randomly (or shuffled) instead of as a single group (as in standard gradient descent) or in the order they appear in the training set. This means that it updates the parameters for each training example, one by one. This can make SGD faster than Batch Gradient Descent, depending on the problem.
1. One advantage is that the frequent updates allow us to have a pretty detailed rate of improvement. However, the frequent updates are more computationally expensive than the approach of Batch Gradient Descent.
2. The frequency of those updates can also result in noisy gradients, which may cause the error rate to jump around,
instead of slowly decreasing.
Listing 2: Stochastic gradient descent

    import numpy as np

    def stochastic_gradient_descent(X, y, theta, learning_rate=0.01, iterations=100):
        """
        returns the final theta vector and the array of the cost history
        (uses calculate_cost from Listing 1)
        """
        m = len(y)
        cost_history = np.zeros(iterations)
        for it in range(iterations):
            cost = 0.0
            for i in range(m):
                rand_ind = np.random.randint(0, m)
                X_i = X[rand_ind, :].reshape(1, X.shape[1])
                y_i = y[rand_ind].reshape(1, 1)
                prediction = np.dot(X_i, theta)
                theta -= (1 / m) * learning_rate * (X_i.T.dot(prediction - y_i))
                cost += calculate_cost(theta, X_i, y_i)
            cost_history[it] = cost
        return theta, cost_history
8.7.3 Mini-batch Gradient Descent

Mini-batch gradient descent is a combination of the concepts of SGD and Batch Gradient Descent. It simply splits the training dataset into small batches and performs an update for each of these batches. It therefore creates a balance between the robustness of stochastic gradient descent and the efficiency of batch gradient descent.
1. Common mini-batch sizes range between 50 and 256, but like for any other machine learning techniques, there
is no clear rule, because they can vary for different applications. It is the most common type of gradient descent
within deep learning.
Listing 3: mini-batch gradient descent

    import numpy as np

    def minibatch_gradient_descent(X, y, theta, learning_rate=0.01, iterations=100,
                                   batch_size=20):
        """
        returns the final theta vector and the array of the cost history
        (assumes X already contains a column of ones for the intercept;
        uses calculate_cost from Listing 1)
        """
        m = len(y)
        cost_history = np.zeros(iterations)
        for it in range(iterations):
            cost = 0.0
            indices = np.random.permutation(m)
            X = X[indices]
            y = y[indices]
            for i in range(0, m, batch_size):
                X_i = X[i:i + batch_size]
                y_i = y[i:i + batch_size]
                prediction = np.dot(X_i, theta)
                theta -= (1 / len(y_i)) * learning_rate * (X_i.T.dot(prediction - y_i))
                cost += calculate_cost(theta, X_i, y_i)
            cost_history[it] = cost
        return theta, cost_history
References
1. Gradient Descent
2. Gradient Descent Linear Regression
CHAPTER 9
Regression

Regression analysis is a form of predictive modelling technique which investigates the relationship between a dependent variable (target) and independent variable(s) (predictors). This technique is used for forecasting, time series modelling and finding the causal effect relationship between variables. Regression analysis is an important tool for modelling and analyzing data. Here, we fit a curve / line to the data points in such a manner that the distances of the data points from the curve or line are minimized.
Important: One important fact to note, and one that is often brushed over or forgotten, is that statistical analysis, regression analysis included, can only ever indicate correlations between factors, not causal relationships. Regression analysis is a great technique for making predictions and understanding the influences of variables on one another, but it is sometimes misused or misunderstood and taken to be reliable proof of causality; this is misleading. While some of the variables included in a regression model may very well be causally related to one another, they also might not be; without empirical testing, these relationships cannot be taken as absolute.
9.1 Basic Models

There are various kinds of regression techniques available to make predictions. These techniques are mostly driven by three metrics: the number of independent variables, the type of dependent variable, and the shape of the regression line.
9.1.1 Continuous variables

Continuous variables are measurements on a continuous scale, such as weight, time, and length.
Linear regression

Linear regression, also known as ordinary least squares (OLS) and linear least squares, is the real workhorse of the regression world. Use linear regression to understand the mean change in a dependent variable given a one-unit change in each independent variable.
1. There must be linear relationship between independent and dependent variables
2. Multiple regression can suffer from multicollinearity, autocorrelation and heteroskedasticity.
3. Linear Regression is very sensitive to Outliers. It can terribly affect the regression line and eventually the
forecasted values.
4. Multicollinearity can increase the variance of the coefficient estimates and make the estimates very sensitive to
minor changes in the model. The result is that the coefficient estimates are unstable
5. In case of multiple independent variables, we can go with forward selection, backward elimination and step wise
approach for selection of most significant independent variables.
Polynomial Regression

When we want to create a model that is suitable for handling non-linearly separable data, we need to use polynomial regression. In this regression technique, the best fit line is not a straight line; it is a curve that fits the data points.
1. Able to model non-linearly separable data; linear regression can’t do this. It is much more flexible in general
and can model some fairly complex relationships.
2. Full control over the modelling of feature variables (which exponent to set).
3. Requires careful design. Need some knowledge of the data in order to select the best exponents.
4. Prone to overfitting if exponents are poorly selected.
Ridge regression

Ridge Regression is a technique used when the data suffers from multicollinearity (independent variables are highly correlated). In multicollinearity, even though the least squares estimates (OLS) are unbiased, their variances are large, which deviates the observed value far from the true value. By adding a degree of bias to the regression estimates, ridge regression reduces the standard errors.
1. It allows you to analyze data even when severe multicollinearity is present and helps prevent overfitting. This
type of model reduces the large, problematic variance that multicollinearity causes by introducing a slight bias
in the estimates.
2. The assumptions of this regression are the same as least squares regression, except that normality is not assumed
3. It shrinks the value of the coefficients but does not reach zero, which means it performs no feature selection
4. This is a regularization method and uses l2 regularization.
5. The procedure trades away much of the variance in exchange for a little bias, which produces more useful
coefficient estimates when multicollinearity is present.
Lasso regression

Lasso regression (least absolute shrinkage and selection operator) performs variable selection, aiming to increase prediction accuracy by identifying a simpler model. Similar to Ridge Regression, Lasso also penalizes the absolute size of the regression coefficients. In addition, it is capable of reducing the variability and improving the accuracy of linear regression models. Lasso regression differs from ridge regression in that it uses absolute values in the penalty function instead of squares. This leads to penalizing (or, equivalently, constraining the sum of the absolute values of) the estimates, which causes some of the parameter estimates to turn out exactly zero. The larger the penalty applied, the further the estimates are shrunk towards zero. This results in variable selection out of the given n variables.
1. The assumptions of this regression are the same as least squares regression, except that normality is not assumed
2. It shrinks coefficients to zero (exactly zero), which certainly helps with feature selection
3. This is a regularization method and uses L1 regularization
4. If a group of predictors is highly correlated, lasso picks only one of them and shrinks the others to zero

ElasticNet Regression

ElasticNet is a hybrid of the Lasso and Ridge Regression techniques. It is trained with both L1 and L2 priors as regularizers. Elastic-net is useful when there are multiple features which are correlated: Lasso is likely to pick one of these at random, while elastic-net is likely to pick both. A practical advantage of trading off between Lasso and Ridge is that it allows Elastic-Net to inherit some of Ridge's stability under rotation.
1. It encourages group effect in case of highly correlated variables
2. There are no limitations on the number of selected variables
3. It can suffer from double shrinkage

9.1.2 Categorical variables

A categorical variable has values that you can put into a countable number of distinct groups based on a characteristic.
Binary Logistic Regression

Use binary logistic regression to understand how changes in the independent variables are associated with changes in the probability of an event occurring. This type of model requires a binary dependent variable. A binary variable has only two possible values, such as pass and fail.
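A from-scratch sketch of binary logistic regression fitted by gradient descent, in the same NumPy style as the earlier listings; the pass/fail data is invented for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Fit P(y=1|x) = sigmoid(X @ theta) by gradient descent on the log-loss."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ theta) - y) / len(y)   # gradient of the log-loss
        theta -= lr * grad
    return theta

# Toy data: pass/fail depends on hours studied (intercept column + one feature).
hours = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 7.0, 8.0, 9.0])
X = np.c_[np.ones_like(hours), hours]
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

theta = fit_logistic(X, y)
prob_pass = sigmoid(np.array([1.0, 8.0]) @ theta)   # probability of passing after 8 hours
prob_fail = sigmoid(np.array([1.0, 1.0]) @ theta)   # probability of passing after 1 hour
```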
Ordinal Logistic Regression

Ordinal logistic regression models the relationship between a set of predictors and an ordinal response variable. An ordinal response has at least three groups which have a natural order, such as hot, medium, and cold.
Nominal Logistic Regression

Nominal logistic regression models the relationship between a set of independent variables and a nominal dependent variable. A nominal variable has at least three groups which do not have a natural order, such as scratch, dent, and tear.
Poisson regression

Use Poisson regression to model how changes in the independent variables are associated with changes in the counts. Poisson models are similar to logistic models because they use Maximum Likelihood Estimation and transform the dependent variable using the natural log. Poisson models can be suitable for rate data, where the rate is a count of events divided by a measure of that unit's exposure (a consistent unit of observation).
9.2 Selecting Model

Within the multiple types of regression models, it is important to choose the technique best suited to the type of independent and dependent variables, the dimensionality of the data and other essential characteristics of the data.
1. Data exploration is an inevitable part of building a predictive model. It should be your first step before selecting the right model, e.g. identifying the relationships and impact of the variables.
2. To compare the goodness of fit of different models, we can analyse different metrics like the statistical significance of the parameters, R-square, adjusted R-square, AIC, BIC and the error term. Another one is Mallow's Cp criterion, which essentially checks for possible bias in your model by comparing the model with all possible submodels (or a careful selection of them).
3. Cross-validation is the best way to evaluate models used for prediction. Here you divide your data set into two groups (train and validate). A simple mean squared difference between the observed and predicted values gives you a measure of the prediction accuracy.
4. If your data set has multiple confounding variables, you should not choose automatic model selection method
because you do not want to put these in a model at the same time.
5. It’ll also depend on your objective. It can occur that a less powerful model is easy to implement as compared to
a highly statistically significant model.
6. Regression regularization methods (Lasso, Ridge and ElasticNet) work well in the case of high dimensionality and multicollinearity among the variables in the data set.
References
1. Regression Guide
2. Regression Analysis
3. Regression Types
CHAPTER 10
Simple Linear Regression

Linear regression is an approach for predicting a quantitative response using a feature (predictor or input variable). It is a basic predictive analytics technique that uses historical data to predict an output variable. It takes the following form:

y = 𝛽0 + 𝛽1 x

where,

• 𝑦 is the response, a function of 𝑥
• 𝑥 is the feature
• 𝛽0 is the intercept, also called the bias coefficient
• 𝛽1 is the slope, also called the scale factor
• 𝛽0 and 𝛽1 are called the model coefficients

Our goal is to find statistically significant values of the parameters 𝛽0 and 𝛽1 that minimize the difference between 𝑦 (output) and 𝑦ₑ (estimated / predicted output).

Note: The predicted values here are continuous in nature. So, your ultimate goal is, given a training set, to learn a function 𝑓 : 𝑋 → 𝑌 so that 𝑓(𝑥) is a good predictor for the corresponding value of 𝑦. Also, keep in mind that both 𝑥 and 𝑦 take real-number values. If we are able to determine the optimum values of these two parameters, then we will have the Line of Best Fit that we can use to predict the values of 𝑦, given the value of 𝑥.
It is important to note that linear regression can often be divided into two basic forms:

1. Simple Linear Regression (SLR), which deals with just two variables (the one you saw at first)
2. Multi-linear Regression (MLR), which deals with more than two variables

So, how do we estimate 𝛽0 and 𝛽1? We can use a method called ordinary least squares.
10.1 Ordinary Least Square

10.1.1 Method

Ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable (the values of the variable being predicted) in the given dataset and those predicted by the linear function.
This is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface – the smaller the differences, the better the model fits the data.
The least squares estimates in this case are given by simple formulas:

$$\beta_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2} = \frac{\mathrm{Cov}[x, y]}{\mathrm{Var}[x]}$$

$$\beta_0 = \bar{y} - \beta_1 \bar{x}$$
where,
1. Var(.) and Cov(.) are called sample parameters.
2. 𝛽1 is the slope
3. β0 is the intercept

10.1.2 Evaluation

There are many methods to evaluate models. We will use the Root Mean Squared Error and the Coefficient of Determination.
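The closed-form estimates above translate directly into NumPy; a minimal sketch on synthetic data (the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = 2.5 * rng.standard_normal(100) + 1.5
y = 2 + 0.3 * x + 0.5 * rng.standard_normal(100)

# beta1 = Cov[x, y] / Var[x], written out as the ratio of sums
beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
# beta0 = y_bar - beta1 * x_bar
beta0 = y.mean() - beta1 * x.mean()

# the same slope via the sample covariance matrix
beta1_cov = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

print(beta0, beta1)  # should land near the true intercept 2 and slope 0.3
```

Both routes to β1 agree exactly, since dividing numerator and denominator by n − 1 turns the sums into the sample covariance and variance.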
1. The Root Mean Squared Error is the square root of the sum of all squared errors divided by the number of values:

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2} \tag{10.2}$$
2. The R² score usually ranges from 0 to 1. It can also become negative if the model is completely wrong:

$$R^2 = 1 - \frac{\text{Sum of Squares of Residuals}}{\text{Total Sum of Squares}} = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$$

Note: We want to minimize the error of our model. A good model will always have the least error. We can find this line by reducing the error. The error of each point is the distance between the line and that point.

Citations
References
1. Linear Regression
2. Ordinary Least Square
CHAPTER 11
Example Simple Linear Regression
Different methods used to demonstrate Simple Linear Regression
• Ordinary Least Square
– Python from scratch
– Scikit
• Gradient Descent
– Python from scratch
– Scikit
11.1 Ordinary Least Square
Creating sample data:

    np.random.seed(0)
    X = 2.5 * np.random.randn(100) + 1.5
    res = 0.5 * np.random.randn(100)
    y = 2 + 0.3 * X + res
11.1.1 Python

Calculating the model coefficients β0 and β1:

    mean_x = np.mean(X)   # sample means, computed earlier in the full script
    mean_y = np.mean(y)

    numer = 0
    denom = 0
    for i in range(100):
        numer += (X[i] - mean_x) * (y[i] - mean_y)
        denom += (X[i] - mean_x) ** 2

    b1 = numer / denom
    b0 = mean_y - (b1 * mean_x)
intercept: 2.0031670124623426
slope: 0.32293968670927636
Making predictions:

    ypred = b0 + b1 * X
Plotting the regression line:

    plt.figure(figsize=(12, 6))
    plt.plot(X, ypred, color='red', label='regression line')  # regression line
    plt.scatter(X, y, c='green', label='actual values')       # scatter plot showing actual data
Calculate the root mean squared error and R² error:

    plt.xlabel('X')
    plt.ylabel('y')
    plt.legend()
    plt.savefig("figure_1.png")

    # calculate the error
    # root mean squared
    rmse = 0
    for i in range(100):
        y_pred = b0 + b1 * X[i]
        rmse += (y[i] - y_pred) ** 2
    rmse = np.sqrt(rmse / 100)
    print("rmse: ", rmse)

    # r-squared error
    ss_t = 0
    ss_r = 0
rmse: 0.5140943138643506
r2: 0.7147163547202338
11.1.2 Scikit
Reshape the array, since scikit-learn cannot use an array of rank 1:

    X = X.reshape((100, 1))
Initialize the model and fit the data:

    reg = LinearRegression()

    # Fitting training data
    reg = reg.fit(X, y)
intercept 2.003167012462343
slope: [0.32293969]
Making predictions:

    ypred = reg.predict(X)
Plotting the regression line:

    plt.clf()
    plt.plot(X, ypred, color='blue', label='scikit model')
    plt.scatter(X, y, c='green', label='actual values')
Calculate the root mean squared error and R² error:

    # Calculating RMSE and R2 Score
    mse = mean_squared_error(y, ypred)
    rmse = np.sqrt(mse)
    r2_score = reg.score(X, y)
    print("rmse : ", rmse)
rmse : 0.5140943138643504
r2 : 0.7147163547202338
11.2 Gradient Descent
Creating sample data:

    np.random.seed(33)
    X = 2.5 * np.random.randn(100) + 1.5
    res = 0.5 * np.random.randn(100)
    Y = 2 + 0.3 * X + res
11.2.1 Python
Perform batch gradient descent with iteration=5000 and learning_rate=0.001:
    for i in range(epochs):
        # The current predicted value of Y
        Y_pred = slope * X + intercpt

        # derivative wrt slope
        D_m = (-2/n) * sum(X * (Y - Y_pred))

        # derivative wrt intercept
        D_c = (-2/n) * sum(Y - Y_pred)

        slope = slope - alpha * D_m        # Update m
        intercpt = intercpt - alpha * D_c  # Update c

        # reset for next run
        D_m = 0
        D_c = 0

        predictions.append(Y_pred)
        intercpts.append(intercpt)
        slopes.append(slope)
intercept 1.8766343225257833
slope 0.33613290217142805
Calculating the errors:

    # root mean squared
    rmse = 0
    for i in range(100):
        y_pred = intercpt + slope * X[i]
        rmse += (Y[i] - y_pred) ** 2
    rmse = np.sqrt(rmse / 100)
    # rmse = np.sqrt(sum(np.square(Y - predictions[-1]))) / 100
    print("rmse: ", rmse)

    # r-squared error
    ss_t = 0
    ss_r = 0
    for i in range(100):
        y_pred = intercpt + slope * X[i]
        ss_t += (Y[i] - np.mean(Y)) ** 2
        ss_r += (Y[i] - y_pred) ** 2
    r2 = 1 - (ss_r / ss_t)
    # r2 = 1 - (sum(np.square(Y - np.mean(Y))) / sum(np.square(Y - predictions[-1])))
    print("r2 : ", r2)
rmse: 0.5135508150299471
r2 : 0.7398913279943503
Plot using matplotlib.animation:

    ax.scatter(X, Y, label='data points')
    line, = ax.plot([], [], 'r-', label='regression line')
    plt.xlabel('X')
    plt.ylabel('y')
    plt.title('python gradient descent')
    plt.legend()
    anim = FuncAnimation(fig, update, frames=np.arange(0, 5000, 50), interval=120, repeat=False)
    anim.save('figure_03.gif', fps=60, writer='imagemagick')
11.2.2 Scikit
Initialize the model; the default is max_iter=1000:

    clf = SGDRegressor(alpha=alpha)
Start the training loop. SGDRegressor.partial_fit is used because it internally sets max_iter=1, and we are already executing it in a loop. At the moment there is no callback method implemented in scikit-learn to retrieve the parameters of a training instance, therefore calling the model with partial_fit in a for-loop is used:

    for counter in range(0, epochs):
        # partial_fit sets max_iter=1 internally
        clf.partial_fit(X, Y)
        predictions.append(clf.predict(X))
        intercpts.append(clf.intercept_)
        slopes.append(clf.coef_)
intercept 1.877731096750561
slope 0.3349640669632063
Note: Alternatively, you could use clf = SGDRegressor(max_iter=epochs, alpha=0.001, verbose=1) if you don't want to get the model parameters during the training process. verbose=1 enables stdout logging of the training process.
Calculate the errors:

    # Calculating RMSE and R2 Score
    mse = mean_squared_error(Y, clf.predict(X))
    rmse = np.sqrt(mse)
    r2_score = clf.score(X, Y)
    print("rmse : ", rmse)
    print("r2 : ", r2_score)
rmse : 0.5135580867991415
r2 : 0.7398839617763866
Plot the training loop using matplotlib.animation:

    ax.scatter(X, Y, label='data points')
    line, = ax.plot([], [], 'r-', label='regression line')
    plt.xlabel('X')
    plt.ylabel('y')
    plt.title('scikit gradient descent')
    plt.legend()
    anim = FuncAnimation(fig, update, frames=np.arange(0, epochs, 50), interval=120, repeat=False)
    anim.save('figure_04.gif', fps=60, writer='imagemagick')
Important: scikit-learn's SGDRegressor converges faster than our implementation.
Citations
References
CHAPTER 12
Multiple Linear Regression

Multiple linear regression (MLR), also known simply as multiple regression, is a statistical technique that uses several explanatory variables to predict the outcome of a response variable. The goal of MLR is to model the linear relationship between the explanatory (independent) variables and the response (dependent) variable.
Multiple regression is the extension of ordinary least-squares (OLS) regression that involves more than one explanatory variable.
$$y_i = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n + \epsilon \tag{12.1}$$
where
• 𝑦𝑖 = dependent variable
• 𝑥𝑖 = explanatory variable
• 𝛽0 = intercept
• 𝛽1 = slope
• ε = the model's error term, also called the residual

A simple linear regression is a function that allows one to make predictions about one variable based on the information that is known about another variable. Linear regression can only be used when one has two continuous variables: an independent variable and a dependent variable. The independent variable is the parameter that is used to calculate the dependent variable or outcome. A multiple regression model extends this to several explanatory variables.
The multiple regression model is based on the following assumptions:
• There is a linear relationship between the dependent variables and the independent variables.
• The independent variables are not too highly correlated with each other.
• 𝑦𝑖 observations are selected independently and randomly from the population.
• Residuals should be normally distributed with a mean of 0 and variance σ²
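The "not too highly correlated" assumption can be checked numerically. A common diagnostic is the variance inflation factor (VIF), sketched here in plain NumPy on synthetic data (the helper name vif is ours, not from the text):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (columns assumed centered)."""
    # VIF_j = 1 / (1 - R^2_j), where R^2_j regresses column j on the remaining columns
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1 - resid.var() / X[:, j].var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
a = rng.normal(size=500)
b = rng.normal(size=500)            # independent of a -> low VIF
c = a + 0.1 * rng.normal(size=500)  # nearly a duplicate of a -> high VIF
X = np.column_stack([a, b, c])
X = X - X.mean(axis=0)
print(np.round(vif(X), 1))
```

A common rule of thumb is that a VIF above roughly 5 to 10 signals problematic multicollinearity; here the near-duplicate columns a and c blow up while b stays near 1.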
12.1 Least Squared Residual

12.1.1 Method

A general multiple-regression model can be written as
$$y_i = \beta_0 \cdot 1 + \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_k x_{ik} + \epsilon_i \tag{12.2}$$
In matrix form, we can rewrite this model as:

$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} =
\begin{bmatrix}
1 & x_{11} & x_{12} & \cdots & x_{1k} \\
1 & x_{21} & x_{22} & \cdots & x_{2k} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{n1} & x_{n2} & \cdots & x_{nk}
\end{bmatrix}
\begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_k \end{bmatrix}
+
\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}$$

The strategy in the least squared residual approach (ordinary least squares) is the same as in the bivariate linear regression model. The idea of the ordinary least squares estimator (OLS) consists in choosing the β in such a way that the sum of squared residuals $\sum_{i=1}^{N} \epsilon_i^2$ in the sample is as small as possible. Mathematically this means that in order to estimate β we have to minimize $\sum_{i=1}^{N} \epsilon_i^2$, which in matrix notation is nothing else than e′e.
$$e'e = \begin{bmatrix} e_1 & e_2 & \cdots & e_N \end{bmatrix}
\begin{bmatrix} e_1 \\ \vdots \\ e_N \end{bmatrix} = \sum_{i=1}^{N} e_i^2$$

Consequently, we can write e′e as (Y − Xβ)′(Y − Xβ) by simply plugging the expression e = Y − Xβ into e′e.
This leaves us with the following minimization problem:

$$\min_{\beta} \; e'e = (Y - X\beta)'(Y - X\beta) = (Y' - \beta'X')(Y - X\beta) = Y'Y - \beta'X'Y - Y'X\beta + \beta'X'X\beta$$

Note: It is important to understand that β′X′Y = (β′X′Y)′ = Y′Xβ. As both terms are scalars, meaning of dimension 1×1, the transpose of the term is the same term.
In order to minimize the expression above, we have to differentiate it with respect to β and set the derivative equal to zero:

$$\frac{\partial(e'e)}{\partial \beta} = -2X'Y + 2X'X\beta \overset{!}{=} 0
\quad \Rightarrow \quad X'X\beta = X'Y$$

Note: the second-order condition for a minimum requires that the matrix X′X is positive definite. This requirement is fulfilled in case X has full rank.
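These normal equations solve in one line of NumPy; a sketch on synthetic data (in practice np.linalg.lstsq, which works on X directly, is numerically safer than forming X′X explicitly):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
# design matrix with a leading column of ones for the intercept
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
true_beta = np.array([1.0, 2.0, -3.0, 0.5])
y = X @ true_beta + 0.1 * rng.normal(size=n)

# solve X'X beta = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
# same estimate via least squares on X directly
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(beta_hat, 2))
```

The example chapter below applies exactly this solve(X.T @ X, X.T @ y) pattern to real data.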
Intercept
You can obtain the solution for the intercept by setting the partial derivative of the squared loss with respect to the intercept β0 to zero. Let β0 ∈ ℝ denote the intercept, β ∈ ℝᵈ the coefficients of the features, and xᵢ ∈ ℝᵈ the feature vector of the i-th sample. All we do is solve for β0:
$$\sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{d} x_{ij}\beta_j\Big) = 0
\;\Rightarrow\;
\beta_0 = \frac{1}{n}\sum_{i=1}^{n} y_i - \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{d} x_{ij}\beta_j$$

Usually, we assume that all features are centered, i.e.,

$$\frac{1}{n}\sum_{i=1}^{n} x_{ij} = 0 \quad \text{for all } j,$$

which simplifies the solution for β0 to be the average response:

$$\beta_0 = \frac{1}{n}\sum_{i=1}^{n} y_i - \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{d} x_{ij}\beta_j
= \frac{1}{n}\sum_{i=1}^{n} y_i - \sum_{j=1}^{d}\beta_j \frac{1}{n}\sum_{i=1}^{n} x_{ij}
= \frac{1}{n}\sum_{i=1}^{n} y_i \tag{12.6}$$

If, in addition, we also assume that the response y is centered, i.e. $\frac{1}{n}\sum_{i=1}^{n} y_i = 0$, the intercept is zero and thus eliminated.
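The centering argument can be verified numerically: after centering both X and y, the fitted intercept is zero up to floating point. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 1.5 + X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=100)

# center features and response
Xc = X - X.mean(axis=0)
yc = y - y.mean()

# OLS on centered data: normal equations without a column of ones
beta = np.linalg.solve(Xc.T @ Xc, Xc.T @ yc)

# intercept on centered data is zero by construction ...
intercept_centered = yc.mean() - Xc.mean(axis=0) @ beta
# ... while on the original data it is the mean response minus the slope terms
intercept_orig = y.mean() - X.mean(axis=0) @ beta

print(intercept_centered, intercept_orig)
```

The first value collapses to (numerical) zero, while the second recovers the true intercept of 1.5 used to generate the data.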
12.1.2 Evaluation

The coefficient of determination (R-squared) is a statistical metric that is used to measure how much of the variation in the outcome can be explained by the variation in the independent variables. R² always increases as more predictors are added to the MLR model, even though the predictors may not be related to the outcome variable. R² by itself therefore can't be used to identify which predictors should be included in a model and which should be excluded. R² can only be between 0 and 1, where 0 indicates that the outcome cannot be predicted by any of the independent variables and 1 indicates that the outcome can be predicted without error from the independent variables.
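The fact that in-sample R² never decreases as predictors are added can be demonstrated by appending columns of pure noise to a toy regression (a sketch; the usual correction for this effect, adjusted R², is not covered above):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=(n, 1))
noise = rng.normal(size=(n, 3))        # predictors unrelated to y
y = 2 * x[:, 0] + rng.normal(size=n)

r2_scores = []
for extra in range(4):
    # nested models: the real predictor plus 0..3 junk columns
    X_aug = np.column_stack([x, noise[:, :extra]])
    model = LinearRegression().fit(X_aug, y)
    r2_scores.append(model.score(X_aug, y))

print(np.round(r2_scores, 4))  # non-decreasing as junk predictors are added
```

Because each model nests the previous one, the least-squares fit can only improve (or stay equal) in-sample, even though the extra columns carry no information about y.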
When interpreting the results of a multiple regression, beta coefficients are valid while holding all other variables constant (all else equal). The output from a multiple regression can be displayed horizontally as an equation, or vertically in table form.
Citations

References
1. OLS Estimator
2. Yamano Lecture Note
CHAPTER 13
Example Multiple Linear Regression
Different methods used to demonstrate Multiple Linear Regression
• Ordinary Least Square
– Python from scratch
– Scikit
• Gradient Descent
– Python from scratch
– Scikit
13.1 Ordinary Least Square
Loading the Boston house-price dataset from sklearn.datasets:

    data = datasets.load_boston()
Define the data predictors and the target data:

    # define the data/predictors as the pre-set feature names
    df = pd.DataFrame(data.data, columns=data.feature_names)

    # Put the target (housing value -- MEDV) in another DataFrame
    target = pd.DataFrame(data.target, columns=["MEDV"])
13.1.1 Python
Using the Ordinary Least Square method derived in the previous section:

    Xt = np.transpose(X)
    XtX = np.dot(Xt, X)
    Xty = np.dot(Xt, y)
    coef_ = np.linalg.solve(XtX, Xty)
The set of coefficients as calculated:
[ -9.28965170e-02 4.87149552e-02 -4.05997958e-03 2.85399882e+00
-2.86843637e+00 5.92814778e+00 -7.26933458e-03 -9.68514157e-01
1.71151128e-01 -9.39621540e-03 -3.92190926e-01 1.49056102e-02
-4.16304471e-01]
13.1.2 Scikit
Let's define our regression model:

    model = linear_model.LinearRegression(fit_intercept=False)
Note: we are using the same linear_model as in our simple linear regression method. Also, fit_intercept has been set to False. This is just to validate our derivation in the previous section. fit_intercept is set to True by default; in that case the model will not assume that our response y is centered and will give a model.intercept_ value.
Fitting our model:

    model = model.fit(X, y)
[ -9.28965170e-02 4.87149552e-02 -4.05997958e-03 2.85399882e+00
-2.86843637e+00 5.92814778e+00 -7.26933458e-03 -9.68514157e-01
1.71151128e-01 -9.39621540e-03 -3.92190926e-01 1.49056102e-02
-4.16304471e-01]
Evaluating our model:

    y_predicted = model.predict(X)
    print("Mean squared error: %.2f" % mean_squared_error(y, y_predicted))
    print('R2 : %.2f' % r2_score(y, y_predicted))
Mean squared error: 24.17
R2 : 0.71
CHAPTER 14
Indices and tables
• genindex
• modindex
• search
# fullpage.js
Date: 2015-04-09
19,385 stars on GitHub
I'm a web developer who aims to combine the beauty of design with the logical perfection of coding. Training myself every day and pushing my own limits to discover new ways of creating a great experience for the users.
I'm 36 years old, from Spain. I'm a computer scientist and hold an MSc in Computer Engineering and a BSc in Technical Engineering In Computer Science For Management, both from Burgos University. I also enjoyed a scholarship at the Millersville University of Pennsylvania in the USA.
I started getting passionate about web developing at the age of 16 when I created my first website. Since then I kept learning day by day on my own until today.
Currently working mainly on fullPage.js. And doing side projects like CSS Shadow Gradients.
Here's a list of podcasts where I've been invited:
Do you have a podcast and want to invite me over? Contact me.
Here's a list of things I use as a developer & creator.
I also like photography! Check out my pics on flickr.
Estimates, questions, information? Don't hesitate to contact me.
Created as a side project, fullPage.js is a jQuery and JavaScript open source library to create full screen scrolling websites. Used and trusted by companies such as Google, eBay, McDonald's, EA, Nikon, Vodafone and British Airways, it is currently the most popular, used and complete library of its kind.
It is currently placed in the top 50 most starred and most forked Javascript projects in the world at Github and it has been named by printed magazines such as Web Designer Magazine.
More than 140 people have contributed to this open source project and are still contributing nowadays. Thanks to its popularity, I was named a trending JavaScript developer on GitHub multiple times.
Check the fullpage.js landing page.
The library provides a straightforward way to create full-screen auto-scrolling websites. A task that can seem simple initially but gets quite complicated once you start covering issues such as browser compatibility, touch devices, kinetic trackpad scrolling, URL linking, accessibility, responsiveness, callbacks, etc.
What makes fullpage.js so powerful is the way it simplifies all this work for any developer who wants to have a production-ready website. Tested by thousands of developers on tens of devices and over a period of more than 9 years makes it a reliable tool to use. And of course, a precious time saver that allows developers to focus on developing the site and not its inner behavior.
Available through the GPLv3 licence for React, Vue and Angular, it is ideal to use in any kind of website or commercial product.
Due to its popularity, it is also easy to find fullpage.js integrated into other platforms by third parties. From Wordpress themes to Ruby gems.
I also decided to provide official fullPage.js plugins for the Elementor and Gutenberg builder for WordPress, and soon for Divi!
The project has been growing with time and due to the diversity of requested features, I decided to provide fullpage.js extensions that can be added to enhance its behavior.
Brought into a jQuery plugin from a custom script I did for a website I worked in.
This simple script provides a way to create funny moving texts in a simple way. Not much else to say about it :)
Tired of not finding any library to validate images on the client side, I decided to create one for my own use and for anybody else who wants to save some time doing it. Yeah, we all know client-side validation is just one of the many steps and we should never rely only on it, but hey! It helps!
Personally, I'm using it in a side project where users can upload pictures and see the result in real-time before the image gets uploaded to the server side.
This library makes use of the magic numbers to detect different mime-types, therefore even when the mime-type gets modified we can still figure out it it is what it claims to be.
Here's a bunch of websites I've done from the scratch. You can see my name in the `author` meta tag in all of them.
So, I decided it was time to make the world a better place and build a CSS shadow gradients generator.
As a Gumroad user, I started to feel the need for more advanced analytics. I wanted to maximize profit and to do so I had to know how to take decisions based on data.
I started creating a small analytics panel for myself and then I realized many others were also missing better analytics in Gumroad so I decided to make it public and create a platform for it.
I created it from the scratch using CakePHP framework. It has been designed for desktops, mobile phones, and tablet devices. There is a big work on jQuery and CSS. Techniques such as a dynamic load of images were applied to speed up the load as well as javascript and CSS compression. Other techniques applied: SEO, CSS3 media queries, CSS3 animations, browser detection, and device detection.
Designed in CakePHP by using one of my plugins - fullPage.js. Its responsive design makes it look great on tablets and mobile phones. Fallbacks for old browsers have been applied for videos and css3 animations as well as techniques such as lazy loading, video compression, and CSS/JS minification. I made use of Velocity.js for SVG animations.
An artist's website that is available in 4 different languages.
Started working on it when I was a student. I created a full-screen lazy load slider for it that had to contain more than 250 images.
It contains more than 160 paintings and more than 800 images. I developed both the front-end and the back-end, allowing the author to change and upload new content. No CMS or framework was used. Plain PHP and MySQL. Resizing image techniques were applied.
A jQuery plugin to create multi-scrolling websites with two vertical panels or layouts. The screen will be split into two panels in a way they scroll vertically in opposite directions.
Designers always look for new ways of surprising their viewers, and this plugin was born with that same purpose in mind. Ideal for the concept of split layouts.
Used by the community in multiple projects such as this Wordpress theme.
Created as a side project, pagePiling.js is a jQuery library that provides an auto-scrolling effect between sections piled one over another. Used by companies such as Facebook, Logitech, or Walt Disney Family Museum.
It was created from the basics of its big brother, fullpage.js, and it shares many of its methods and options. It was inspired in part by the website of the Hugeinc design which led me later on to create a tutorial in onextrapixel about how to create a site like it.
Unlike many modern scripts or CSS3 effects, I decided to keep providing support for old browsers such as IE 8 or Opera 12, which makes the library quite unique within the field.
# Hotmail Not Working On iPhone? [Here’s the Fix]
Hotmail, now known as Outlook.com, is one of the most popular and reliable email services millions worldwide use. Nevertheless, even the most reliable service can face occasional glitches. One such issue is not working as expected in devices such as the iPhone.
It could be not sending or receiving messages, login issues, or even syncing problems. These issues can disrupt both personal and professional communication.
But if you find yourself in such a situation, don’t fret! This article will examine why Hotmail is not working on your iPhone and showcase the possible fixes. We will also look at properly setting it up to avoid such issues.
## Does Hotmail Still Work in 2023?
No, Hotmail no longer works in 2023. It was discontinued in 2013 when Hotmail migrated to Outlook. However, you can still use your Hotmail account to sign in to the Outlook.com website or app. To do so, visit the Outlook sign-in page and enter your Hotmail address. Then click next and enter your password to sign in.
Like Hotmail, Outlook provides free server versions. However, it offers additional features and improved security measures compared to Hotmail. Outlook now includes extra warnings for emails that contain data or photos downloaded from questionable sources.
In addition, Outlook allows for a collaborative calendar and an embedded to-do list. So, you can conveniently create, edit, add, and manage calendar invites to share with others.
Note: Once logged in to Outlook.com, you can change your email address from Hotmail.com to Outlook.com.
Find out more about how to upgrade from Hotmail to Outlook on the Microsoft website.
## Why Is Your Hotmail Not Working on Your iPhone?
Here is why your Hotmail or Outlook is not working on your iPhone:
* Server Issues.
  If the Hotmail servers are down or facing issues, your iPhone may not authenticate your account correctly. This can result in you being unable to log in or access your emails. Server problems can also interrupt synchronization, leading to differences between what's on the server and your iPhone.
* Network Problems.
  You may not access your Hotmail account with a weak internet connection. Hotmail needs a stable connection to fetch your email and sync with your calendar or contacts. A weak connection can also cause the Hotmail app to perform slowly or disconnect frequently.
* Outdated Outlook Version.
  If you have an old version of Outlook, it may not work well with the latest updates on your Hotmail or Outlook servers. This can cause syncing issues, making your Hotmail not work on your iPhone.
* Wrong Username or Password.
  If you enter incorrect login information, your device won't authenticate. The server will reject your login attempt, preventing access to your Hotmail account.
## 7 Ways to Fix Hotmail Not Working on Your iPhone
These are the seven ways to fix Hotmail not working on your iPhone:
Your Hotmail requires an internet connection to function. So, your first step is to check the connection. Confirm if the Wi-Fi icon appears on top of the screen, or click on your settings and tap Wi-Fi to verify if it’s turned on.
If Wi-Fi is connected, but you still experience the issue, forget the network by tapping it and then selecting “Forget Network.” Wait for a few minutes before reconnecting.
To reconnect your iPhone to Wi-Fi, tap on the network again and enter the password. If this doesn’t work, try connecting to a different Wi-Fi network.
Read more about how to connect your iPhone to a Wi-Fi network on Apple Support.
### 2. Check “Fetch New Data” Settings
Checking the “Fetch New Data” settings is another simple step to fix Hotmail not working on your iPhone. Checking this data lets you know how your iPhone receives new emails and data from your email accounts so that you can adjust the settings accordingly.
Here is how to check the “Fetch New Data” settings on your iPhone:
* Step 1. Open “Settings” on your iPhone.
* Step 2. Tap “Mail > Accounts”.
* Step 3. Click “Fetch New Data”.
* Step 4. Pick a “Setting”.
You can choose to have your Mail app fetch data automatically, manually, or on a schedule.
Note: For iOS 11 or later and iPadOS, fetching new data is automatically set by default.
Learn more about how to check your iPhone’s “Fetch New Data” settings on Apple Support.
### 3. Change “Mail Days to Sync”
Your “Mail Days to Sync” settings determine how far back in your email account history your iPhone will sync emails. If this setting is too low, you may not see older emails. Change the setting to “No Limit” to fix the issue.
Follow these steps to change the “Mail Days to Sync” settings on your iPhone:
* Step 1. Go to “Settings” on your iPhone.
* Step 2. Select “Mail > Accounts”.
* Step 3. Tap on your “Hotmail account”.
* Step 4. Click “Mail Days to Sync”.
* Step 5. Select “No Limit”.
### 4. Re-add Your Hotmail Account to Your iPhone
When nothing from changing your “Mail Days to Sync” fixes your Hotmail that is not working on iPhone, try removing and re-adding your Hotmail account. This can reset the account settings and help resolve any issues preventing your account from working.
These are the steps to re-add your Hotmail account to your iPhone:
* Step 1. Click “Settings” on your iPhone.
* Step 2. Select “Mail > Accounts”.
* Step 3. Tap on your “Hotmail Account”.
* Step 4. Select “Delete Account” at the bottom page.
* Step 5. Re-add your “Hotmail account”.
Return to your “Settings > Mail > Accounts” and select “Add Account” to re-add your Hotmail account.
### 5. Clear Your iPhone Cache
It's also possible that your Hotmail issues are caused by too many accumulated cookies and cache in your email. You can clear them by going to "Settings > Safari > Clear History and Website Data" and clicking "Confirm" to resolve this.
Read more about How to Clear App Cache on iPhone.
When you do this, it will delete your browsing history, recent searches, and website permissions to track your location or send you notifications. Clearing your browsing history in Safari does not delete any browsing histories saved by websites you visited or browsing history in other apps.
Learn more about how to clear your iPhone cache on Apple Support.
### 6. Update Your iOS Version
Updating your iOS version can also help solve unwanted issues on your iPhone, including the Hotmail issues.
These are the steps to update your iOS version:
* Step 1. Navigate to “Settings” on your iPhone.
* Step 2. Tap “General > Software Update”.
* Step 3. Select the “Download and Install” option.
* Step 4. Enter your “Passcode” to begin the “Update” process.
Find out more about how to update your iOS version on your iPhone on Apple Support.
### 7. Reset All Settings on Your iPhone
If none of the previous steps fixes your iPhone Hotmail not working, resetting all your settings can help.
Here are the steps to reset all settings on your iPhone:
* Step 1. Tap “Settings > General”.
* Step 2. Click “Transfer or Reset iPhone”.
* Step 3. Select the “Reset” option.
* Step 4. Choose “Reset All Settings”.
* Step 5. Enter your “iPhone passcode” and tap “Reset”.
* Step 6. Wait a few seconds for your device to erase and “Restart”.
Read more about resetting all settings on your iPhone on Apple Support.
## How to Properly Set Up Hotmail on Your iPhone
You can properly set up Hotmail on your iPhone from your Settings app. Remember that Microsoft upgraded Hotmail to Outlook, so your account will automatically become an Outlook account. Also, make sure your iPhone is connected to the internet.
Follow these steps to set up Hotmail on your iPhone:
### Step 1: Open “Settings” on Your iPhone
### Step 2: Tap “Mail > Accounts”
### Step 3: Select “Add Account”
### Step 4: Click “Outlook.com”
### Step 5: Enter Your “Hotmail Email Address > Next”
### Step 6: Key-In Your “Hotmail Email Password” and Hit “Sign in”
### Step 7: Click “Save”
There will come a time when your iPhone will acquire damage, no matter how durable. For instance, you can drop your phone in water, or it may end up bent in one way or another. Simply put, there are many ways an iPhone may end up damaged.
This can be heartbreaking, but it is not the end of the road. In this article, we’ll discuss where to sell a broken iPhone.
## Are Broken iPhones Worth Anything?
Yes, broken iPhones are worth quite a bit, depending on the extent of the damage and the phone’s condition. An iPhone with a broken screen can sell for $30-$50 if its hardware is in perfect condition. If sold to the right buyback program, most broken iPhones are worth more than most users imagine.
Getting a broken screen fixed by Apple can cost more than $150, while selling a broken iPhone might only lead to a $50 loss. That's why, if possible, it's best to resell a damaged iPhone and use the cash to upgrade your current model.
Related article: Fix iPhone camera not working and showing a black screen
## Why Do People Buy Broken iPhones?
People buy broken iPhones to use their spare parts or to swap pieces. Large-scale refurbished companies buy broken iPhones, repair them in bulk, and sell them at a lower price than Apple. These phones are then branded and sold as refurbished iPhones.
Some people prefer buying a broken iPhone because it is pocket-friendly, so the market for refurbished models is significant. Therefore, there are also many damaged iPhone buyers.
A broken iPhone is available for a fraction of a new one in the same model. For international buyers, it is also a good way of getting a model unavailable in their areas.
## What Are the Best Places to Sell Broken iPhones?
Generally, there is a smaller market for broken iPhones than new ones. Before selling at a local shop or online, it is best to compare different offers to get a good deal.
Let’s look at the best websites where you can sell your phone for cash near you:
### 1. BankMyCell
BankMyCell is an online marketplace that connects sellers to buyback programs. It acts as a comparison site that allows you to ensure that you are getting the best deal for your broken iPhone.
All you need to do is go to BankMyCell’s official site, choose the phone model, and select a suitable deal. Typically, you will receive the payout within 24 hours after the store gets your phone.
If the buyer fails to purchase the iPhone, BankMyCell will return the phone to you without additional payments.
### 2. GadgetGone
GadgetGone is a professional refurbishing company that repairs broken iPhones for much less than Apple. But instead of fixing the phone and sending it back to you, they purchase it and give you a reasonable quote.
To get started, get a quote from GadgetGone, ship the device, and get paid. You can get your payout in as little as two days.
### 3. Gazelle
Another place to sell a broken iPhone for cash is Gazelle. The company started its operations in 2006, making it one of the oldest buyback programs in the game.
When looking to sell a broken iPhone on Gazelle, first, you have to fill out an online form. Select your carrier, and answer a few questions about the phone’s condition.
The site then presents you with an offer. If you accept it, you'll receive a shipping kit to use when sending the phone in for inspection. Payment arrives in approximately 7 to 10 days.
### 4. SellCell
SellCell works by comparing prices from leading buyers across the US to help sellers get the best deal. The broken iPhone’s price depends on its model, network, internal storage capacity, and overall condition.
To sell on SellCell, find the make of your iPhone, select the best deal, and ship it to the buyback company for free.
When using this online market, it is essential to note the price may change from the initial quotation based on the condition, as the company will examine your broken phone upon receiving it. But it guarantees you’ll get the most cash for your device.
### 5. SaveGadget
SaveGadget offers a quick and convenient way to sell a broken iPhone. The platform has an easy process to assess the model and condition of your iPhone.
Luckily, SaveGadget offers free USPS shipping labels for trade-ins. All sellers need to do is print it out and ship the product within 14 days.
However, note the price might change after official assessment once your gadget reaches SaveGadget. Therefore, it is vital to ensure everything is carefully packed to avoid further damage during transportation.
If the offer is adjusted, you have 5 days to accept or decline the new price before receiving payment. If you fail to respond within this timeframe, you'll be paid based on the adjusted offer.
### 6. Decluttr
Decluttr operates similarly to Gazelle. Search for your iPhone model, select internal storage size, service provider (if any), and overall condition. You’ll receive an offer that you can accept or reject.
After accepting the offer, the platform sends you a prepaid mailing label for shipping the iPhone. Sending the charger, original box, or other accessories is optional.
You’ll get an updated quote if the phone doesn’t match the described conditions. You get paid after accepting the new quote.
## Things You Should Do Before Selling Your Broken iPhone
These are things you should do before selling a broken iPhone online or at a local store:
* Back Up Your Data.
Backing up the photos on your iPhone and any other valuable information lets you quickly retrieve the data when needed. Go to "Settings > [your name] > iCloud Backup > Back Up Now".
* Transfer All Your Data to Your New Device.
If you have a new iPhone, transfer all data from the previous one using iCloud, iTunes, or Finder. If you no longer have access to the old device, use two-factor authentication to regain access to your account.
* Get Rid of All Personal Information.
Sign out of all personal and Apple accounts to prevent your details from landing in the wrong hands.
* Unpair All Devices.
Unpair your Apple Watch or any other Apple devices connected to your old iPhone. Also, remove the old iPhone from your list of trusted devices.
* Conduct a Factory Reset.
Go to "Settings > General > Transfer or Reset > Erase All Content and Settings" to restore your iPhone to factory settings. Make sure you know your Apple ID if you have enabled Find My.
* Cancel Any Active Service Plans.
Call your service provider and ensure you don't have an active plan before selling your old iPhone. If you aren't using a SIM card, ask your provider about transferring the service to the new owner.

Learn more about what to do before you sell, give away, or trade in your iPhone on Apple Support.
## What Can You Do With an Unsellable Broken iPhone?
If, for any reason, your phone is unsellable, here are other good options to consider:
* Donate Your iPhone.
Charitable organizations can put your broken iPhone to good use. One such organization is Cell Phones for Soldiers, which provides military members and veterans with free communication services.
* Recycle It.
Instead of throwing your iPhone away, recycle it to save the environment. You can drop off your phone for free at many recycling centres.
* Trade It in at Apple.
You can turn in your broken iPhone at Apple and get credit towards a new phone. If it isn't eligible for trade-in, Apple recycles it for free. Read more about Apple trade-in and upgrade programs on the Apple website.
## Yes, It Is Possible to Sell a Broken iPhone
Don’t just throw your iPhone away. If you’re looking for where to sell your phone for cash near you, several online platforms offer buyback programs. However, note that the value you’ll get depends on the phone’s condition.
If you can’t sell the phone, consider recycling, trading it in, or donating the iPhone to a good cause.
# Get Crunchyroll on Samsung TV [ ✓ Best Ways ]
Crunchyroll is the best place for anime lovers. But if you’re trying to watch it on a Samsung TV, you’ve probably noticed Crunchyroll is unavailable.
But did you know there’s a way to access Crunchyroll on your Samsung TV even if the device doesn’t natively support it?
In this article, we’ll cover how to watch Crunchyroll on Samsung TV and why accessing the site on all TVs is impossible. Let’s get into it!
## Why Doesn’t Samsung TV Have Crunchyroll?
Samsung TVs don’t have Crunchyroll because the app is only available for Android OS, while Samsung TVs operate on Tizen or Orsay OS. Samsung supported the Crunchyroll app for TVs back then but discontinued it long ago. Samsung did not release a reason for the discontinuation.
Crunchyroll is an entertainment company distributing and streaming anime and television series. It also offers Japanese comics, movies, and live-action dramas. Its anime library has over 16,000 hours of content.
Right now, the only smart TVs that support Crunchyroll are Google, Android, Roku, and Amazon Fire TVs.
Find out more about the Crunchyroll-ready devices.
## How to Watch Crunchyroll on Samsung TV
Crunchyroll is available on gaming consoles, including Xbox, Nintendo Switch, PlayStation 4 and 5. But if you do not have these devices available, can you get Crunchyroll on Samsung TV?
These are the alternative ways to watch Crunchyroll on Samsung TV:
### 1. Cast Crunchyroll on a Samsung TV Using an iPhone
Since Crunchyroll is available on iPhone, you can screen mirror the streaming app to your Samsung TV. This requires a reliable internet connection.
Here are steps on casting Crunchyroll on Samsung TV using an iPhone:
Important! Before casting, ensure your iPhone and Samsung TV are connected to the same Wi-Fi.
* Step 1. Download & open the Crunchyroll app for iPhone.
* Step 2. Look for the casting icon.
* Step 3. Select your “Samsung TV” on the list of devices available.
* Step 4. Wait for around 10 seconds for the casting to complete.
### 2. Mirror Crunchyroll from Samsung Phone to Samsung TV
If you are using a Samsung Galaxy phone, Samsung has a feature called Smart View that allows you to connect the two devices.
Smart View is a screen mirroring app that allows you to view all your phone’s content on your TV screen. However, it’s worth noting that it does not work for non-Samsung smartphones.
These are the steps to mirror Crunchyroll from a Samsung phone to a Samsung TV:
* Step 1. Open “Settings > General > External Device Manager”.
* Step 2. Click “Device Connection Manager > Device List”.
* Step 3. Select your Samsung phone from the list & click “Allow”.
* Step 4. Access “Smart View” on your phone.
* Step 5. Select your TV from the list of available devices.
* Step 6. Click “Start Now” to begin casting.
* Step 7. On your TV, select “Allow” to pair your phone and TV.
* Step 8. Open Crunchyroll on your phone & start watching.
Learn more about Samsung Smart View.
### 3. Use Chromecast
If your Samsung TV has no built-in casting option, you can use a Chromecast device to cast Crunchyroll into your TV.
Important! Your Samsung TV and smartphone should be connected to the same Wi-Fi network.
Here’s how to watch Crunchyroll on Samsung TV using Chromecast:
* Step 1. Connect the Chromecast to an HDMI port on your TV.
* Step 2. Press `Home` on your remote.
* Step 3. Select “Chromecast” from the TV’s sources.
* Step 4. Download & open Crunchyroll on your smartphone.
* Step 5. Choose a show to watch on Crunchyroll.
* Step 6. Tap the “Cast” icon.
* Step 7. Select your Chromecast from the list of available devices.
### 4. Use an HDMI Cable
You can also connect a laptop or computer directly to your TV and watch Crunchyroll in a browser. These are the steps for setting up an HDMI connection:
* Step 1. Connect the HDMI cable to your laptop or computer & TV. (You can get one from Amazon)
* Step 2. Press the `Home` button on your remote.
* Step 3. Select the “PC” option to display your laptop screen on the TV.
* Step 4. Open “Crunchyroll” on your laptop or computer.
* Step 5. Select a show & start watching.
### 5. Stream Crunchyroll Using an Xbox
If you have a gaming console, you can use it to watch Crunchyroll on Samsung TV. The Xbox, Nintendo Switch, PlayStation 4, and PlayStation 5 all have access to the Crunchyroll app.
Note: By connecting your console to your Samsung TV, you are using it as the console’s monitor. This allows you to watch Crunchyroll once you install it on the console.
Here are the steps to stream Crunchyroll on Samsung TV through Xbox:
* Step 1. Plug an HDMI cable into your Xbox & Samsung TV.
* Step 2. Turn on the console and TV.
* Step 3. Press the `Home` button on the remote.
* Step 4. Select “Xbox” from the input sources.
* Step 5. Follow the on-screen instructions to set up the Xbox app on Samsung TV.
* Step 6. Go to the “app store” on your console.
* Step 7. Download & install the “Crunchyroll” app.
* Step 8. Select a show and start watching!
Read more on how to Connect your Xbox One to Your Home Theater System.
### 6. From Samsung TV Browser
Using a smart TV browser can be tedious. However, if you have a stable Wi-Fi network, it is one of the best options to watch Crunchyroll on Samsung TV.
Follow these steps to access the Crunchyroll website on Samsung TV:
* Step 1. Press the `Home` button on your remote.
* Step 2. Scroll & select “Internet”.
* Step 3. In the browser, search crunchyroll.com.
* Step 4. Log in to your Crunchyroll account and start watching.
Note! You can use a wireless mouse or keyboard to make searching and navigating the Samsung browser easier.
### 7. Download Crunchyroll Using Amazon Firestick
Amazon Firestick is one of the streaming devices that supports Crunchyroll. The app is free to download on the device, but you must pay a monthly subscription to watch anime ad-free.
You can get Amazon Firestick on Amazon.
These are the steps to get Crunchyroll on Samsung TV using a Firestick:
* Step 1. Connect the Firestick to your Samsung TV.
* Step 2. Click `Home` on your remote.
* Step 3. Change the source to “Firestick”.
* Step 4. Download “Crunchyroll” from Amazon’s app store.
* Step 5. Open Crunchyroll & start watching your favourite show.
### 8. Use Roku Stick
If your location does not support Amazon Firestick, you can use Roku as an alternative streaming device for watching Crunchyroll on Samsung TV.
Here are the steps to access Crunchyroll on Roku:
* Step 1. Press the `Home` button on your Roku remote.
* Step 2. Go to “Streaming Channels > Search Channels”.
* Step 3. Type in “Crunchyroll” in the search box.
* Step 4. Select “Add Channel” to download Crunchyroll on your Roku.
* Step 5. Click “Go to Channel” to open and start watching Crunchyroll.
Here’s the link to the Chrunchyrool channel for Roku
## Does Crunchyroll Stream All Anime?
No, Crunchyroll doesn’t stream all anime, but the most popular Anime is available on the platform. Crunchyroll has partnerships with Japanese media companies and licenses to many anime titles. This allows the platform to stream anime series within a day after the original Japanese TV broadcast.
Apart from anime, you can also enjoy Asian dramas on CR from Japan, South Korea, Singapore, Taiwan, and China. The platform has a free ad-supported subscription and premium membership.
There are three premium tiers: $7.99, $9.99 and $14.99. These paid tiers give users unlimited access and the privilege of watching episodes as soon as they are released.
Learn more about Crunchyroll New Membership Tiers.
## It’s Possible to Watch Crunchyroll on Samsung TV
If you’re wondering why Crunchyroll is not on Samsung TV, it is because the devices don’t run on Android OS.
However, there are other ways to watch the streaming app on Samsung TVs using third-party devices. These devices include gaming consoles, Roku, Firestick, and Chromecast.
You can also directly cast Crunchyroll from your iPhone or Android smartphone to Samsung TV. Even though not all anime series are available on Crunchyroll, there is still an extensive collection of 40,000+ episodes.
# Hisense TV Not Connecting to Wi-Fi [✓Easy Solutions]
Having a smart TV that can't connect to the internet defeats the whole purpose of owning one. So, your Hisense TV not connecting to Wi-Fi can be inconvenient, especially in an era where online streaming is an essential form of entertainment.
We will explore why your Hisense TV is not connecting to Wi-Fi and provide step-by-step instructions on how to connect it, which should help you avoid similar issues in the future. Let's get started!
## How to Connect Your Hisense TV to Wi-Fi
If you just bought your Hisense TV and are unsure how to set it up, you must first connect it to your home Wi-Fi.
Follow the steps to connect Hisense TV to Wi-Fi:
### Step 1: Turn On Your “Hisense TV”
### Step 2: Press the `Home` Button on Your TV Remote
### Step 3: Select “Settings > Network > Network Configuration”
### Step 4: Click the Wi-Fi Type You Have
If you’re using a wireless network, select “Wireless”. Otherwise, select “Ethernet”.
### Step 5: Select Your “Wi-Fi Network” From the List of Available Networks
### Step 6: Enter Your Wi-Fi Password & Click “OK”
## Why Is Your Hisense TV Not Connecting to Wi-Fi?
These are the common reasons why your Hisense TV is not connecting to Wi-Fi:
* Weak Wi-Fi Signal.
When your Wi-Fi signal is unstable, your Hisense TV keeps on disconnecting from Wi-Fi or cannot establish a connection with the router. Remember that Wi-Fi signals weaken over distance or when obstructed by physical barriers like walls.
* Software Issues.
TV software bugs and glitches can cause issues connecting the TV to the Wi-Fi network. The problem may also happen if it’s your Wi-Fi router that’s glitching.
* Network Congestion.
When there are too many devices connected to your Wi-Fi, it may become overloaded and unable to handle the traffic. This prevents your TV from establishing a connection, as the data packets it sends and receives can get lost in the congestion.
* VPN Issues.
VPNs route internet traffic through secure servers. If your VPN experiences issues such as server downtime, your Hisense TV won’t connect to Wi-Fi. VPN issues prevent the TV from securely connecting, leading to transmission problems.
## How to Know if Your Hisense TV Has Wi-Fi
To know if your Hisense TV has Wi-Fi, check the network configuration settings or the Wi-Fi icon on the screen. Go to "Network" in the settings menu and look for the "Network Configuration" option. If the status shows "Connected" or displays the Wi-Fi network name, your Hisense TV has Wi-Fi.
You can also check your Hisense TV’s screen, particularly the taskbar, for a Wi-Fi icon. A highlighted Wi-Fi icon indicates that your TV has a network connection. But, if it is empty or greyed out, your TV is disconnected.
## 9 Ways to Fix Your Hisense TV Not Connecting to Wi-Fi
Here are the nine ways to fix Hisense TV not connecting to Wi-Fi:
### 1. Check Your Internet Connection
If your Hisense TV is not connecting to Wi-Fi, first check whether your internet connection is working properly by trying to access the internet from other devices, such as your smartphone or tablet.
If these devices connect, your Wi-Fi is functioning without issues. But if the other devices cannot connect either, check your router for issues.
You should see a green light to confirm that the internet is available on your router. A red light indicates that there is no internet connection.
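If you have a laptop on the same Wi-Fi network, you can run the same check from the command line with the standard `ping` utility. This is only a minimal sketch assuming a Unix-like system; the router address 192.168.1.1 is an example, so substitute your router's actual IP:

```shell
# Hypothetical connectivity check from a laptop on the same Wi-Fi network.
# 192.168.1.1 is an assumed router address; substitute your router's real IP.
check_host() {
  # -c 1: send a single ping; -W 1: wait at most 1 second for a reply (Linux)
  if ping -c 1 -W 1 "$1" > /dev/null 2>&1; then
    echo "$1 is reachable"
  else
    echo "$1 is unreachable"
  fi
}

check_host 192.168.1.1  # your router: unreachable suggests a local Wi-Fi problem
check_host 8.8.8.8      # a public DNS server: unreachable suggests no internet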
### 2. Restart Your Hisense TV
If your internet connection is working fine, a simple restart can be what you need. Restarting your Hisense TV gives it a fresh start, which could help solve minor internet issues.
Follow these steps to restart your Hisense TV:
* Step 1. Press the `Home` button on your TV remote.
* Step 2. Navigate to “Settings”.
* Step 3. Tap the “Device preferences” option.
* Step 4. Hit the “About” option.
* Step 5. Click “Restart” and “Confirm”.
### 3. Soft Reset Your Hisense TV
Another simple way to fix the Hisense TV not connecting to Wi-Fi issue is to unplug your TV and plug it back in. This soft resets the TV and can resolve power-related problems.
A soft reset will also clear corrupted temporary data that could be causing the problem.
* Step 1. Turn off the “Hisense TV”.
* Step 2. Unplug the “Power cord” from the power outlet.
* Step 3. Wait for about 60 seconds.
* Step 4. Plug your TV back into the power source & turn it on.
### 4. Clear Network Cache
Clearing your network cache eliminates all the unnecessary data from your TV and improves its performance.
Follow these steps to clear the network cache on your Hisense TV:
* Step 1. Tap “Quick menu” on your Hisense TV.
* Step 2. Click “Settings > System > Application Settings”.
* Step 3. Press “Clear cache > OK”.
* Step 4. Select “Clear > OK” to delete the cache.
* Step 5. Navigate to “Delete cookies > OK”.
* Step 6. Hit “Clear > OK” to delete cookies.
### 5. Move the Router Closer
Wi-Fi connections become weaker as the distance between the router and the device increases. So, if your router is too far away, your Hisense TV's Wi-Fi might keep turning off or have difficulty connecting.
Try moving the router closer to your Hisense TV and make sure to install your router somewhere unobstructed.
If your router transmits at 2.4GHz, ensure it is not located beyond four walls. Although 2.4GHz signals can travel through walls, such obstructions can weaken them.
If your router transmits at 5GHz, position it in the same room as your TV for optimal signal strength.
### 6. Update Your Hisense TV Firmware
If everything else is okay, but your Hisense TV is still not connecting to Wi-Fi, update your firmware to get the latest bug fixes. Updating your firmware can resolve issues like weak Wi-Fi signals or unstable connections.
Here’s how to update your Hisense TV firmware:
* Step 1. Press the “Home” button on your Hisense TV remote.
* Step 2. Select “Settings”.
* Step 3. Scroll down and tap “Device preferences > About > System Update”.
* Step 4. Install the available update.
Learn more about updating your TV’s firmware from Hisense Official Support.
### 7. Use an Ethernet Cable
Using an Ethernet cable will rule out the issues related to the wireless Wi-Fi network, like frequent disconnection or unstable signal. Besides, it will ensure high and reliable internet speeds.
To set up your Ethernet connection, plug one cable end into the port on the back of your TV and the other into the router. Ensure the cable is long enough to reach from your TV to your Wi-Fi router.
Read more about using an Ethernet cable from Hisense Setup Guide.
### 8. Reset Your Hisense TV
If you can’t find the cause of your Hisense TV not connecting to Wi-Fi, a simple reset returns your device to its original settings, which could help the issue.
Note: Resetting your Hisense TV will erase all data. Make sure to back up important information first.
* Step 1. Click the `Home` button on your Hisense TV remote.
* Step 2. Select “Settings”.
* Step 3. Choose “Device preferences”.
* Step 4. Tap “About”.
* Step 5. Select “Reset” and “Confirm” to complete the process.
### 9. Contact Hisense Customer Support
If a reset doesn’t resolve your issue, it’s time to contact Hisense customer support. You can visit their official website or contact them via their customer support number. The customer support team will help you troubleshoot the issue or provide a replacement if necessary.
It’s worth noting that Hisense TVs have a one-year legal warranty. You may be eligible for a free repair or replacement if your TV had damage within a year after you bought it.
## Can You Connect Your Phone to Your Hisense TV Without Wi-Fi?
Yes, you can connect your phone to your Hisense TV without Wi-Fi using a USB cable, Bluetooth, or screen mirroring. This will allow you to watch shows saved on your phone on your TV screen. But without Wi-Fi, you won’t be able to watch online content from streaming apps.
You can use a phone-to-USB cable (like this one from Amazon) to connect your phone to your TV's USB port. Then, change your Hisense TV's source to the USB input where your phone is connected. This will allow you to access your phone's content on your TV.
Another option is to enable Bluetooth connectivity on both devices and pair them up to establish a connection. You can also use screen mirroring apps supported by your phone and TV, such as Miracast and Chromecast.
## Does Hisense TV Support 5G?
Yes, some Hisense TVs support 5GHz Wi-Fi, but not all models are 5GHz-capable. To check if your model supports the 5GHz network, refer to your TV's user manual or search for your model's specifications online. Alternatively, you can look for the specific Wi-Fi name in the list of available networks.
If your model supports the 5GHz network, you can connect to it by going to your network settings on the TV and selecting the 5GHz network.
If your TV doesn’t have built-in Wi-Fi, you can use a USB Wi-Fi dongle, like this one from Amazon, to enable connectivity.
# Network Not Available Chromebook [ ✓Quick Fix ]
For Chromebook users, a stable internet connection is crucial for online activities. But occasionally, they may encounter an error saying “network not available” on their Chromebooks, interrupting their online activities.
Let’s explore the common causes and fixes for the error about the unavailable network on Chromebook.
## Why Does Your Chromebook Say Network Not Available?
These are the common reasons why your Chromebook says the network is not available:
* Outdated Software.
Your Chromebook won't connect to Wi-Fi if the operating system is outdated, as it can cause compatibility issues between the device and the network.
* Wi-Fi Connection Problems.
If you have a weak Wi-Fi network, your Chromebook may have difficulty connecting to it. The same issue can happen if your Wi-Fi is out of your device's range.
* Router Issues.
Issues with the router, such as incorrect configurations or hardware malfunctions, can disrupt your Wi-Fi network. This can lead to the network not available error on your Chromebook.
* Incorrect Wi-Fi Settings.
Misconfigured Wi-Fi settings will prevent your Chromebook from connecting. Similarly, typing the wrong password when setting up your device's Wi-Fi can cause a network unavailable issue.
## How to Enable Network on Your Chromebook
Here is how to connect your Chromebook to Wi-Fi:
### Step 1: Open Your Chromebook “Settings”
To locate your Chromebook settings, click on the time icon in the bottom-right corner of your screen.
### Step 2: Turn the “Wi-Fi” On
### Step 3: Tap the "Wi-Fi" Option to See the Available Networks
### Step 4: Choose the “Network” and “Connect”
Read more about how to enable network on your Chromebook.
## 7 Ways to Fix Network Not Available on Chromebook
These are the seven ways to fix the network not available issue on a Chromebook:
### Method 1. Examine Your Wi-Fi Connection
Your Chromebook won’t connect if you accidentally click “Forget Network” on your Wi-Fi. Go to your device’s “Quick Settings” to see if there is a network connection. If not, navigate to your Wi-Fi settings, look for your Wi-Fi network, and enter its password to reconnect.
If your Wi-Fi adapter has a physical switch, check if you accidentally moved the switch to “off”. If that doesn’t work, disconnect and reconnect to your Wi-Fi.
To disconnect and reconnect to your Wi-Fi, go to your Wi-Fi settings and turn it off. Wait a few seconds, and turn the Wi-Fi on for the device to automatically reconnect to the network.
Learn more from Chromebook help.
### Method 2. Power Cycle Your Wi-Fi Router
Power cycling your router gives it a fresh start, which can help resolve common connection issues, including network not available on Chromebook.
Follow these steps to power cycle your Wi-Fi router:
* Step 1. Turn off your “Router”.
* Step 2. Unplug the “Power cord” from the back of the router.
* Step 3. Wait for 60 seconds.
* Step 4. Plug the “Power cord” back in
* Step 5. Turn your router on.
### Method 3. Restart Your Chromebook
Your Chromebook may fail to connect to Wi-Fi if its software has bugs or glitches. Fortunately, a simple restart can fix the issue.
Here’s how to restart your Chromebook:
* Step 1. Press and hold the `Power` button for 3 seconds.
In older Chromebooks, the `Power` button is one of the keys on the keyboard.
* Step 2. Wait for a few seconds.
* Step 3. Press the `Power` button again to restart your device.
Read more on how to restart your Chromebook.
### Method 4. Use a USB Ethernet Adapter
You can switch to an Ethernet cable to avoid experiencing Wi-Fi issues.
Cable connections are not affected by issues affecting wireless networks. They are more reliable and offer high-speed internet connectivity.
However, it’s worth noting that most Chromebook models do not support an Ethernet connection, so you must use a USB Ethernet adapter. (Like this one from Amazon)
Plug your adapter into the USB port on your Chromebook. Then, go to your Chromebook settings and select Ethernet from the available connection types to establish a connection.
### Method 5. Update Your Chromebook Software
Updating your device improves performance, stability, and Wi-Fi connectivity, fixing issues like the network not available on Chromebook.
These are the steps to update your Chromebook software:
* Step 1. Go to your “Settings > About ChromeOS” on Chromebook.
* Step 2. Hit “Check for updates”.
Your device will automatically install the available update once you click "Check for updates".
* Step 3. Restart your device.
Note: When using cellular data, the Chromebook will alert you of the required mobile data for updates.
Learn more about updates on Chromebook help.
### Method 6. Reset Your Chromebook to Factory Settings
When none of these tricks works, you can factory reset your Chromebook as a last resort. This removes anything that could be triggering the network not available error, whether incorrect configurations, software conflicts, or system file errors.
Before resetting, ensure your files are backed up and sign out of your Google account.
Here is how to reset your Chromebook to factory settings:
Important! A factory reset will erase all your Chromebook’s contents.
* Step 1. Press and hold `Ctrl+Alt+Shift+R`.
* Step 2. Select “Restart”.
* Step 3. Tap “Powerwash > Continue”.
* Step 4. Log back into your Google account.
Note: After resetting your Chromebook, the first account you use to sign in will become the device’s owner.
Read more on resetting your Chromebook from Chromebook help.
### Method 7. Contact Chromebook Customer Support
If all else fails, you can contact Chromebook customer support via phone, online support, or the Google community forums. They can provide more guidance to fix any problems you may have.
Before doing so, check your warranty first. If your Chromebook is still under warranty, the manufacturer may repair or replace it for free.
But if your warranty has expired, you may have to pay for any repair or replacement costs. Remember that Chromebook gives a limited one-year warranty on products from the day of purchase.
Learn more about how to get Chromebook help.
## How to Force Chromebook to Connect to Your Home Wi-Fi
You can force the Chromebook to connect to your home Wi-Fi by choosing a preferred network. This way, your Chromebook will only connect to the selected network, even if you can access multiple networks nearby. Before doing so, connect your Chromebook to the Wi-Fi network you want to set as preferred.
Follow these steps to force your Chromebook to connect to your home Wi-Fi:
### Step 1: Open Your “Chromebook Settings”
### Step 2: Hit “Wi-Fi”
### Step 3: Select the “Network Connection” You Want to Set
### Step 4: Tap on “Prefer This Network”
Learn more about Chromebook help.
## Can You Connect Your Chromebook to the Internet Without Wi-Fi?
Yes, you can connect your Chromebook to the internet without Wi-Fi using your phone’s cellular data via a portable hotspot or USB sharing. However, Google does not recommend using these methods, as mobile data is less reliable than a Wi-Fi network.
To use mobile data on your Chromebook, look for the "Mobile Hotspot" option in your phone's settings and set it up by assigning a name and password. Then, open your Chromebook's Wi-Fi settings, look for your hotspot's name, and type its password to connect.
We have a whole article on how to use Hotspot on a Chromebook.
Moreover, you can use USB sharing by connecting your smartphone to your Chromebook via a USB cable. Then, go to your phone’s settings and turn on USB tethering. This will allow your Chromebook to use your phone’s data connection to connect to the internet.
Learn more about using mobile data on Chromebooks.
# Frozen Roku? Here’s How to Fix It [Proven Ways]
Date: 2023-10-13
Having your Roku freeze can be very annoying, especially when it happens in the middle of your movie time. Freezing is a common issue with Roku TVs and other Roku devices, so do not panic. There's a solution.
There are different solutions depending on why your Roku is freezing. Let’s examine these reasons to help you fix the frozen issue with Roku.
## Why Is Your Roku Frozen?
These are the main reasons why a Roku TV freezes:
* Faulty HDMI Connections.
Improper HDMI connections can cause freezing, as signals may not travel properly from the source to your TV. Check for any loose connections or faulty cables.
* Network Issues.
Roku devices, like Android TVs, need a stable internet connection to run. Poor connectivity can make the device unresponsive, causing freezes and disrupting processes such as software updates.
* Faulty Remote.
A defective Roku remote cannot send signals to your TV. This can give you the impression that your Roku device is frozen when, in reality, it is your remote that's not working.
* Incorrect Display Settings.
Roku TVs can become unresponsive after changing features like the screensaver or display modes in the "Auto-Adjust Refresh" settings.
* Software & System Bugs.
Bugs can corrupt the Roku system and cause it to freeze. Your Roku can pick up bugs when downloading content from untrusted websites and sources.
* Software Update Issues.
Your device may be running an outdated version of its software. Old Roku software versions have bugs and glitches that the company fixes through updates. If you don't update your device's software, those bugs continue to affect it.
* Damaged Parts.
Hardware damage to the TV where you installed your Roku device may affect its behaviour, causing it to freeze. Make sure you get your device checked by qualified personnel or contact customer support.

If your Roku displays a black screen, here's what to do.
## How to Fix a Frozen Roku Device
There are numerous ways to fix an unresponsive Roku TV, including simple DIY solutions with reliable results:
### Method 1. Check the HDMI & Disconnect Other Devices
Checking your HDMI connection is a reliable solution, especially when using different devices with Roku, including consoles.
The first option to fix an HDMI connection is by unplugging and plugging it back on. If you still have a frozen screen, try changing the HDMI ports, as the port you currently connect to may be damaged.
The devices connected to your Roku TV via HDMI may also cause it to freeze due to system overload. So, disconnecting the devices you are not using at the moment is ideal.
### Method 2. Check Network Connection
An unstable network connection is one of the leading causes of Roku freezing. Since Roku is unresponsive, you’ll have to use an alternative device like a phone to confirm the internet status.
Here’s how to use a phone to check network status:
* Step 1. Go to “Settings”.
* Step 2. Tap “Network & Internet”.
* Step 3. Select your Wi-Fi connection.
This should be the one set up with the frozen Roku.
* Step 4. Scroll down to check the status and other details.
Your Wi-Fi’s signal strength should be “Good” or “Fair” to work properly on your Roku TV. If it is poor, you should reboot the connection by unplugging and reconnecting the Wi-Fi router to troubleshoot the network and freezing problem.
### Method 3. Restart Roku When It’s Frozen
With the Roku TV frozen, you cannot access the Reset function on its settings. In this case, use the remote to fix the problem.
Here’s how to restart Roku TV when it’s frozen by using a remote:
* Step 1. Press “Home” five consecutive times.
* Step 2. Press the “Up” button once.
* Step 3. Press the “Rewind” button twice.
* Step 4. Press the "Fast forward" button twice.
Your Roku will reboot, taking 10-30 seconds to turn on and display the TV menu.
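If your Roku is still reachable over the local network, the same button sequence can also be scripted through Roku's External Control Protocol (ECP), the HTTP interface the Roku mobile app uses on port 8060. A minimal sketch, assuming the device's IP address is known (192.168.1.50 below is a placeholder); `Rev` and `Fwd` are ECP's names for the rewind and fast-forward keys:

```python
import urllib.request

# The restart sequence from the steps above, expressed as ECP key names.
# "Rev" and "Fwd" are ECP's names for the Rewind and Fast forward buttons.
RESTART_SEQUENCE = (
    ["Home"] * 5   # press "Home" five consecutive times
    + ["Up"]       # press "Up" once
    + ["Rev"] * 2  # press "Rewind" twice
    + ["Fwd"] * 2  # press "Fast forward" twice
)

def keypress_urls(roku_ip: str) -> list[str]:
    """Build one ECP keypress URL per button press in the sequence."""
    return [f"http://{roku_ip}:8060/keypress/{key}" for key in RESTART_SEQUENCE]

def send_restart(roku_ip: str) -> None:
    """POST each keypress to the device (requires network access to it)."""
    for url in keypress_urls(roku_ip):
        urllib.request.urlopen(urllib.request.Request(url, data=b""))

# 192.168.1.50 is a placeholder; substitute your Roku's actual IP address.
print(keypress_urls("192.168.1.50")[0])  # http://192.168.1.50:8060/keypress/Home
```

Whether this works while the device is frozen depends on how badly it is stuck; the physical remote sequence above remains the primary method.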
### Method 4. Adjust Refresh Rate Settings on Roku
If one of the methods above unfroze your Roku but it has frozen again, repeat the technique that worked previously. Then, while the device is still responsive, adjust its refresh rate settings.
Here’s how to adjust refresh rate settings on Roku:
* Step 1. Press the remote “Home” button.
* Step 2. Access Roku “Settings”.
* Step 3. Scroll to “Display Type”.
* Step 4. Choose your preferred display resolution.
The 1080p and 720p resolutions are the best options for your Roku device, as they work with standard internet speeds and reduce freezing.
More on Roku Support.
### Method 5. Perform a Factory Reset
Factory resetting your Roku device can also get rid of the freezing issue.
Since you can’t access your Roku’s factory reset option in the settings, you can use the physical reset button at the side of your device as an alternative.
Note! Performing a factory reset will erase any personalized settings and any account details used on Roku TV.
To factory reset a frozen Roku device while it is on, press and hold the `Reset` button for 10 seconds.
You can read more information about factory reset on Roku devices on Roku Support.
### Method 6. Update Roku
Your Roku device should work smoothly after resetting it. To prevent it from freezing again, make sure to update it to the latest operating system. This will help avoid future freezing issues and any outdated software-related malfunctions.
Here are the steps for updating your Roku TV:
* Step 1. Press the “Home” button on the remote.
* Step 2. Access “Settings > System”.
* Step 3. Scroll and choose “System Updates”.
* Step 4. Choose “Check Now”.
After selecting “Check Now”, your Roku device will show available updates. Click on the update to install it.
Find more information on Roku Support.
### Method 7. Contact Roku Support for Freezing Issue
If all else fails, contacting Roku support should be your next option. They can provide you with a reason why your Roku freezes and how you can fix it.
You can reach the support team via the official Roku website or call 1-888-600-7658. Also, enquire if your device is still within the warranty period to explore free professional services.
## How to Prevent Roku From Freezing
You can prevent Roku from freezing by keeping your Roku device's software up to date. Roku releases OS updates for its streaming devices regularly. The updates also improve security and device performance and fix bugs.
You can confirm if there are updates available by going to: “Settings > System > System Update > Check Now”.
Updating your Roku will not only fix potential bugs but also let you access new features.
## There are Multiple Ways to Fix Roku’s Freezing Issue
It can be frustrating when your Roku suddenly freezes while watching your favourite movie. If you ever experience this, you can solve it by checking the HDMI and internet connections.
Restarting Roku when frozen may involve more complex solutions, such as modifying the device’s settings.
But if nothing works, your best option is to contact Roku's customer support, whose lines are available 24/7.
# Onn Roku TV Remote Not Working [Easy Fix]
Date: 2023-10-12
Categories:
Tags:
Remote controls offer convenience by putting control at your fingertips, but what happens when your Onn Roku TV remote is not working? This article will cover why your Onn Roku TV remote is not working and possible fixes.
Let’s dive in!
The main reasons why your Onn Roku TV Remote is not working are drained-out batteries, poor pairing, or obstruction of IR signals. Stuck remote buttons, water spillage, and wear and tear can also cause it to malfunction.
Here’s an in-depth look at possible reasons why your Onn Roku TV remote is not working:
* Depleted Batteries. The batteries may be weak or drained, which can cause your remote to stop working. Roku remotes use lithium-ion batteries, which drain over time.
* Pairing Issues. Some remotes, such as Roku Voice remotes, need pairing to function. If not paired correctly, the remote malfunctions.
* IR Signal Obstructed. Remotes depend on infrared (IR) to transmit a signal to the TV. Any obstruction between the Onn Roku remote and the TV blocks signal transmission.
* Stuck Buttons. Sometimes buttons get stuck because of dust accumulation inside the remote. Buttons that cannot be pressed down affect the remote's functioning.
* Wear and Tear. Like any other device, remotes are susceptible to wear and tear from continuous usage. The circuit board may accumulate dust, or the electrical contact between the button pad and the board may fail.
* Liquid Spillage. Accidentally spilling liquid on the remote can tamper with its functioning, causing it not to work.
## How to Fix Onn Roku TV Remote Not Working
Let’s look at some troubleshooting techniques when your Onn Roku TV remote is not working.
### Method 1. Change Remote Batteries
If it’s been quite some time since changing your remote batteries, they have possibly run out. Slow performance is an indicator of exhausted batteries.
You can change your remote’s batteries by removing the lid on its back and gently propping out the old batteries. Next, install a new pair of batteries and put the lid back on.
### Method 2. Manually Clean the Remote
It is important to clean your remote once in a while. If your Onn Roku TV remote is not working and feels slimy and sticky, it's time to clean it.
Here are steps to manually clean your Onn Roku TV remote:
Note: You will need tweezers, a scalpel, a screwdriver, cotton, and alcohol for this method.
* Step 1. Take out the batteries to access the screws.
* Step 2. Unfasten the screws using the screwdriver.
* Step 3. Remove the front part of the remote using the scalpel.
* Step 4. Clean out the dirt buildup on the front cover, button pad, & circuit board.
* Step 5. Return everything to its original position.
* Step 6. Clean the back cover.
* Step 7. Fasten the screws & insert the batteries.
* Step 8. Cover the battery & test out your remote.
### Method 3. Check for IR Signal Interference
If the Onn Roku remote is not working even with new batteries, check if an obstacle is blocking the IR signal. Infrared signals travel in a straight path until they reach their receiver. If an obstacle is on the way, it can weaken the IR signals, preventing your remote control from sending commands to your device.
Remove obstacles between your remote and TV to create a clear line of sight. Move closer to the TV to rule out issues with the remote.
### Method 4. Check Remote Pairing
A faulty connection may cause your Onn Roku remote to stop working. You can try re-pairing to increase the remote functionality.
There are two ways to re-pair the remote: the `Back` and `Home` buttons or the pairing button.
Method 1. Re-Pair Onn Roku Remote Using the Back & Home Buttons
* Step 1. Remove the batteries from the remote to reset it.
* Step 2. Re-insert the batteries.
* Step 3. Hold the `Back` and `Home` buttons simultaneously.
* Step 4. Wait for the green light to blink & activate Bluetooth pairing.
Once the indicator light flashes green, the remote automatically pairs with your Roku TV.
Method 2. Re-Pair Roku Remote Using the Pairing Button
* Step 1. Remove the batteries to reset the remote.
* Step 2. Look for the `Pairing` button.
* Step 3. Press & hold the pairing button until the indicator light flashes.
* Step 4. Wait for the TV to recognize the remote.
* Step 5. Release the button.
The remote should start working if the pairing is an issue.
Read more about how to set up your Roku Voice remote.
### Method 5. Reset Your Onn Roku TV & Remote
Resetting your Onn Roku TV and remote helps clear any connectivity or responsive problems that may hinder its performance. This applies to most Roku TVs, including when a TCL Roku remote is not working.
These are the steps to reset a Roku remote:
* Step 1. Unplug the Roku device from the power source.
* Step 2. Wait for about 30 seconds for your Roku device to reset.
* Step 3. Plug it back into the power source.
* Step 4. Remove the batteries from the remote to reset it.
* Step 5. Press the pairing button until the indicator light blinks green.
* Step 6. Wait for the green light to start flashing.
The remote should start working after the light stops flashing. Learn more on Roku Support.
### Method 6. Reset Onn Roku TV
Sometimes, your Onn Roku doesn’t work, not because of the remote control but because of the TV. In this case, resetting your Onn Roku TV might help.
Warning: A factory reset will return the TV to its default state.
Here are the steps to power cycle Onn Roku TV without a remote:
* Step 1. Turn on the TV using the power button at the bottom panel.
* Step 2. Locate the reset button at the back of the TV.
* Step 3. Press and hold the reset button for 20 seconds using a pointy object.
* Step 4. Release the button.
### Method 7. Get an HDMI Extender for a Roku Stick
HDMI interference can cause problems with your remote. Your TV may be blocking the Roku device’s infrared sensor. An extender solves this problem by allowing you to place the Roku device somewhere unobstructed.
Roku provides customers with a free HDMI extender. But if you prefer buying a different one, many options are available online. (For example, this one from Amazon)
Here are the steps to install an HDMI extender:
* Step 1. Insert the extender into an HDMI port.
* Step 2. Put your Roku stick at the end of the extender.
* Step 3. Power your receiver booster using the USB router or directly from an outlet.
* Step 4. Plug in one end of the booster into your Roku stick.
## How to Control Your Onn Roku Without Remote
You can control your Onn Roku TV without a remote using the Roku app on your phone or tablet. Your phone and the TV should be connected to the same WiFi network.
Here is how to use the Roku app as a remote control:
### Step 1: Download the Roku App
### Step 2: Open & Sign Into the Roku App
### Step 3: Tap the “Remote” Option at the Bottom of the App
### Step 4: Tap “Devices” to Connect a Device
### Step 5: Click “Connect Now”
### Step 6: Tap on Your Device on the List of Options
### Step 7: Select “Remote” to Open the Control Panel
You can now start controlling your TV using the Roku app.
Read more about how to connect the Roku mobile app to your Roku device over WiFi on the Roku Support page.
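Under the hood, the Roku app locates devices on your LAN via SSDP discovery for the `roku:ecp` service. The sketch below illustrates that discovery mechanism; the `discover()` helper is an illustration for curious readers (it must run on the same WiFi network as the TV), not anything the app itself exposes:

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)  # standard SSDP multicast address

def build_msearch() -> bytes:
    """Compose the SSDP M-SEARCH request that Roku devices answer to."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}\r\n"
        'MAN: "ssdp:discover"\r\n'
        "ST: roku:ecp\r\n"
        "MX: 3\r\n"
        "\r\n"
    ).encode()

def discover(timeout: float = 3.0) -> list[str]:
    """Broadcast the request and collect the IPs of responding Rokus."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    found: list[str] = []
    try:
        sock.sendto(build_msearch(), SSDP_ADDR)
        while True:
            try:
                _, addr = sock.recvfrom(1024)
                found.append(addr[0])
            except socket.timeout:
                break
    finally:
        sock.close()
    return found
```

This is also why the app requires your phone and TV to share a WiFi network: the discovery broadcast does not cross networks.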
## Can You Use Any Remote for Your Onn Roku TV?
Yes, you can use a universal remote on Roku TV. However, only a few remotes are compatible, and support for each button varies. To use another remote on your Roku TV, locate the Roku code in the universal remote's user manual. You can also find the remote code on the official Roku page.
You can find a compatible universal remote on Amazon or at a local tech store near you. However, purchasing another Onn Roku TV remote is the best permanent solution.
## Fix Your Onn Roku TV Remote if It’s Not Working
When your Onn Roku TV remote is not working, it could be because of dirt build-up, exhausted batteries, or signal interference. Replace your batteries, and remove any obstacles that may interfere with the IR or WiFi signal. If possible, open and manually clean the remote.
Remember, using the Roku app from your mobile phone is always an option. Or you can choose to purchase a new remote.
# TCL Roku TV – Lost Remote and Have No WIFI?[✓Solved]
With a smart TV, you can play games, catch up on sports, and watch your favourite shows using streaming services. However, to get the most out of Roku TV, you’ll need an internet connection and the TV’s remote to connect to it. But what if the remote is missing?
Luckily, it is possible to control some smart TVs like TCL Roku TVs without a remote if WiFi is available. But what happens if your TCL Roku TV remote is lost, and there is no WiFi?
Check out TCL Roku TV Won’t Connect to WiFi if you have WIFI but for some reason it doesn’t connect to your Roku TV.
Keep reading to discover techniques for using Roku TV without a remote and WiFi. In this article, we’ll primarily focus on TCL Roku TV. Let’s get right into it!
## Can You Operate TCL Roku TV Without Remote?
Yes, you can operate a TCL Roku TV without a remote using the Roku Mobile App or the TV’s buttons. Gaming consoles like PlayStation 4 and Nintendo Switch can also operate a TCL Roku TV. Android users can use an infrared sensor app to control the TV.
Here’s an in-depth look at how to operate a TCL Roku TV without a remote:
* Using Manual Buttons. The physical buttons on a Roku TV can perform essential functions like scrolling through the settings menu. For instance, the `+` and `-` buttons raise or lower the volume.
* PlayStation 4 or 5. A PS4 or PS5 can control a TCL Roku TV through HDMI. Once connected and enabled, turning the PS4/PS5 on automatically powers the TV.
* Nintendo Switch. This method requires a Nintendo Dock connected to the TCL Roku TV's display port. With power-state matching enabled, powering on the Nintendo Switch will turn on the TV.
* Infrared Sensor. Some Android devices have a built-in infrared (IR) blaster, which lets you use an app to control your TCL Roku TV. Many apps are available on the Play Store, such as IR Remote Control, irplus, and Unified Remote. These apps use your phone's IR blaster to send the appropriate IR codes to your TV.
## How to Connect TCL Roku TV to WI-FI Without a Remote
A TCL TV with a Roku OS heavily relies on the internet to stream online content. Let’s look at how to connect Roku to WiFi without a remote.
### 1. Operate Using Manual Buttons
You can use the manual buttons to scroll through different settings and connect to WiFi. However, this can be a very tedious process.
You only need to press the `Middle` button once to perform an action. In some TCL models, there is only one button.
Here are the steps to use buttons in establishing a WiFi connection:
Note! This method only works on some Roku TCL TV models.
* Step 1. Press the `Middle` button to turn the TV on.
* Step 2. Access "Settings > Network & Internet > WiFi" using the `Middle` button.
* Step 3. Add your WiFi credentials to connect to the internet.
Discover the Top 10 Frequently Asked Questions about TCL Roku TV.
### 2. Connect Through a Mobile Hotspot
You can rely on a mobile hotspot when your TCL Roku TV remote is lost and there is no WiFi. However, you might incur charges if you don't have an active mobile data plan.
Here are steps on how to connect Roku to WiFi without a remote through mobile data:
* Step 1. Go to your phone’s “Settings > Portable Hotspot > Setup Portable Hotspot”.
* Step 2. Note & save the SSID and password.
* Step 3. Turn on “Portable Hotspot”.
* Step 4. Download & install the Roku Mobile App on your phone.
* Step 5. Open the TV's "Settings > Network & Internet" using the Roku app.
* Step 6. Turn on “WiFi”.
* Step 7. Select your mobile hotspot from “Available networks”.
### 3. Get an Ethernet Cable + Roku App
Ethernet cables enable faster data transfer compared to wireless connections. You need an ethernet cable, a USB ethernet adapter, and the Roku app on your phone to connect your TV to WiFi without a remote control.
Below are steps to connect TCL Roku TV to the internet using a cable:
* Step 1. Connect the ethernet cable to your router & TCL Roku TV.
You need to use an HDMI to ethernet port adapter if your TV does not have a dedicated ethernet port. (You can get one from Amazon)
* Step 2. Using the Roku app, open "Settings" on the TV.
* Step 3. Select “System > USB Media > Auto-Launch”.
* Step 4. Go to “Advanced system settings > Network Connection Reset”.
* Step 5. Select “Reset connection”.
The TV will automatically restart and connect to the ethernet cable (wired connection).
### 4. Change WiFi SSID and Password
Sometimes, your TCL Roku TV might fail to connect to a new network while still remembering its previous connection. In this case, change the SSID and password of the current network to match the old one.
Log into your WiFi admin panel and change the name and password under the “Wireless option settings”. The TV should automatically connect to the wireless connection.
In case of any problem, contact your WiFi service provider.
Learn more about How to Find Your Wireless Network Name and Password on TCL TV.
### 5. Use a Universal IR Remote
You can use a universal IR remote if you’ve lost the TCL Roku TV remote and there is no WiFi. However, you need to use specific TCL codes for each remote.
Here are steps on how to use a Universal IR remote:
* Step 1. Press the remote’s `Setup` button until the `Power` button glows red.
* Step 2. Enter the respective TV code.
If the code you tried does not turn your TCL Roku TV on, try another code until you find one that works. Check out the TCL Roku TV Remote Codes.
* Step 3. Wait for the TV to turn on.
* Step 4. Open the Roku TV’s “Settings”.
* Step 5. Go to “Network & Internet”.
* Step 6. Look for the available network.
* Step 7. Enter the password and connect.
### 6. Purchase a New TCL Roku TV Remote
Purchasing another TV remote is a permanent solution when your TCL Roku TV remote is lost and WiFi is not connected to your TV.
You can order a replacement online or from the nearest store. It’s best to research the best stress-free options before purchasing.
Learn more about the Different Types of Remotes Roku Devices Use.
## Why Is Your TCL Roku TV Remote Not Working?

Your TCL Roku TV remote might not work if it's out of batteries or if obstacles between the remote and the TV's IR sensor block the signal. For a voice remote, which uses WiFi instead of IR, pairing issues can also cause it to malfunction.
Here is an in-depth look at possible solutions to fix a non-functioning TCL Roku TV remote:
* Depleted Batteries. Remove the batteries and re-insert them to check whether they need replacing. If the remote still doesn't work, it's time to change them.
* Obstacles Between IR Sensor and TV. Ensure there aren't any objects blocking the remote or the TV's light sensor. If possible, move closer to the TV and aim the remote directly at the sensor panel.
* Weak Connection. For voice remotes, find the pairing button in the battery compartment and re-pair the remote. If the pairing light fails to flash, replace the batteries. Discover how to fix problems with Roku responding slowly to the remote.
## Conclusion
Don’t fret if you’ve lost your TCL Roku TV remote and there’s no WiFi connection. One of the ways to go around this is by using the buttons at the bottom of your TV. However, not all TCL Roku TVs have buttons.
In this case, installing the Roku Mobile App on your smartphone is your best bet. With the app, you can use your phone as a hotspot or directly connect an ethernet cable to the TV. For gamers, your console can come in handy.
Either way, the long-term solution might be buying another remote.
# 4 Digit Code for Samsung TV [How to Find It]
Have you recently purchased a universal remote but can't figure out how to use it on your Samsung TV? Or do you want to know what the 4-digit code on your Samsung TV is?
Televisions and remotes come with various codes you can’t possibly decipher on normal occasions. However, there are certain times you’ll need them, like when you’re setting up your TV with a new remote.
To help you understand better, in this article, we explain all about the 4-digit pairing code for Samsung TVs and how to set up a universal remote for a Samsung TV.
## What Is the Purpose of the 4-Digit Code for Samsung TV?
The purpose of the 4-digit code for Samsung TV is to enable your TV to connect with a universal remote. You’ll need a universal remote if you lost your original Samsung remote or it’s already broken. The 4-digit code won’t have any purpose for you if you don’t use a universal remote.
If you bought a universal remote to make your life easier when using your TV, you will need this 4-digit code to pair it. The universal remote will enable you to control basic functions on your TV, such as switching programs or adjusting the volume.
Universal remotes come with different 4-digit codes. There isn’t just one single code that allows you to connect any universal remote with your Samsung TV. If you don’t know where to find it, check out the succeeding sections.
## How to Find the 4-Digit Pairing Code for Samsung TV?
There are several methods to find the 4-digit pairing code for your Samsung TV. Instructions vary depending on the brand of your remote. The easiest way is to look for it in the remote’s manual, but let’s explore all the possibilities you have to find this 4-digit pairing code.
### 1. Automatic Code Search Function
Each universal remote control has its unique automatic code search function. To show you how it works, we’ll use the GE Universal Remote brand, one of the well-known brands in the market.
Here’s how to find the 4-digit pairing code for Samsung TV through an automatic code search:
* Step 1. Turn on the GE remote and your Samsung TV.
* Step 2. Point the GE remote to the Samsung TV.
* Step 3. Press and hold the “Setup” button on your remote.
Release the button when the light indicator on the remote turns red.
* Step 4. Press and release the “TV” button on your GE remote.
If successful, the red light on the GE remote will blink once and remain on.
* Step 5. Press and release the “Power” button on your remote.
This will enable the automatic search code. The red light indicator should blink on and off until it finds the code. At that moment, it will stop flashing and remain on.
* Step 6. Wait for your Samsung TV to turn off
If this happens, it means your GE remote found a code compatible with your Samsung TV. If your Samsung TV doesn't shut down, repeat the previous step.
* Step 7. Turn on your Samsung TV manually.
* Step 8. Press and release the “Volume +” button pointing to the Samsung TV.
The red indicator light should blink once and remain on. Your Samsung TV will turn off if a compatible code is found. If not, continue pressing and releasing the “Volume +” button until your Samsung TV turns off.
* Step 9. Press and release the “TV” button on the remote to store the code.
Note: If the above steps don't work, even after repeating them multiple times, you can contact GE support.
### 2. Check Remote’s Manual
Remote controls come with a manual to use when setting it up. These manuals have a 4-digit code to connect with your Samsung TV.
The list of codes can be too long but is usually alphabetized. All you have to do is search for the word Samsung, and then you can try setting up your remote using the code/s listed.
### 3. Search the Code on the Internet
If you lost or already threw away the manual that came with your remote, it’s alright: the Internet has got you covered. Some universal remote brands provide their manuals online. You can easily check the 4-digit code to connect your remote to your Samsung TV successfully.
Samsung TV remote codes are also listed in some online articles, online databases, and forums. All you have to do is find a reliable source and use the 4-digit code on your Samsung TV.
You can also find remote code finders on the internet. To use them, you just need the "revision number" of your remote, which is on a sticker inside the battery compartment. Then select your remote brand and the type of device you are trying to connect to (a TV, in this case).
If none of the above methods work, the last thing you can do is contact the manufacturer or customer support. Again, this will depend on the brand of the remote you purchased.
## List of Common Samsung TV Remote Codes
There are hundreds of remote codes available to enable connection with your Samsung TV. The 4-digit code depends on the brand.
To give you an idea, this section provides a list of universal remote codes for Samsung TVs. It’s also categorized per brand for your convenience.
Here’s a list of Samsung TV remote codes:
| Remote Brand | 4-Digit Codes for Samsung TV |
| --- | --- |
| GE Universal Remote | 3301, 3471, 3561, 0251, 5471, 5521, 5791, 5801, 2071, 2141, 2721, 2741, 2961, 0001, 0101, 0261, 0331, 0351, 0531, 0571, 0711, 0781, 1191, 3321, 1221, 1311, 1501, 1911, 4961, 4011, 4941 |
| ADB STB Remote | 0019, 0009, 0702, 00150, 0092, 0072, 0179, 0208, 0519, 0156, 0163, 0625, 0226, 0812, 0817, 0821, 1060, 0618, 0644, 0056, 0060, 0587, 0037, 0178, 0030, 0556, 774, 0093, 0217, 0448, 0329, 0090, 0032, 0216, 0290, 0154, 0747, 0482, 0370, 0264 |
| Amino Remote | 2448 |
| RCA Universal Remote | 1004, 1046, 1056, 1065, 1069, 1078, 1083, 1207, 1009, 1012, 1013, 1014, 1015, 1025, 1102, 1103, 1104, 1123, 1124, 1194, 1205 |
| Philips Universal Remote | 0609, 0102, 5801, 5791, 3301, 0110, 0437, 0309, 0209, 0512, 0302, 0502, 0112, 0818, 0895, 0002, 0802, 0103, 0012, 0212 |
Note: There are tons of universal remotes available in the market, and they come with unique 4-digit codes. This is just a brief selection.
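If you plan to cycle through candidate codes, the table above can double as a small lookup. A sketch that simply mirrors a subset of the codes listed (the GE entries follow the most-common-first ordering mentioned in the setup steps); nothing here talks to a TV:

```python
# A lookup mirroring part of the code table above, keyed by remote brand.
SAMSUNG_TV_CODES = {
    "GE Universal Remote": ["5791", "5801", "3301", "3471", "3561"],
    "Amino Remote": ["2448"],
    "RCA Universal Remote": ["1004", "1046", "1056", "1065"],
}

def candidate_codes(brand: str) -> list[str]:
    """Return the 4-digit codes to try for a brand, most common first."""
    return SAMSUNG_TV_CODES.get(brand, [])

# Try each code in order until the TV responds to the remote.
for code in candidate_codes("GE Universal Remote"):
    print(code)
```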
## How to Set Up Your Universal Remote on Samsung TV?
Each universal remote has its unique method of setting up with a Samsung TV, and it’s impossible to cover all of them here. To show you how it works, we’ll use the GE Universal Remote brand as an example.
Here are the steps to set up your GE universal remote on Samsung TV:
### Step 1: Find Your Code List
You can find this on the manual your remote control came with. If you lost it, you can check the PDF file that lists out all the codes online or check the manufacturer’s website.
The 4-digit codes listed for Samsung TV are 5791, 5801, and 3301. These are organized in order of the most common codes.
Note: GE Universal Remote 4-digit Code Finder.
### Step 2: Hold the “Setup” Button
Do this until the power button turns red.
### Step 3: Press “TV”
### Step 4: Enter Your 4-digit Code
Enter the first 4-digit code for Samsung TV: 5791. If you’ve successfully synced the remote to your Samsung TV, the red light will turn off.
### Step 5: Test the Remote
Check if all the buttons work properly so you won’t encounter issues later on.
If it doesn’t work, you can repeat steps 2 to 4 and use the other 4-digit codes for Samsung TV.
Here’s how you can Control Your Samsung TV with Third-Party Remotes.
## Why Won’t My Universal Remote Connect to my Samsung TV?
Your universal remote won’t connect to your Samsung TV for the following reasons:
* Battery Issues. You might have inserted the batteries incorrectly. Ensure the batteries face their respective polarities (positive and negative). Or, if you're using old batteries, buy new ones.
* Infrared Sensor Issues. Remove any obstructions between your Samsung TV and the universal remote, such as electronics and radios, that may disrupt the signal. If the infrared sensors are blocked, your universal remote won't connect to your Samsung TV.
* Pairing Issues. Ensure that you've properly paired your universal remote to your Samsung TV. It won't work if your remote isn't compatible with your TV.
You can try troubleshooting the issues on your universal remote if the solution is simple, like changing your batteries. Other times though, your remote might be defective. If this is the case, you can check with the manufacturer about the steps you should take to replace it.
## How Do You Get Your Samsung TV to Recognize Your Remote?
To get your Samsung TV to recognize your remote, try these solutions:
* Check Network Connection. Sometimes, your remote and Samsung TV require an internet connection to work. Check that your TV is connected to WiFi so it can recognize the remote.
* Restart the Remote and Samsung TV. Restarting your devices might fix the issue. Turn off the remote and Samsung TV using their respective power buttons, wait a few moments, then turn them back on.
* Inspect the Remote for Physical Damage. You might not have noticed that your remote is broken. If so, your Samsung TV will no longer recognize it. You can replace it with a universal remote compatible with your Samsung TV.
* Pair the Devices Once More. Press and hold the `Return` and `Play/Pause` buttons of your Samsung TV remote simultaneously for at least five seconds, or until the pairing confirmation appears on the screen. This re-pairs the two devices. Learn what to do when the Samsung TV remote control is not working.
## Use the Remote’s 4-digit Code to Pair With Your Samsung TV
If you’ve successfully paired your remote to the Samsung TV using the 4-digit pairing code, congratulations, you’ve just made your life easier! Now, if you ever need a new remote control, you can easily use the 4-digit code to pair it seamlessly.
Universal remotes can also be used for other TV brands, so if you ever make a switch, you can keep the remote control.
# How to Charge a Chromebook Without Its Charger

Have you ever been in a situation where you forgot your Chromebook charger but desperately needed to charge it? You're not alone. If you are wondering how to charge your Chromebook without its charger, this article is for you.
We’ll discuss if you can charge a Chromebook with a phone charger or with other unconventional methods. This way, you won’t have to stress over not having enough power on your Chromebook when you don’t have the charger.
## Can You Charge a Chromebook Without Its Charger?
Yes, you can charge a Chromebook without its charger using a power bank, a smartphone, or a car charger. However, these methods may not work exactly like your original charger, and some might be slower than others. We’ll get into the details later.
## Can You Charge a Chromebook With a Phone Charger?
Yes, you can charge a Chromebook with a phone charger. You’ll need a USB Type-C charger for it to work. You might also have to charge it overnight for a decent battery percentage. This is because phone chargers usually have lower watts compared to Chromebook chargers.
If you want your Chromebook to charge decently, purchase a phone charger with higher watts. Typically, Chromebooks have a battery life of 9 hours and 58 minutes and require a 45W charger; some specific models can even use 65W chargers.
You can find a wide variety of USB Type-C chargers at Amazon.
## How to Charge Chromebook Without Its Charger
A few methods are available to charge your Chromebook without its charger. You can try these if you forget or misplace your computer’s charger.
Here are some ways to charge a Chromebook without its charger.
### 1. Using Your Phone
A USB-C port is a feature of modern cell phones; some even have larger batteries than Chromebooks. You can use your phone to charge your Chromebook if it has adequate battery life. A USB-C to USB-C cable that works with both devices is required. Your phone should also support two-way charging.
Since Chromebooks come from various manufacturers, we’ll use a Chromebook Pixel (2015) as an example to show you how it works.
These are the steps to charge your Chromebook using your phone:
* Step 1. Connect your Chromebook to your phone.
  Your phone should have a USB Type-C port for this to work.
* Step 2. Go to “Time > Settings > Device > Power”.
  You’ll find the time at the bottom right of the screen.
* Step 3. Select the port connected to your phone.
Once done, your phone will start charging your Chromebook slowly.
### 2. Using a Power Bank
You can also charge your Chromebook using a power bank. Before trying this method, ensure your power bank has an AC outlet and that it has at least a 10,000mAh battery. This will help you provide enough charge when you charge your Chromebook.
This power bank from Anker with a 25,600mAh capacity would be perfect.
All you have to do is connect your power bank’s AC outlet to your Chromebook and then turn on the power bank. If you like, you can also monitor the progress of charging occasionally. Charging through this method might not be as fast as you want.
### 3. Using Another Charger
You can use other chargers, such as a phone, universal, or car charger, to charge your Chromebook without its charger.
Here’s how you can charge a Chromebook by using another charger:
* Phone Charger.
  Your phone charger must be a USB-C cable to charge a Chromebook. Plug it in and let it charge. (This charger on Amazon should do it.)
* Universal Charger.
  Universal chargers work with a variety of gadgets, including Chromebooks. Plug your Chromebook and the universal charger into an electrical outlet to start charging your Chromebook.
* Car Charger.
  Most automobile chargers have a USB port, and you may charge your Chromebook with a USB-C to USB-C or USB-C to USB-A connection. You can find them on Amazon at very reasonable prices.
Since USB Type-C is a universal standard and Chromebook has it, it’s relatively easy to find ways to charge your Chromebook without the charger. You just need the right cables and power sources to make it work.
## Can You Charge a Chromebook With an HDMI Cable?
Under normal circumstances, you can’t charge a Chromebook with an HDMI cable. You can only use an HDMI cable to charge if your Chromebook model supports MHL technology. However, only old Chromebook models from the South Korean market support MHL, and it’s almost a dead standard, superseded by USB Type-C.
A Chromebook doesn’t usually have a circuit that sends power from the HDMI connector to the battery, as the most common port is USB Type-C.
The HDMI port in Chromebook devices is just used to connect it to an external monitor.
The HDMI port in Chromebooks is an output port, so you won’t be able to use it to turn your Chromebook into a monitor.
## What Kind of Charger Do Chromebooks Use?
There are different types of chargers for Chromebooks. It will depend on the Chromebook model you bought. Several manufacturers produce Chromebooks, so you can just choose based on that.
Here are some kinds of chargers that Chromebooks use:
| Brand | Charger |
| --- | --- |
| HP Chromebook | HP USB-C 65W Laptop Charger |
| Dell Chromebook | Dell 65W AC Adapter |
| ASUS Chromebooks | ASUS 45W N45W-C1 Adapter |
| Galaxy Chromebook 2 | Superer 65W AC Charger |
You can also buy power banks to charge your Chromebook. Just check if the mAh (milliampere-hours) is compatible with your Chromebook before you buy it.
Find out what chargers and accessories work with Chromebook.
## How to Force Charge Your Chromebook
If your Chromebook won’t charge and you need to fix it, you must first ensure that the charger is plugged in correctly and that the outlet is operational before attempting to force-charge it. If both are okay and it still won’t charge, you might need to try different approaches.
Here’s how you can force charge your Chromebook:
### Method 1: Restart Your Chromebook

To restart your Chromebook, power it off through the “power” icon. Then, wait for a few moments before you turn it back on. Once done, try charging your Chromebook again.
In some cases, all you need to do is restart your device to force-charge it. Some functions also restart when you do this.
### Method 2: Perform a Hard Reset
Here are the steps to perform a hard reset on your Chromebook:
* Step 1. Hold the “Power button + Refresh” keys for 10 seconds.
* Step 2. Try charging your Chromebook again.
* Step 3. Check if the charging indicator light turns on.
If it doesn’t work, repeat steps 1 and 2.
* Step 4. Unplug the charger from the Chromebook and power outlet.
  This step is optional. You should only do this if the Chromebook still won’t charge after the second hard reset.
* Step 5. Plug the charger back in.
  Plug it into the power outlet first, then into the Chromebook.
* Step 6. Charge your Chromebook for one hour.
  Do this if the above steps are successful.
### Method 3: Check the Battery
If none of the aforementioned approaches work, Chrome Diagnostics can be used to determine whether the battery needs to be changed.
Here are the steps to check the battery by running Chrome Diagnostics:
* Step 1. Select the time at the bottom-right of the desktop.
* Step 2. Open “Settings” by clicking the gear-shaped icon.
* Step 3. Choose “About Chrome OS > Diagnostics > System”.
* Step 4. Select “Run Discharge test” in the “Battery” section.
You’ll see if your battery has serious issues when you run the diagnostics. Force charging might be bad for your Chromebook’s battery if it is damaged.
If none of these solutions worked, you should contact the manufacturer to have it fixed.
Check out other Troubleshooting Tips for When You’re Having Charging Issues with Your Chromebook.
## Charge Your Chromebook Using Other Methods
Chromebooks are versatile when it comes to charging, which is why you can charge them without their charger. They’ve been around for over ten years, and many schools still use them.
Google recently announced plans to extend the lifespan of Chromebooks for up to 10 years – a valuable treat for students worldwide.
Since you now know how to charge your Chromebook using other methods, you will no longer be frustrated. Just grab your phone or car charger and wait for your Chromebook to charge.
### Mediavine Programmatic Advertising (Ver 1.1)
The Website works with Mediavine to manage third-party interest-based advertising appearing on the Website. Mediavine serves content and advertisements when you visit the Website, which may use first and third-party cookies. A cookie is a small text file which is sent to your computer or mobile device (referred to in this policy as a “device”) by the web server so that a website can remember some information about your browsing activity on the Website.
First party cookies are created by the website that you are visiting. A third-party cookie is frequently used in behavioral advertising and analytics and is created by a domain other than the website you are visiting. Third-party cookies, tags, pixels, beacons and other similar technologies (collectively, “Tags”) may be placed on the Website to monitor interaction with advertising content and to target and optimize advertising. Each internet browser has functionality so that you can block both first and third-party cookies and clear your browser’s cache. The "help" feature of the menu bar on most browsers will tell you how to stop accepting new cookies, how to receive notification of new cookies, how to disable existing cookies and how to clear your browser’s cache. For more information about cookies and how to disable them, you can consult the information at All About Cookies.
Without cookies you may not be able to take full advantage of the Website content and features. Please note that rejecting cookies does not mean that you will no longer see ads when you visit our Site. In the event you opt-out, you will still see non-personalized advertisements on the Website.
The Website collects the following data using a cookie when serving personalized ads:
* IP Address
* Operating System type
* Operating System version
* Device Type
* Language of the website
* Web browser type
* Email (in hashed form)
Mediavine Partners (companies listed below with whom Mediavine shares data) may also use this data to link to other end user information the partner has independently collected to deliver targeted advertisements. Mediavine Partners may also separately collect data about end users from other sources, such as advertising IDs or pixels, and link that data to data collected from Mediavine publishers in order to provide interest-based advertising across your online experience, including devices, browsers and apps. This data includes usage data, cookie information, device information, information about interactions between users and advertisements and websites, geolocation data, traffic data, and information about a visitor’s referral source to a particular website. Mediavine Partners may also create unique IDs to create audience segments, which are used to provide targeted advertising.
If you would like more information about this practice and to know your choices to opt-in or opt-out of this data collection, please visit National Advertising Initiative opt out page. You may also visit Digital Advertising Alliance website and Network Advertising Initiative website to learn more information about interest-based advertising. You may download the AppChoices app at Digital Advertising Alliance’s AppChoices app to opt out in connection with mobile apps, or use the platform controls on your mobile device to opt out.
For specific information about Mediavine Partners, the data each collects and their data collection and privacy policies, please visit Mediavine Partners.
Nope! You cannot. You need to purchase them in order to try them.
But remember, you have a 30-day money-back guarantee as long as you do not generate any activation key. Please read the fullPage.js Refund Terms of the license.
Once you purchase the extension you'll get a confirmation message on the screen in which you'll be able to download the file.
Additionally, on the purchase confirmation email you'll see a link to download the extension file. (For example: fullpage.scrollHorizontally.min.js)
You can get the fullpage.extensions.min.js file from the github project.
Sure you can! You'll find a link to generate the invoice in the purchase confirmation email.
More info in here.
If you have any question or problem regarding invoices and billing, please contact <EMAIL> to reach the company that deals with the payment.
No activation key is necessary for localhost and 127.0.0.1 domains.
Any other staging domains will require a license (Professional or Business) that allows you to generate a new key for those.
Note that you can still use the Hobby license if the different environments are subdomains of the same domain. For example: apple.com and dev.apple.com
Sure! As detailed on the extensions page, there's a 30-day money-back guarantee for any purchase under the conditions detailed in the License Agreement: read the fullPage.js Refund Terms.
Notice that you won't be eligible for a refund if you generated an activation key for your extension.
That means you are not using the right key for your extension!
Make sure to read how to activate a fullPage.js extension and how to use fullPage.js extensions. Notice you'll have to replace (not add!) the usual fullPage.js file (fullpage.js or fullpage.min.js) with the file fullpage.extensions.min.js.
Sure! It is in fact very easy!
Sure! But you'll have to purchase the Business license for that. The Hobby or Professional licenses use a different activation key per domain.
The activation key was sent to the email used for the purchase when you generated it for a domain. If it is not on the inbox, please look into the SPAM folder.
The most probable cause is that you are missing some of the required steps.
Make sure to read in the documentation how to use your extension as well as how to activate a fullPage.js extension and how to use fullPage.js extensions.
Notice you'll have to replace (not add!) the usual fullPage.js file (fullpage.js or fullpage.min.js) with the file fullpage.extensions.min.js.
I do! As long as:
Contact me for more information.
If you are having problems with fullPage.js, please make sure to first check the documentation and check the fullPage.js Help Center to look for a solution.
If you do not find a solution in there and it is a fullPage.js, multiScroll.js or pagePiling.js issue, please make sure to create an isolated reproduction of the issue in jsfiddle, codepen or your own hosting. It should not contain CSS or JS files external to my libraries (unless strictly required). It should also use empty sections if possible to keep the HTML code to a minimum.
Then, please proceed to create an issue in the github project's forum:
# PARALLAX
Use `parallax: true`. Made easy.
Extension issues are supported at no extra cost!
Put no limits on yourself
Use parallax even without scroll bar!
Totally configurable.
For sections & slides!
30-day money-back guarantee if no domain was activated.
Join thousands of other developers who trusted fullPage.js extensions!
If fullPage.js is known for one thing it's for its great documentation!
Take it to the next level!
# Scroll Horizontally
Use `scrollHorizontally: true` and enable the mouse wheel horizontal scroll for slides.
# Scroll Overflow Reset
Use `scrollOverflowReset: true` to reset the scrollbar after leaving the section or slide and always show the start of the content on section load.
# Drag And Move
Use `dragAndMove: true` or the options `fingersonly`, `mouseonly`, `horizontal`, `vertical`.
- Not compatible with `continuousVertical: true` or the extension Continuous Horizontal.
- Sections/slides with `scrollOverflow: true` or any scrollable element won't be scrolled by dragging, only by mouse wheel.
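As a minimal sketch of wiring up the option described above (assuming the standard `fullpage` constructor; `licenseKey` value is a placeholder), the options object is built separately so its values are easy to inspect:

```javascript
// Sketch: options for the Drag And Move extension (placeholder license key).
const dragOptions = {
  licenseKey: 'YOUR_KEY', // placeholder, not a real key
  dragAndMove: true,      // or 'fingersonly', 'mouseonly', 'horizontal', 'vertical'
};

// In the browser, after loading fullpage.extensions.min.js:
// new fullpage('#fullpage', dragOptions);
```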
This section is using `data-percentage="80"`.
If you don't want to center it vertically, you could use `data-centered="false"`. See section 4.
This section is using `data-percentage="80"` and `data-centered="false"`. It won't be vertically centered in the viewport and will show part of the previous or next section depending on the direction of the previous scroll.
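A sketch of how the Offset Sections setup described above could look (the `offsetSections` option name appears in the full options example later in these docs; the license key is a placeholder):

```javascript
// Sketch: enabling the Offset Sections extension.
const offsetOptions = {
  licenseKey: 'YOUR_KEY', // placeholder, not a real key
  offsetSections: true,   // percentage-based section heights
};

// Per-section sizing is set in the markup, e.g.:
// <div class="section" data-percentage="80" data-centered="false">...</div>
// new fullpage('#fullpage', offsetOptions);
```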
# Fading Effect
It can be turned on for sections and slides or just for one of them.
Use: fadingEffect:true, fadingEffect:'slides' or fadingEffect:'sections'
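A minimal sketch of the three accepted values listed above (license key is a placeholder):

```javascript
// Sketch: Fading Effect options, per the values named in the text above.
const fadingOptions = {
  licenseKey: 'YOUR_KEY', // placeholder, not a real key
  fadingEffect: 'slides', // true (both), 'sections', or 'slides'
};

// new fullpage('#fullpage', fadingOptions);
```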
# Responsive Slides
This example will turn horizontal slides into vertical sections under 900px width as defined in the option `responsiveWidth`
You can trigger the change yourself by calling the functions `toSections` and `toSlides`
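A sketch of the configuration the demo above describes, using the 900px breakpoint it mentions (license key is a placeholder; the exact namespace of the manual-switch methods may vary):

```javascript
// Sketch: Responsive Slides — slides become sections below 900px.
const responsiveOptions = {
  licenseKey: 'YOUR_KEY', // placeholder, not a real key
  responsiveSlides: true,
  responsiveWidth: 900,   // breakpoint used in the demo above
};

// new fullpage('#fullpage', responsiveOptions);
// Manual switching (method names from the text; namespace assumed):
// fullpage_api.responsiveSlides.toSections();
// fullpage_api.responsiveSlides.toSlides();
```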
# Continuous Horizontal
Use the option `continuousHorizontal: true`.
# Interlocked Slides
Using the option `interlockedSlides: [1,3]`.
Want to apply it to all sections? Use `interlockedSlides: true`.
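A sketch of the two forms named above, interlocking only sections 1 and 3 (license key is a placeholder):

```javascript
// Sketch: Interlocked Slides for specific sections only.
const interlockedOptions = {
  licenseKey: 'YOUR_KEY',    // placeholder, not a real key
  interlockedSlides: [1, 3], // or true to apply it to all sections
};

// new fullpage('#fullpage', interlockedOptions);
```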
# Reset Sliders
Use `resetSliders: true`.
This Software License Agreement (the “Agreement”) is between Alvaro Trigo (“Alvaro Trigo”) and You (including your agents and affiliates), a For Up To 1 Website licensee of Alvaro Trigo's software.
This license grants you the right to use fullPage.js Extensions for up to 1 Website. It is not exclusive — other companies and developers can use fullPage.js Extensions — and is non-transferable — you cannot transfer this license to another company or person.
Subject to the terms of this Agreement, Alvaro Trigo grants to You a revocable, non-exclusive, non-transferable license: (i) for up to 1 Website to use the Software to create Modifications and Applications; (ii) for You to distribute the Software and/or Modifications to an unlimited number of End Users solely as integrated into the Applications within 1 single Website; and (iii) for End Users to use the Software as incorporated into Your Applications in accordance with the terms of this Agreement.
This license grants you or multiple other Developers the right to use fullpage.js Extensions for an unlimited number of Applications within 1 single Website. It is not exclusive — other companies and developers can use fullpage.js Extensions — and is non-transferable — you cannot transfer this license to another company.
This license is for use of fullpage.js Extensions, not ownership of its intellectual property. Alvaro Trigo continues to own fullpage.js Extensions.
You cannot distribute the Software within a free Application.
You cannot create something very similar to fullpage.js Extensions, like a super-fullpage.js Extensions. Make something unique.
You may not distribute the Software or Modifications except as included within Your Application.
This Agreement applies for as long as you use fullpage.js Extensions.
Alvaro Trigo shall have the right to terminate this Agreement and the license granted hereunder immediately if You breach any of the material terms of this Agreement, and You fail to cure such material breach within thirty (30) days of receipt of notice from Alvaro Trigo. Upon termination of this Agreement, all licenses granted to You in this Agreement shall terminate automatically and You shall immediately cease use and distribution of the Software.
We can end this Agreement if you breach any terms, and do not resolve the breach after 30 days after notice from Alvaro Trigo. After termination, you must stop using and distributing fullpage.js Extensions.
After termination, your end users may continue to use fullpage.js Extensions in products that have already been distributed to them.
If someone sues Alvaro Trigo as a result of your use of fullpage.js Extensions, you are responsible for any resulting cost to Alvaro Trigo.
All payments under this Agreement are due to Alvaro Trigo upon Your purchase of a license to the Software.
In order to ask for a refund, Licensee shall Contact Us via our website. As soon as the request is received, We will evaluate it and if you are eligible for a refund we will initiate a full refund of the purchase within 15 working days.
Once we initiate the refund you will get a confirmation email and this agreement is terminated. You shall remove, delete or otherwise destroy any material that you have received, copied or otherwise obtained.
In order to ask for a refund contact me.
During the term of this agreement, a Licensee who uses a license with technical support included has access to the Software's online support services via email, which means that the Licensee will get answers to technical questions within one (1) week. If the Licensee benefits from Premium Support, they will get answers within three (3) business days, and issues reported by them will have higher priority.
An isolated reproduction might be required upon request. That is, a reproduction of the scenario with the least amount of external code possible.
Support is provided by email. Support does not include questions related to other programming languages, frameworks or CMSs (such as Webflow, WordPress, Wix, etc.) and only covers questions regarding the Software's implementation, issues or bugs.
Marketing. You agree to Alvaro Trigo’s use of Your name, trade name, and trademark, for use in Alvaro Trigo’s marketing materials and its website, solely to identify you as a customer of Alvaro Trigo.
# Extensions
Need any extra functionality? fullPage.js offers extensions you can use to enhance its already amazing functionality!
Get notified about new extensions and relevant fullPage.js announcements.
Including future ones!
🎉 Get extensions pack!
Only if no activation key has been generated. Read the license refund terms.
A 30-day money-back guarantee applies to any purchase under the conditions described in the License Agreement: the fullPage.js Refund Terms.
All licenses include free updates for the purchased license, and buyers are notified of new versions by email.
No activation key is required for localhost and 127.0.0.1. Any other staging domains will require a license (Professional or Business) that allows you to generate a new key for those.
# Extensions
Need extra functionality? fullPage.js provides extensions you can use to improve its already incredible functionality!
Stay informed about new extensions and relevant fullPage.js announcements.
Including future ones!
🎉 Get extensions pack!
Scrolls the content of a slide or section with scrollOverflow back to its starting position when moving to another section, so it always begins at the top.
Allows navigating through sections and slides by clicking and dragging with the mouse, as well as by swiping on touch devices.
Allows using sections that are not fullscreen but percentage-based. Ideal for showing that there is more content on the page by revealing part of the adjacent sections.
Provides a fading effect for vertical sections and horizontal slides, or for either of them separately (horizontal or vertical).
Allows turning horizontal slides into vertical sections in responsive mode or by using a function.
Provides infinite horizontal movement for all slides.
Provides a way to move all slides (or those of certain sections) in a synchronized way when moving the ones in the current section.
Provides horizontal scrolling for slides using the mouse wheel or trackpad. Ideal for storytelling!
Forces fullPage.js to reset all slides when scrolling to another section, so when users go back they will always see the first slide of the carousel.
A 30-day money-back guarantee for any purchase under the conditions detailed in the License Agreement: the fullPage.js Refund Terms.
All licenses include free updates, and buyers will be notified of new versions by email.
No activation key is needed for `localhost` and `127.0.0.1`. Any other staging domain will require a license (Professional or Business) that allows generating new activation keys for them.
# Introduction
# fullPage.js 4.0.20
English | Español | Français | Pусский | 中文 | 한국어 | Português Brasileiro
Available for Vue, React and Angular.
| Created by @imac2
* Demo online | Codepen
* WordPress plugin for Gutenberg and WordPress plugin for Elementor
* WordPress theme
* fullpage.js Extensions
* Frequently Asked Questions
* [Migration from fullPage v3 to fullpage v4]
A simple and easy to use library that creates fullscreen scrolling websites (also known as single page websites or onepage sites) and adds landscape sliders inside the sections of the site.
## Introduction
Suggestions are more than welcome, not only for feature requests but also for coding style improvements. Let's make this a great library to make people's lives easier!
## Compatibility
fullPage.js is fully functional on all modern browsers and with IE 11. If you need to support IE < 11 consider using fullPage.js v3. It also provides touch support for mobile phones, tablets and touch screen computers.
Special thanks to Browserstack for supporting fullpage.js.
## License
### Commercial license
If you want to use fullPage to develop non-open-source sites, themes, projects, and applications, the Commercial license is the appropriate license. With this option, your source code is kept proprietary, which means you won't have to change your whole application source code to an open source license. [Purchase a Fullpage Commercial License]
### Open source license
If you are creating an open source application under a license compatible with the GNU GPL license v3, you may use fullPage under the terms of the GPLv3.
You will have to provide a prominent notice that fullPage.js is in use. The credit comments in the JavaScript and CSS files should be kept intact (even after combination or minification).
Read more about fullPage's license.
## Usage
As you can see in the example files, you will need to include:
* The JavaScript file `fullpage.js` (or its minified version `fullpage.min.js`)
* The CSS file `fullpage.css`

Optionally, when using `css3:false`, you can add the easings file in case you want to use other easing effects apart from the one included in the library (`easeInOutCubic`).
### Install using bower or npm
Optionally, you can install fullPage.js with bower or npm if you prefer:
Terminal: `npm install fullpage.js` (or `bower install fullpage.js`)
### Including files
```
<link rel="stylesheet" type="text/css" href="fullpage.css" />

<!-- The following line is optional. Only necessary if you use the option css3:false
and you want to use other easing effects rather than "easeInOutCubic". -->
<script src="vendors/easings.min.js"></script>

<script type="text/javascript" src="fullpage.js"></script>
```
Using Webpack, Browserify or Require.js? Check how to use fullPage.js with module loaders.
### Optional use of CDN
If you prefer to use a CDN to load the needed files, fullPage.js is in CDNJS: https://cdnjs.com/libraries/fullPage.js
### Required HTML structure

Start your HTML document with the compulsory HTML DOCTYPE declaration on the 1st line of your HTML code. You might have trouble with section heights otherwise. The examples provided use the HTML 5 doctype `<!DOCTYPE html>`. Each section will be defined with an element containing the `section` class.
The active section by default will be the first section, which is taken as the home page. Sections should be placed inside a wrapper ( `<div id="fullpage">` in this case). The wrapper can not be the `body` element.
```
<div id="fullpage">
<div class="section">Some section</div>
<div class="section">Some section</div>
<div class="section">Some section</div>
<div class="section">Some section</div>
</div>
```
If you want to define a different starting point rather than the first section or the first slide of a section, just add the class `active` to the section and slide you want to load first.
```
<div class="section active">Some section</div>
```
In order to create a landscape slider within a section, each slide will be defined by default with an element containing the `slide` class:
```
<div class="section">
<div class="slide"> Slide 1 </div>
<div class="slide"> Slide 2 </div>
<div class="slide"> Slide 3 </div>
<div class="slide"> Slide 4 </div>
</div>
```
You can see a fully working example of the HTML structure in the `simple.html` file.
### Initialization
# Initialization with Vanilla JavaScript

All you need to do is call fullPage.js before the closing `</body>` tag.
```
new fullpage('#fullpage', {
//options here
autoScrolling:true,
scrollHorizontally: true
});
```
# Initialization with jQuery
You can use fullpage.js as a jQuery plugin if you want to!
```
$(document).ready(function() {
$('#fullpage').fullpage({
//options here
autoScrolling:true,
scrollHorizontally: true
});
//methods
$.fn.fullpage.setAllowScrolling(false);
});
```
Functions and methods can still be called in the jQuery way, as in fullPage.js v2.X.
# Vanilla JS example with all options
A more complex initialization with all options set could look like this:
```
var myFullpage = new fullpage('#fullpage', {
// Navigation
menu: '#menu',
lockAnchors: false,
anchors:['firstPage', 'secondPage'],
navigation: false,
navigationPosition: 'right',
navigationTooltips: ['firstSlide', 'secondSlide'],
showActiveTooltip: false,
slidesNavigation: false,
slidesNavPosition: 'bottom',
// Scrolling
css3: true,
scrollingSpeed: 700,
autoScrolling: true,
fitToSection: true,
fitToSectionDelay: 600,
scrollBar: false,
easing: 'easeInOutCubic',
easingcss3: 'ease',
loopBottom: false,
loopTop: false,
loopHorizontal: true,
continuousVertical: false,
continuousHorizontal: false,
scrollHorizontally: false,
interlockedSlides: false,
dragAndMove: false,
offsetSections: false,
resetSliders: false,
fadingEffect: false,
normalScrollElements: '#element1, .element2',
scrollOverflow: true,
scrollOverflowMacStyle: false,
scrollOverflowReset: false,
touchSensitivity: 15,
bigSectionsDestination: null,
// Accessibility
keyboardScrolling: true,
animateAnchor: true,
recordHistory: true,
// Design
controlArrows: true,
controlArrowsHTML: [
'<div class="fp-arrow"></div>',
'<div class="fp-arrow"></div>'
],
verticalCentered: true,
sectionsColor : ['#ccc', '#fff'],
paddingTop: '3em',
paddingBottom: '10px',
fixedElements: '#header, .footer',
responsiveWidth: 0,
responsiveHeight: 0,
responsiveSlides: false,
parallax: false,
parallaxOptions: {type: 'reveal', percentage: 62, property: 'translate'},
dropEffect: false,
dropEffectOptions: { speed: 2300, color: '#F82F4D', zIndex: 9999},
waterEffect: false,
waterEffectOptions: { animateContent: true, animateOnMouseMove: true},
cards: false,
cardsOptions: {perspective: 100, fadeContent: true, fadeBackground: true},
// Custom selectors
sectionSelector: '.section',
slideSelector: '.slide',
lazyLoading: true,
observer: true,
credits: { enabled: true, label: 'Made with fullPage.js', position: 'right'},
// Events
beforeLeave: function(origin, destination, direction, trigger){},
onLeave: function(origin, destination, direction, trigger){},
afterLoad: function(origin, destination, direction, trigger){},
afterRender: function(){},
afterResize: function(width, height){},
afterReBuild: function(){},
afterResponsive: function(isResponsive){},
afterSlideLoad: function(section, origin, destination, direction, trigger){},
onSlideLeave: function(section, origin, destination, direction, trigger){},
onScrollOverflow: function(section, slide, position, direction){}
});
```
### Creating links to sections or slides
If you are using fullPage.js with anchor links for the sections (using the `anchors` option or the attribute `data-anchor` in each section), then you will be able to use anchor links also to navigate directly to a certain slide inside a section. This would be an example of a link with an anchor: https://alvarotrigo.com/fullPage/#secondPage/2 (which is the URL you will see once you access that section/slide manually). Notice the last part of the URL ends in `#secondPage/2`.
Having the following initialization:
```
new fullpage('#fullpage', {
anchors:['firstPage', 'secondPage', 'thirdPage']
});
```
The anchor at the end of the URL `#secondPage/2` defines the section and slide of destination respectively. In the previous URL, the section of destination will be the one defined with the anchor `secondPage` and the slide will be the 2nd slide, as we are using the index `2` for it (the first slide of a section has index 0, as technically it is the section). We could have used a custom anchor for the slide instead of its index if we had used the attribute `data-anchor` in the HTML markup like so:
```
<div class="section">
<div class="slide" data-anchor="slide1"> Slide 1 </div>
<div class="slide" data-anchor="slide2"> Slide 2 </div>
<div class="slide" data-anchor="slide3"> Slide 3 </div>
<div class="slide" data-anchor="slide4"> Slide 4 </div>
</div>
```
In this last case, the URL we would use would be `#secondPage/slide3` , which is the equivalent to our previous `#secondPage/2` . Note that section anchors can also be defined in the same way, by using the `data-anchor` attribute, if no `anchors` array is provided. Be careful! `data-anchor` tags can not have the same value as any ID element on the site (or NAME element for IE).
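To make the URL scheme concrete, here is a small helper — a hypothetical illustration, not part of the fullPage.js API — that splits such a hash into its section and slide parts:

```javascript
// Illustrative helper (not part of fullPage.js): splits a fullPage-style
// location hash such as "#secondPage/slide3" or "#secondPage/2" into its
// section and slide parts. The slide part is optional.
function parseFullpageHash(hash) {
  const parts = hash.replace(/^#/, '').split('/');
  return {
    section: parts[0] || null,
    slide: parts.length > 1 ? parts[1] : null
  };
}

console.log(parseFullpageHash('#secondPage/2'));      // { section: 'secondPage', slide: '2' }
console.log(parseFullpageHash('#secondPage/slide3')); // { section: 'secondPage', slide: 'slide3' }
console.log(parseFullpageHash('#firstPage'));         // { section: 'firstPage', slide: null }
```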
### Creating smaller or bigger sections
fullPage.js provides a way to remove the full height restriction from its sections and slides. It is possible to create sections whose height is smaller or bigger than the viewport. This is ideal for footers. Note that it doesn't make sense to have all of your sections use this feature: if more than one section fits in the initial load of the site, fullPage.js won't scroll at all to see the next one, as it will already be in the viewport.
To create smaller sections just use the class `fp-auto-height` in the section you want to apply it. It will then take the height defined by your section/slide content.
```
<div class="section">Whole viewport</div>
<div class="section fp-auto-height">Auto height</div>
```
# Responsive auto height sections
A responsive auto height can be applied by using the class `fp-auto-height-responsive`. This way sections will be fullscreen until the responsive mode gets fired. Then they'll take the size required by their content, which could be bigger or smaller than the viewport.
### State classes added by fullpage.js
Fullpage.js adds multiple classes in different elements to keep a record of the status of the site:
* `active` is added to the current visible section and slide.
* `active` is added to the current menu element (if using the `menu` option).
* A class of the form `fp-viewing-SECTION-SLIDE` is added to the `body` element of the site (e.g. `fp-viewing-secondPage-0`). The `SECTION` and `SLIDE` parts will be the anchors (or indexes, if no anchor is provided) of the current section and slide.
* `fp-responsive` is added to the `body` element when entering the responsive mode.
* `fp-enabled` is added to the `html` element when fullpage.js is enabled (and removed when destroyed).
* `fp-destroyed` is added to the fullpage.js container when fullPage.js is destroyed.
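The `fp-viewing-*` pattern can be sketched as a small function — a hypothetical illustration, not the library's internal code — that derives the body class from the current section and slide, preferring the anchor and falling back to the index:

```javascript
// Illustrative sketch: builds an fp-viewing-SECTION-SLIDE class the way the
// docs describe it (anchor when available, index otherwise; slide 0 default).
function viewingClass(section, slide) {
  const sectionPart = section.anchor || section.index;
  const slidePart = slide ? (slide.anchor || slide.index) : 0;
  return 'fp-viewing-' + sectionPart + '-' + slidePart;
}

console.log(viewingClass({ anchor: 'secondPage', index: 1 }, null));
// "fp-viewing-secondPage-0"
```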
### Lazy Loading
fullPage.js provides a way to lazy load images, videos and audio elements so they won't slow down the loading of your site or unnecessarily waste data transfer. When using lazy loading, all these elements will only get loaded when entering the viewport. To enable lazy loading, all you need to do is change your `src` attribute to `data-src` as shown below:
```
<img data-src="image.png">
<video>
<source data-src="video.webm" type="video/webm" />
<source data-src="video.mp4" type="video/mp4" />
</video>
```
If you already use another lazy load solution which uses `data-src` as well, you can disable the fullPage.js lazy loading by setting the option `lazyLoading: false` .
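The mechanism can be pictured as a simple attribute swap at the moment the element enters the viewport. The sketch below is only an illustration of the idea, not the library's actual code — it performs the swap on an HTML string:

```javascript
// Illustrative sketch of the lazy-loading idea: when a panel becomes
// visible, data-src attributes are promoted to real src attributes,
// which triggers the browser to fetch the media.
function activateLazyMedia(html) {
  return html.replace(/\bdata-src=/g, 'src=');
}

console.log(activateLazyMedia('<img data-src="image.png">'));
// <img src="image.png">
```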
### Auto play/pause embedded media
Note: the autoplay feature might not work on some mobile devices depending on the OS and browser (e.g. Safari on iOS versions < 10.0).
# Play on section/slide load:
Using the attribute `autoplay` for videos or audio, or the param `autoplay=1` for YouTube iframes, will result in the media element playing on page load. In order to play it on section/slide load, use the attribute `data-autoplay` instead. For example:
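The original example did not survive extraction; the following is illustrative markup for the `data-autoplay` attribute described above:

```
<!-- Plays when its section or slide is loaded, not on page load -->
<video data-autoplay>
    <source src="video.mp4" type="video/mp4" />
</video>
```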
# Pause on leave
Embedded HTML5 `<video>`/`<audio>` and YouTube iframes are automatically paused when you navigate away from a section or slide. This can be disabled by using the attribute `data-keepplaying`. For example:
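The original example did not survive extraction; the following is illustrative markup for the `data-keepplaying` attribute described above:

```
<!-- Keeps playing when navigating away from its section or slide -->
<audio data-keepplaying>
    <source src="audio.mp3" type="audio/mpeg" />
</audio>
```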
### Use extensions
fullpage.js provides a set of extensions you can use to enhance its default features. All of them are listed as fullpage.js options.
Extensions require you to use the minified file `fullpage.extensions.min.js` that is inside the `dist` folder, instead of the usual fullPage.js file (`fullpage.js` or `fullpage.min.js`).
Once you acquire the extension file, you will need to add it before fullPage. For example, to use the Continuous Horizontal extension, include the extension file and then the extensions version of the fullPage file:
```
<script type="text/javascript" src="fullpage.continuousHorizontal.min.js"></script>
<script type="text/javascript" src="fullpage/fullpage.extensions.min.js"></script>
```
An activation key and a license key will be required for each extension. See more details about it here.
Then you will be able to use and configure them as explained in options.
## Options
* `licenseKey`: (default `null`). This option is compulsory. If you use fullPage in a non open source project, you should use the license key provided on the purchase of the fullPage Commercial License. If your project is open source and compatible with the GPLv3 license, you can use the option `gplv3-license`. Please read more about licenses here and on the website. Example of usage:

  ```
  new fullpage('#fullpage', { licenseKey: 'YOUR_KEY_HERE' });
  ```
* `controlArrows`: (default `true`) Determines whether to use control arrows for the slides to move right or left.
* `controlArrowsHTML`: (default `['<div class="fp-arrow"></div>', '<div class="fp-arrow"></div>']`). Provides a way to define the HTML structure and the classes you want to apply to the control arrows for sections with horizontal slides. The array contains the structure for both arrows. The first item is the left arrow and the second, the right one.
* `verticalCentered`: (default `true`) Vertically centers the content using flexbox. You might want to wrap your content in a `div` to avoid potential issues. (Uses `flex-direction: column; display: flex; justify-content: center;`)
* `scrollingSpeed`: (default `700`) Speed in milliseconds for the scrolling transitions.
* `sectionsColor`: (default `none`) Defines the CSS `background-color` property for each section. Example:

  ```
  new fullpage('#fullpage', {
      sectionsColor: ['#f2f2f2', '#4BBFC3', '#7BAABE', 'whitesmoke', '#000'],
  });
  ```
* `anchors`: (default `[]`) Defines the anchor links (#example) to be shown on the URL for each section. Anchor values should be unique. The position of the anchors in the array defines which sections they apply to (second position for the second section, and so on). Using anchors, forward and backward navigation will also be possible through the browser. This option also allows users to bookmark a specific section or slide. Be careful! Anchors cannot have the same value as any ID element on the site (or NAME element for IE). Anchors can also be defined directly in the HTML structure by using the attribute `data-anchor` as explained here.
* `lockAnchors`: (default `false`) Determines whether anchors in the URL will have any effect at all in the library. You can still use anchors internally for your own functions and callbacks, but they won't have any effect on the scrolling of the site. Useful if you want to combine fullPage.js with other plugins using anchors in the URL.
* `easing`: (default `easeInOutCubic`) Defines the transition effect to use for the vertical and horizontal scrolling. It requires the file `vendors/easings.min.js` or jQuery UI for using some of its transitions. Other libraries could be used instead.
* `easingcss3`: (default `ease`) Defines the transition effect to use when using `css3: true`. You can use the pre-defined ones (such as `linear`, `ease-out`...) or create your own using the `cubic-bezier` function. You might want to use Matthew Lein's CSS Easing Animation Tool for it.
* `loopTop`: (default `false`) Defines whether scrolling up in the first section should scroll to the last one or not.
* `loopBottom`: (default `false`) Defines whether scrolling down in the last section should scroll to the first one or not.
* `loopHorizontal`: (default `true`) Defines whether horizontal sliders will loop after reaching the last or previous slide or not.
* `css3`: (default `true`). Defines whether to use JavaScript or CSS3 transforms to scroll within sections and slides. Useful to speed up the movement in tablet and mobile devices with browsers supporting CSS3. If this option is set to `true` and the browser doesn't support CSS3, a fallback will be used instead.
* `autoScrolling`: (default `true`) Defines whether to use the "automatic" scrolling or the "normal" one. It also affects the way the sections fit in the browser/device window in tablets and mobile phones.
* `fitToSection`: (default `true`) Determines whether or not to fit sections to the viewport. When set to `true` the current active section will always fill the whole viewport. Otherwise the user will be free to stop in the middle of a section.
* `fitToSectionDelay`: (default `1000`). If `fitToSection` is set to `true`, this delays the fitting by the configured milliseconds.
* `scrollBar`: (default `false`) Determines whether to use a scroll bar for the vertical sections of the site or not. When using a scroll bar, the `autoScrolling` functionality will still work as expected. The user will also be free to scroll the site with the scroll bar, and fullPage.js will fit the section in the screen when scrolling finishes.
* `paddingTop`: (default `0`) Defines the top padding for each section with a numerical value and its unit (`paddingTop: '10px'`, `paddingTop: '10em'`...). Useful when using a fixed header.
* `paddingBottom`: (default `0`) Defines the bottom padding for each section with a numerical value and its unit (`paddingBottom: '10px'`, `paddingBottom: '10em'`...). Useful when using a fixed footer.
* `fixedElements`: (default `null`) Defines which elements will be taken off the scrolling structure of the plugin, which is necessary when using the `css3` option to keep them fixed. It requires a string with the Javascript selectors for those elements. (For example: `fixedElements: '#element1, .element2'`)
* `normalScrollElements`: (default `null`) If you want to avoid the auto scroll when scrolling over some elements, this is the option you need to use. (Useful for maps, scrolling divs, etc.) It requires a string with the Javascript selectors for those elements. (For example: `normalScrollElements: '#element1, .element2'`). This option should not be applied to any section/slide element itself.
* `bigSectionsDestination`: (default `null`) Defines how to scroll to a section whose height is bigger than the viewport when not using `scrollOverflow: true`. (Read how to create smaller or bigger sections.) By default fullPage.js scrolls to the top if you come from a section above the destination one, and to the bottom if you come from a section below it. Possible values are `top`, `bottom`, `null`.
* `keyboardScrolling`: (default `true`) Defines if the content can be navigated using the keyboard.
* `touchSensitivity`: (default `5`) Defines a percentage of the browser's window width/height: how far a swipe must measure to navigate to the next section/slide.
* `continuousVertical`: (default `false`) Defines whether scrolling down in the last section should scroll down to the first one, and whether scrolling up in the first section should scroll up to the last one. Not compatible with `loopTop`, `loopBottom` or any scroll bar present in the site (`scrollBar: true` or `autoScrolling: false`).
* `continuousHorizontal`: (default `false`) Extension of fullpage.js. Defines whether sliding right in the last slide should slide right to the first one, and whether sliding left in the first slide should slide left to the last one. Not compatible with `loopHorizontal`. Requires fullpage.js >= 3.0.1.
* `scrollHorizontally`: (default `false`) Extension of fullpage.js. Defines whether to slide horizontally within sliders by using the mouse wheel or trackpad. It can only be used with `autoScrolling: true`. Ideal for storytelling. Requires fullpage.js >= 3.0.1.
* `interlockedSlides`: (default `false`) Extension of fullpage.js. Determines whether moving one horizontal slider will force the sliding of sliders in other sections in the same direction. Possible values are `true`, `false` or an array with the interlocked sections, for example `[1,3,5]`, starting at 1. Requires fullpage.js >= 3.0.1.
* `dragAndMove`: (default `false`) Extension of fullpage.js. Enables or disables the dragging and flicking of sections and slides by using mouse or fingers. Requires fullpage.js >= 3.0.1. Possible values are:
  * `true`: enables the feature.
  * `false`: disables the feature.
  * `vertical`: enables the feature only vertically.
  * `horizontal`: enables the feature only horizontally.
  * `fingersonly`: enables the feature for touch devices only.
  * `mouseonly`: enables the feature for desktop devices only (mouse and trackpad).
* `offsetSections`: (default `false`) Extension of fullpage.js. Provides a way to use non full screen sections based on percentage. Ideal to show visitors there's more content in the site by showing part of the next or previous section. Requires fullPage.js >= 3.0.1. To define the percentage of each section the attribute `data-percentage` must be used. The centering of the section in the viewport can be determined by using a boolean value in the attribute `data-centered` (defaults to `true` if not specified). For example:

  ```
  <div class="section" data-percentage="80" data-centered="true">
  ```
* `resetSliders`: (default `false`). Extension of fullpage.js. Defines whether or not to reset every slider after leaving its section. Requires fullpage.js >= 3.0.1.
* `fadingEffect`: (default `false`). Extension of fullpage.js. Defines whether to use a fading effect instead of the default scrolling one. Possible values are `true`, `false`, `sections`, `slides`. It can therefore be applied just vertically or horizontally, or to both at the same time. It can only be used with `autoScrolling: true`. Requires fullpage.js >= 3.0.1.
* `animateAnchor`: (default `true`) Defines whether the load of the site, when given an anchor (#), will scroll with animation to its destination or will directly load on the given section.
* `recordHistory`: (default `true`) Defines whether to push the state of the site to the browser's history. When set to `true` each section/slide of the site will act as a new page, and the back and forward buttons of the browser will scroll the sections/slides to reach the previous or next state of the site. When set to `false`, the URL will keep changing but will have no effect on the browser's history. This option is automatically turned off when using `autoScrolling: false`.
* `menu`: (default `false`) A selector can be used to specify the menu to link with the sections. This way the scrolling of the sections will activate the corresponding element in the menu using the class `active`. This won't generate a menu but will just add the `active` class to the element in the given menu with the corresponding anchor links. In order to link the elements of the menu with the sections, an HTML 5 data attribute (`data-menuanchor`) is needed, with the same anchor links as used within the sections. Example:

  ```
  <ul id="myMenu">
      <li data-menuanchor="firstPage" class="active"><a href="#firstPage">First section</a></li>
      <li data-menuanchor="secondPage"><a href="#secondPage">Second section</a></li>
      <li data-menuanchor="thirdPage"><a href="#thirdPage">Third section</a></li>
      <li data-menuanchor="fourthPage"><a href="#fourthPage">Fourth section</a></li>
  </ul>
  ```

  ```
  new fullpage('#fullpage', {
      anchors: ['firstPage', 'secondPage', 'thirdPage', 'fourthPage', 'lastPage'],
      menu: '#myMenu'
  });
  ```

  Note: the menu element should be placed outside the fullpage wrapper in order to avoid problems when using `css3: true`. Otherwise it will be appended to the `body` by the plugin itself.
* `navigation`: (default `false`) If set to `true`, it will show a navigation bar made up of small circles.
* `navigationPosition`: (default `none`) It can be set to `left` or `right` and defines the position in which the navigation bar will be shown (if using one).
* `navigationTooltips`: (default `[]`) Defines the tooltips to show for the navigation circles in case they are being used. Example: `navigationTooltips: ['firstSlide', 'secondSlide']`. You can also define them by using the attribute `data-tooltip` in each section if you prefer.
* `showActiveTooltip`: (default `false`) Shows a persistent tooltip for the actively viewed section in the vertical navigation.
* `slidesNavigation`: (default `false`) If set to `true` it will show a navigation bar made up of small circles for each landscape slider on the site.
* `slidesNavPosition`: (default `bottom`) Defines the position of the landscape navigation bar for sliders. Admits `top` and `bottom` as values. You may want to modify the CSS styles to determine the distance from the top or bottom, as well as any other style such as color.
* `scrollOverflow`: (default `true`) Defines whether or not to create a scroll for the section/slide in case its content is bigger than its height. In order to prevent fullpage.js from creating the scrollbar in certain sections or slides, use the class `fp-noscroll`. You can also prevent scrollOverflow from getting applied in responsive mode by using the class `fp-auto-height-responsive` in the section element.
* `scrollOverflowReset`: (default `false`) Extension of fullpage.js. Possible values are `true`, `false`, `sections`, `slides`. When set to `true` it scrolls up the content of the section/slide with a scroll bar when leaving to another section/slide. This way the section/slide will always show the start of its content, even when scrolling from a section underneath it. Adding the class `fp-no-scrollOverflowReset` on the section or slide will disable this feature for that specific panel.
* `scrollOverflowMacStyle`: (default `false`). When active, this option will use a "mac style" scrollbar instead of the default one, which looks quite different on Windows computers.
* `sectionSelector`: (default `.section`) Defines the Javascript selector used for the plugin sections. It might need to be changed sometimes to avoid conflicts with other plugins using the same selectors as fullpage.js.
* `slideSelector`: (default `.slide`) Defines the Javascript selector used for the plugin slides. It might need to be changed sometimes to avoid conflicts with other plugins using the same selectors as fullpage.js.
* `responsiveWidth`: (default `0`) A normal scroll (`autoScrolling: false`) will be used under the defined width in pixels. A class `fp-responsive` is added to the body tag in case the user wants to use it for their own responsive CSS. For example, if set to 900, whenever the browser's width is less than 900 the plugin will scroll like a normal site.
* `responsiveHeight`: (default `0`) A normal scroll (`autoScrolling: false`) will be used under the defined height in pixels. A class `fp-responsive` is added to the body tag in case the user wants to use it for their own responsive CSS. For example, if set to 900, whenever the browser's height is less than 900 the plugin will scroll like a normal site.
* `responsiveSlides`: (default `false`) Extension of fullpage.js. When set to `true`, slides will be turned into vertical sections when responsive mode is fired (by using the `responsiveWidth` or `responsiveHeight` options detailed above). Requires fullpage.js >= 3.0.1.
* `parallax`: (default `false`) Extension of fullpage.js. Defines whether or not to use the parallax backgrounds effects on sections/slides. Read more about how to apply the parallax option.
* `parallaxOptions`: (default: `{ type: 'reveal', percentage: 62, property: 'translate' }`). Allows configuring the parameters for the parallax backgrounds effect when using the option `parallax: true`. Read more about how to apply the parallax option.
* `dropEffect`: (default `false`) Extension of fullpage.js. Defines whether or not to use the drop effect on sections/slides. Read more about how to apply the drop effect option.
* `dropEffectOptions`: (default: `{ speed: 2300, color: '#F82F4D', zIndex: 9999 }`). Allows configuring the parameters for the drop effect when using the option `dropEffect: true`. Read more about how to apply the drop effect option.
* `waterEffect`: (default `false`) Extension of fullpage.js. Defines whether or not to use the water effect on sections/slides. Read more about how to apply the water effect option.
* `waterEffectOptions`: (default: `{ animateContent: true, animateOnMouseMove: true }`). Allows configuring the parameters for the water effect when using the option `waterEffect: true`. Read more about how to apply the water effect option.
* `cards`: (default `false`) Extension of fullpage.js. Defines whether or not to use the cards effect on sections/slides. Read more about how to apply the cards option.
* `cardsOptions`: (default: `{ perspective: 100, fadeContent: true, fadeBackground: true }`). Allows you to configure the parameters for the cards effect when using the option `cards: true`. Read more about how to apply the cards option.
* `lazyLoading`: (default `true`) Lazy loading is active by default, which means it will lazy load any media element containing the attribute `data-src` as detailed in the Lazy Loading docs. If you want to use any other lazy loading library, you can disable this fullpage.js feature.
* `observer`: (default `true`) Defines whether or not to observe changes in the HTML structure of the page. When enabled, fullPage.js will automatically react to those changes and update itself accordingly. Ideal when adding, removing or hiding sections or slides.
* `credits`: (default `{ enabled: true, label: 'Made with fullpage.js', position: 'right' }`). Defines whether to use fullPage.js credits. As per clauses 0, 4, 5 and 7 of the GPLv3 license, those using fullPage.js under the GPLv3 are required to give prominent notice that fullPage.js is in use. We recommend including attribution by keeping this option enabled.
## Methods
You can see them in action here.
### getActiveSection()
Gets an Object (type Section) containing the active section and its properties.
```
fullpage_api.getActiveSection();
```
### getActiveSlide()
Gets an Object (type Slide) containing the active slide and its properties.
```
fullpage_api.getActiveSlide();
```
### getScrollY() & getScrollX()
`getScrollY` gets the Y position of the fullPage wrapper. `getScrollX` gets the X position of the active horizontal slide.
```
fullpage_api.getScrollY();
fullpage_api.getScrollX();
```
### moveSectionUp()
Scrolls one section up:
```
fullpage_api.moveSectionUp();
```
### moveSectionDown()
Scrolls one section down:
```
fullpage_api.moveSectionDown();
```
### moveTo(section, slide)
Scrolls the page to the given section and slide. The first section has index 1, whilst the first slide (the visible one by default) has index 0.
```
//Scrolling to the 3rd section (with index 3) in the site
fullpage_api.moveTo(3, 0);
//Which is the same as
fullpage_api.moveTo(3);
```
### silentMoveTo(section, slide)
Exactly the same as `moveTo`, but in this case the scroll is performed without animation: a direct jump to the destination.
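Usage follows the same pattern as `moveTo`:

```
fullpage_api.silentMoveTo(3, 0);
```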
### moveSlideRight()
Scrolls the horizontal slider of the current section to the next slide:
```
fullpage_api.moveSlideRight();
```
### moveSlideLeft()
Scrolls the horizontal slider of the current section to the previous slide:
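As with the other movement methods:

```
fullpage_api.moveSlideLeft();
```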
### setAutoScrolling(boolean)
Sets the scrolling configuration in real time. Defines the way the page scrolling behaves. If set to `true`, it will use the "automatic" scrolling; otherwise, it will use the "manual" or "normal" scrolling of the site.
```
fullpage_api.setAutoScrolling(false);
```
### setFitToSection(boolean)
Sets the value of the option `fitToSection`, determining whether to fit the section in the screen or not.
```
fullpage_api.setFitToSection(false);
```
### fitToSection()
Scrolls to the nearest active section, fitting it in the viewport.
```
fullpage_api.fitToSection();
```
### setLockAnchors(boolean)
Sets the value of the option `lockAnchors`, determining whether anchors will have any effect in the URL or not.
```
fullpage_api.setLockAnchors(false);
```
### setAllowScrolling(boolean, [directions])
Adds or removes the possibility of scrolling through sections/slides by using the mouse wheel/trackpad or touch gestures (active by default). Note this won't disable keyboard scrolling; you would need to use `setKeyboardScrolling` for that.
```
//disabling scrolling
fullpage_api.setAllowScrolling(false);
//disabling scrolling down
fullpage_api.setAllowScrolling(false, 'down');
//disabling scrolling down and right
fullpage_api.setAllowScrolling(false, 'down, right');
```
### setKeyboardScrolling(boolean, [directions])
Adds or removes the possibility of scrolling through sections by using the keyboard (active by default).
```
//disabling all keyboard scrolling
fullpage_api.setKeyboardScrolling(false);
//disabling keyboard scrolling down
fullpage_api.setKeyboardScrolling(false, 'down');
//disabling keyboard scrolling down and right
fullpage_api.setKeyboardScrolling(false, 'down, right');
```
### setRecordHistory(boolean)
Defines whether to record the history for each hash change in the URL.
```
fullpage_api.setRecordHistory(false);
```
### setScrollingSpeed(milliseconds)
Defines the scrolling speed in milliseconds.
```
fullpage_api.setScrollingSpeed(700);
```
### destroy(type)
Destroys the plugin events and optionally its HTML markup and styles. Ideal when using AJAX to load content.

* `type`: (optional parameter) can be empty or `all`. If `all` is passed, the HTML markup and styles used by fullpage.js will be removed. This way the original HTML markup, the one used before any plugin modification, will be maintained.
```
//destroying all Javascript events created by fullPage.js (scrolls, hashchange in the URL...)
fullpage_api.destroy();
//destroying all Javascript events and any modification done by fullPage.js over your original HTML markup.
fullpage_api.destroy('all');
```
### reBuild()
Updates the DOM structure to fit the new window size or its contents. Ideal to use in combination with AJAX calls or external changes in the DOM structure of the site, especially when using `scrollOverflow: true`.
```
fullpage_api.reBuild();
```
### setResponsive(boolean)
Sets the responsive mode of the page. When set to `true`, autoScrolling will be turned off and the result will be exactly the same as when the `responsiveWidth` or `responsiveHeight` options get fired.
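Usage follows the same pattern as the other setters:

```
fullpage_api.setResponsive(true);
```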
### responsiveSlides.toSections()
Extension of fullpage.js. Requires fullpage.js >= 3.0.1. Turns horizontal slides into vertical sections.
```
fullpage_api.responsiveSlides.toSections();
```
### responsiveSlides.toSlides()
Extension of fullpage.js. Requires fullpage.js >= 3.0.1. Turns the original slides (now converted into vertical sections) back into horizontal slides.
```
fullpage_api.responsiveSlides.toSlides();
```
## Callbacks
You can see them in action here.
Some callbacks, such as `onLeave`, will receive Object type parameters with the following properties:

* `anchor`: (String) item's anchor.
* `index`: (Number) item's index.
* `item`: (DOM element) item element.
* `isFirst`: (Boolean) determines if the item is the first child.
* `isLast`: (Boolean) determines if the item is the last child.
### afterLoad (`origin`, `destination`, `direction`, `trigger`)
Callback fired once the sections have been loaded, after the scrolling has ended. Parameters:
```
new fullpage('#fullpage', {
afterLoad: function(origin, destination, direction, trigger){
var origin = this;
//using index
if(origin.index == 2){
alert("Section 3 ended loading");
}
//using anchorLink
if(origin.anchor == 'secondSlide'){
alert("Section 2 ended loading");
}
}
});
```
### onLeave (`origin`, `destination`, `direction`, `trigger`)
This callback is fired once the user leaves a section, in the transition to the new section. Returning `false` will cancel the move before it takes place.
Parameters:
```
new fullpage('#fullpage', {
onLeave: function(origin, destination, direction, trigger){
var leavingSection = this;
//after leaving section 2
if(origin.index == 1 && direction =='down'){
alert("Going to section 3!");
}
else if(origin.index == 1 && direction == 'up'){
alert("Going to section 1!");
}
}
});
```
### beforeLeave (`origin`, `destination`, `direction`, `trigger`)
Demo This callback is fired right before leaving the section, just before the transition takes place.
You can use this callback to prevent and cancel the scroll before it takes place by returning `false`.
Parameters:
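The parameters are the same as for `onLeave`. A minimal sketch (the section index and direction checked here are only an illustration):

```
new fullpage('#fullpage', {
	beforeLeave: function(origin, destination, direction, trigger){
		//cancel scrolling down from the first section
		if(origin.index == 0 && direction == 'down'){
			return false;
		}
	}
});
```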
### afterRender()
Demo This callback is fired just after the structure of the page is generated. This is the callback you want to use to initialize other plugins or fire any code which requires the document to be ready (as this plugin modifies the DOM to create the resulting structure). See FAQs for more info.
Example:
```
new fullpage('#fullpage', {
afterRender: function(){
var pluginContainer = this;
alert("The resulting DOM structure is ready");
}
});
```
### afterResize(`width`, `height`)
Demo This callback is fired after resizing the browser's window. Just after the sections are resized.
* `width`: (Number) window's width.
* `height`: (Number) window's height.
Example:
```
new fullpage('#fullpage', {
afterResize: function(width, height){
var fullpageContainer = this;
alert("The sections have finished resizing");
}
});
```
### afterReBuild()
Demo This callback is fired after manually re-building fullpage.js by calling `fullpage_api.reBuild()`.
Example:
```
new fullpage('#fullpage', {
	afterReBuild: function(){
		console.log("fullPage.js has been manually rebuilt");
	}
});
```
### afterResponsive(`isResponsive`)
Demo This callback is fired after fullpage.js changes from normal to responsive mode or from responsive mode to normal mode.
* `isResponsive`: (Boolean) determines if it enters into responsive mode (`true`) or goes back to normal mode (`false`).
Example:
```
new fullpage('#fullpage', {
afterResponsive: function(isResponsive){
alert("Is responsive: " + isResponsive);
}
});
```
### afterSlideLoad (`section`, `origin`, `destination`, `direction`, `trigger`)
Demo Callback fired once a slide of a section has been loaded, after the scrolling has ended. Parameters:
```
new fullpage('#fullpage', {
	afterSlideLoad: function(section, origin, destination, direction, trigger){
		var loadedSlide = this;

		//first slide of the second section
		if(section.anchor == 'secondPage' && destination.index == 1){
			alert("First slide loaded");
		}

		//second slide of the second section (supposing #secondSlide is the
		//anchor for the second slide)
		if(section.index == 1 && destination.anchor == 'secondSlide'){
			alert("Second slide loaded");
		}
	}
});
```
### onSlideLeave (`section`, `origin`, `destination`, `direction`, `trigger`)
Demo This callback is fired once the user leaves a slide to go to another, in the transition to the new slide. Returning `false` will cancel the move before it takes place.
Parameters:
```
new fullpage('#fullpage', {
onSlideLeave: function( section, origin, destination, direction, trigger){
var leavingSlide = this;
//leaving the first slide of the 2nd Section to the right
if(section.index == 1 && origin.index == 0 && direction == 'right'){
alert("Leaving the first slide!!");
}
//leaving the 3rd slide of the 2nd Section to the left
if(section.index == 1 && origin.index == 2 && direction == 'left'){
alert("Going to slide 2! ");
}
}
});
```
# Cancelling a move before it takes place
You can cancel a move by returning `false` in the `onSlideLeave` callback, the same as when cancelling a movement with `onLeave`.
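As a sketch (the slide index and direction checked here are only an illustration):

```
new fullpage('#fullpage', {
	onSlideLeave: function(section, origin, destination, direction, trigger){
		//cancel leaving the first slide to the right
		if(origin.index == 0 && direction == 'right'){
			return false;
		}
	}
});
```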
### onScrollOverflow (`section`, `slide`, `position`, `direction`)
Demo This callback gets fired when scrolling inside a scrollable section when using the fullPage.js option `scrollOverflow: true`.
Parameters:
* `section`: (Object) active vertical section.
* `slide`: (Object) horizontal slide of origin.
* `position`: (Integer) scrolled amount within the section/slide. Starts at 0.
* `direction`: (String) `up` or `down`.
Example:
```
new fullpage('#fullpage', {
onScrollOverflow: function( section, slide, position, direction){
console.log(section);
console.log("position: " + position);
}
});
```
# Fullpane demo
# Drag & Drop Builder
Like what you see on our demo? Quickly replicate your site like our demo setup (it is erasable). Then you just need to replace the images and text. All done!
Style almost every element of the theme from header to footer. Just point and select, and see it live on the preview. No CSS coding required.
Easily insert banner ads anywhere in the template without any coding
One-click auto theme update in admin panel, no need to download or FTP.
Establish your brand with customizable website logo text, or upload your own logo.
Carefully coded with search engine optimization (SEO) best practices in mind.
Over 600 Google Web Fonts can be selected for font styling.
Themify comes with Social Icons right out of the box, no need to find your own.
Translate or create a multilingual site with WPML plugin with Themify themes.
Transform your website into an online shop with WooCommerce, fully supported in Themify.
Themify themes are guaranteed to work with the latest version of WordPress.
# fullPage.js plugin for Gutenberg
Create a full-screen page by using the official plugin for WordPress.
Buy Now!
FullPage is fully responsive. You can turn it off under a certain width or height, and even allow sections to be bigger than the viewport so content can fit.
Working perfectly in mobile phones and tablets. Designed for all, not just a few! Scroll, use the mousewheel or drag and move! You choose your experience!
Sections and slides can be smaller than the viewport height if required. Because we don't always need everything in full-screen! Ideal for footers!
Fullpage.js is compatible with all modern browsers and even some old ones like Internet Explorer 9! Get a smooth slideshow for your WordPress Gutenberg builder!
# fullPage.js plugin for Elementor
Create a full-screen page by using the official Elementor plugin for WordPress. Scroll snap like a boss!
Buy Now Check Demo
Use fullPage.js with one of the world's leading WordPress page builders. Drag and drop features, visual design, pixel perfect, developer friendly, responsive etc.
Creating a fullpage site is much more than just having 100vh sections. With fullPage.js you have total control over more than 50 different configurations.
Fully compatible with all browsers, even IE 9! And working with any theme! Continue to use your favorite tools and enhance your site with the most popular scrolling effect!
We are the creators of the fullPage.js JavaScript library, the most popular framework to create full screen snap sites. Who better to develop a plugin with this effect than the team that created the technology to do so?
fullpage isn't just a snap scrolling effect ported to WordPress. It is a whole framework with more than 50 options that allows you to create a site exactly as you want to. We are focused on creating the best fullscreen snap experience.
This WordPress add-on was born because fullPage users were tired of finding poor implementations of fullscreen snap effects that only made them lose their time and money. We listened to their complaints and created the most complete yet easy to use plugin of its kind.
# Buy a fullPage license
$15
$159
| | Open Source | Hobby | Business | OEM |
| --- | --- | --- | --- | --- |
| License type | The GPLv3 license | Commercial | Commercial | OEM/SaaS Commercial |
| Number of developers | Unlimited | 1 developer | Unlimited developers | Unlimited developers |
| Number of servers and sites | Unlimited | 1 | Unlimited | Unlimited |
| Premium support | ― | ― | | |
| Mobile applications | ― | | | |
| Downloadable software | ― | ― | | Contact us |
| Distribution rights (SaaS / OEM) | ― | ― | | Contact us |
| Intranet | ― | ― | ― | Contact us |
fullPage is an Open Source application licensed under a GPLv3 license. This license allows you to use fullPage in Open Source projects but it requires your project to be public, provide attribution and be licensed under GPLv3. Questions? Read the GPL FAQ.
Commercial Licenses are priced per team member. A team member is someone on your team working on a project that uses Fullpage. This includes implementing, integrating, or changing the code.
All licenses include 12 months of free updates up to the major version. Renew your plan to continue receiving updates for another year or use a "One-time payment" plan.
No matter the purpose. If your site/app is not open sourced and compatible with GPLv3, you need to get a fullPage.js Commercial License. That's the only way to get a non-GPLv3 license and keep your code proprietary, to yourself.
Unless you need distribution rights (OEM / SaaS license), with the Team and Business licenses you pay only once if you choose a "One-time payment" plan and use it in as many sites as you want! No monthly or yearly fees and free updates!
You'll get a link to the invoice on the purchase confirmation email sent by Gumroad, the company dealing with the secure payment. Want to add company VAT number? Read this.
fullPage.js by @IMAC2 is awesome! Extensions are super, as well. Great docs. Highly recommended! Fast support response, as well!
We're proud to showcase one of our latest projects for Highball Brands, using the amazing fullPage by @IMAC2 -
Without fullpage.js the existence of my website would have been quite complicated. I'm not a developer, so the only option to implement my idea was to use "fullpage for Elementor" by @IMAC2 Thank you Álvaro and company! Great job!
Enjoying playing with @fullpagejs for my marketing site. It feels good to get this style of site structure under my belt. Few clients asking
#fullpage.js is the best One-page solution that I’ve ever used. So pleased with it, that I always come back to it....and every time just better and better. So coooool!
Just started using fullpage.js by @IMAC2, so far very impressed. A+ support and the extensions and another level of functionality by just pasting a few lines of code. Can't wait to play with it a bit more.
Thanks for fullpage.js @IMAC2 It is so much fun to work with! http://alvarotrigo.com/fullPage/
@IMAC2 Ommmmmmg, totally in love with fullPage.js. I didn't know it.
Sure! You can download it here or find the code on Github.
You can also use different CDNs and services like npm and bower. More about it in the docs.
You can choose between the annual payment subscription and the one-time payment. Annual subscriptions include free updates for the next 12 months. One-time payments include free updates until a major version.
When a new major version becomes available. Such as changing from fullPage.js v4 to fullPage.js v5 (not when changing from fullpage.js 4.1 to fullPage.js 4.2) you will have to purchase a new license if you want to update.
No. fullPage.js commercial license only provides the license for fullpage.js.
fullPage.js extensions are sold separately through the extensions page.
Yes! If you want to use fullPage to develop any project that is not open sourced or that is not compatible with GPLv3 license, like sites, themes, projects, and applications, you should use the Commercial license (Hobby, Professional or Business). With this option, your source code is kept proprietary. Which means, you won't have to change your whole application source code to an open source license.
Notice fullpage is released under the GPLv3 license, which allows you to use it for open source projects only. Read more to learn about the GPLv3 license.
All licenses are for life except the OEM license which requires annual renewal for distribution rights.
Your license also comes with a subscription of updates until a major version is released. If we release something new during that period, you'll get it for free!
Sure! You can use the source code and edit it, so you can easily integrate it with your product.
Just bear in mind you won't be able to get free support for those modifications, and you won't be able to use them together with fullPage.js extensions, as they require another fullpage file which is minified. Contact us if you need this feature.
Additionally, you can contract support services for 80 GBP an hour. Contact us.
Any other license includes basic support by the Github issues forum or Stackoverflow using the tag "fullpage.js".
The Team and Business licenses allow you to use the code on an unlimited number of domains, servers, and projects, regardless of the environment type (production, QA etc.)
When using the OEM/SaaS license and the Hobby license you'll need a license per project that requires distribution rights.
Not this time! We don't offer refunds once the purchase has been made. But you can of course try fullPage.js before purchasing the license, making sure that it fits your requirements. So please make sure to do so before the purchase.
Sure! It is in fact very easy!
Yes, you do. Extensions for version 3 of fullPage are not compatible with fullPage version 4.
Sure! Gumroad, the platform used for the purchase, allows you to add a VAT number. However, you are only allowed to do so after the purchase.
As detailed in Gumroad's website, you'll be able to get a full refund for the VAT value in 2-3 days after the request
Check out the fullPage.js Help Center where you'll be able to find many more answers.
This Software License Agreement (the “Agreement”) is between Yoloop SL (“Yoloop SL”) and You (including your agents and affiliates), a commercial licensee of Yoloop SL's software. If you have not purchased a Fullpage commercial license from alvarotrigo.com, these terms do not apply to you, and your use of the Yoloop SL's software is instead governed by the GNU General Public License, version 3.
Purchasing a Fullpage Commercial Hobby License applies this Agreement to you. Otherwise, you may use fullPage.js under the GPLv3.
Subject to the terms of this Agreement, Yoloop SL grants to You a revocable, non-exclusive, non-transferable license: (i) for one (1) Licensed Developer to use the Software to create a single Application; (ii) with no permission to distribute the Software as downloadable or installable software; and (iii) for End Users to use the Software as incorporated into Your Applications in accordance with the terms of this Agreement.
This license grants one individual Developer the right to use fullpage.js for an unlimited number of Applications. It is not exclusive (other companies and developers can use fullpage.js) and is non-transferable (you cannot transfer this license to another company).
This license is for use of fullpage.js, not ownership of its intellectual property. Yoloop SL continues to own fullpage.js.
You cannot create something very similar to fullpage.js, like a super-fullpage.js. Make something unique.
You may not distribute the Software or Modifications.
This Agreement applies for as long as you use fullpage.js.
Each party shall have the right to terminate this Agreement and the license granted hereunder with notice if the other party breaches any of the material terms of this Agreement, and such party fails to cure such material breach within thirty (30) days of receipt of notice from the non-breaching party. Upon termination of this Agreement, all licenses granted to You in this Agreement shall terminate automatically and You shall immediately cease use and distribution of the Software.
We can end this Agreement if you breach any terms and do not resolve the breach within 30 days of notice from Yoloop SL. After termination, you must stop using and distributing fullpage.js.
Upon expiration or termination of the Agreement You will immediately: (i) discontinue any use of Deliverables, including by Authorized Users; (ii) remove and/or return them to Licensor; and (iii) certify to said discontinuance and removal or return of the Deliverables.
If someone sues Yoloop SL as a result of your use of fullpage.js, you are responsible for any resulting cost to Yoloop SL.
All payments under this Agreement are due to Yoloop SL upon Your purchase of a license to the Software.
You agree to pay the Subscription fees specified on a dedicated webpage (https://alvarotrigo.com/fullPage/pricing) throughout the term of the Agreement. Where the payments are made:
There's an annual subscription payment that will be paid in advance for the subsequent year. Recurrent payment will be charged automatically.
In case of acquiring the one-time purchase, there's a single payment charged in advance. You'll get a permanent access to the major version of fullPage.js.
Yoloop SL will provide you with a License Key with each purchase. A new Key is delivered to You every 12 months, unless when using a one-time payment License.
The annual License Key works with the Software versions released within 12 months after the purchase. If You want to use the Software in any version released after the 12-month period, the License Key must be replaced with a new one.
You'll be provided with a License Key on every recurrent payment or when acquiring a one-time payment license.
An annual License Key will work in fullPage.js versions released within 12 months after the purchase. If you want to update to a nonsupported version you'll have to acquire another license.
During the term of this agreement, Licensee who uses a license with basic technical support included has access to the Software's online support services via the project's GitHub issues forum or stackoverflow with the fullpage.js tag, which means that Licensee will get answers to technical questions with no guarantee of solving the issue and whenever we can provide them.
If Licensee benefits from Premium Support (Business License), they will be able to ask through email, phone or Skype, will get answers within three (3) business days, and issues they report will have higher priority.
An isolated reproduction might be required upon request. That is, a reproduction of the scenario with the minimum external code.
Support is provided by mail. Support does not include questions related to other programming languages or frameworks and only covers questions regarding the Software's implementation, issues or bugs.
SaaS shall mean a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted;
SaaS Marketplace shall mean a Marketplace that either (i) offers features to one or more of the parties or (ii) uses a software component to monetize the transaction through a transaction fee or a subscription.
OEM shall mean when the software requires on-premise installation.
Marketing. You agree to Yoloop SL’s use of Your name, trade name, and trademark, for use in Yoloop SL’s marketing materials and its website, solely to identify you as a customer of Yoloop SL.
Vertical Continuous Scrolling Enabled
Keep on scrolling on the same direction when reaching the first/last section.
And it will animate down to the first section
No Anchor Links in the URL (#)
Sections won't contain anchor links. It's simpler to configure but a bit more restrictive.
But back button won't work!
# Navigation dots
An easy and beautiful way to navigate through the sections
You can even click on the navigation and jump directly to another section.
# Sliders navigation bullets
Create a navigation for your horizontal sliders
Give visitors a reference and allow them to jump directly to another slide.
It's a lightweight JavaScript library that helps you make full-screen pages with a snap scroll effect. Learn more

Importing an extra file and adding a single option. The effect is fully customizable. You can learn more on the Water Effect documentation.
Sure! You'll be able to define which sections have the effect and on which directions by simply adding the attribute data-drop in the sections where you want to have it.
For example, if you only want to have the drop effect on the first section when scrolling down you can use data-drop="down" on the first .section element.
Yeap! Just use the methods fullpage_api.waterEffect.turnOn(); or fullpage_api.waterEffect.destroy(); whenever you need to.
Yes! You can in fact disable it whenever you want, but if you want to base it on the screen dimensions that's fine too. It's as easy as using the responsive options available in fullPage.js: responsiveWidth or responsiveHeight, where you define a threshold in pixels.
Nope! However you are entitled to a 30 days full refund as long as you do not activate the effect for any domain or ask for support. Read license terms.
It will be available on the 30th of July for the official fullPage.js plugin for Elementor and Gutenberg builders for WordPress.
And soon after that for Divi too!
No. But you can get this effect and many more under a single package for a very reduced price. Check it out!
If you haven't seen your question answered here you can contact us and we will happily answer your questions.
# CARDS
Here you can define the extension settings
Use the wow factor to impress your visitors!
So easy that it is even scary!
fullPage.js is used by the biggest companies on the planet.
Turn it off whenever you need.
Make use of fullPage.js responsive options.
# DROP
It's a lightweight JavaScript library that helps you make full-screen pages with a snap scroll effect. Learn more
How can I use this effect?
Importing an extra file and adding a single option. The effect is fully customizable. You can learn more on the Drop Effect documentation.
Can I use this effect in specific sections only?
Sure! You'll be able to define which sections have the effect and on which directions by simply adding the attribute data-drop in the sections where you want to have it.
For example, if you only want to have the drop effect on the first section when scrolling down you can use data-drop="down" on the first .section element.
Can I turn it off and on dynamically?
Yeap! Just use the methods fullpage_api.dropEffect.turnOn(); or fullpage_api.dropEffect.destroy(); whenever you need to.
Can I disable the effect on mobile phones?
Can I try it before buying it?
Nope! However you are entitled to a 30 days full refund as long as you do not activate the effect for any domain or ask for support. Read license terms.
Is the Drop Effect available for WordPress?
It couldn't be otherwise! Yes, it is! Available for the official fullPage.js plugin for Elementor and Gutenberg builders for WordPress.
# Responsive Width
This example turns the page into normal scroll when the viewport width gets smaller than 1100px.
The same could be applied based on the height.
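A configuration sketch for this demo (the 1100px threshold matches the description above; `responsiveHeight` works the same way for the viewport height):

```
new fullpage('#fullpage', {
	responsiveWidth: 1100  //fall back to normal scrolling under 1100px wide
});
```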
# Scrolling inside sections
Easy and useful! Just make use of the option `scrollOverflow` for it and add the `scrolloverflow` vendor library!
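A configuration sketch (assuming the `scrolloverflow` vendor file has been loaded beforehand, as required by fullPage.js v3):

```
new fullpage('#fullpage', {
	scrollOverflow: true  //add a scrollbar inside sections taller than the viewport
});
```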
Eu nec ferri molestie consequat, vel at alia dolore putant. Labore philosophia ut vix. In vis nostrud interesset appellantur, vis et tation feugiat scripserit. Te nec noster suavitate persecuti. Diceret erroribus cu vix, alii omnes ei sit. Sea an corrumpit patrioque, virtute accumsan nominavi et usu, ex mei dolore vocibus incorrupte. Duo ea saperet tacimates. Sed possim prodesset no, per novum putent doctus ea. Eu mea magna aliquip graecis, pri corpora officiis complectitur ei, lorem saepe consetetur his ad. Meis consulatu ei vis, an altera ocurreret interesset qui. Eu ponderum comprehensam his, case antiopam pri te. Mel ne partem consequat instructior. Ad dicunt malorum sea, ex qui omnes invenire gubergren. Ius cu autem aliquando, pri vide ornatus perpetua an, no has epicuri verterem. Nam at animal pertinax efficiantur. Per alienum torquatos eu. Sed saepe quodsi et, ullum choro definitionem sed et. Ullum elitr comprehensam sed at, sint illum propriae per eu. Eu enim laudem reformidans vel. Pro clita quando ad. Usu te virtute quaestio, ut eruditi tacimates volutpat per. Affert accusamus duo ex, ea pri habeo graece, cu magna dolorum sea. Quas dictas assueverit ad qui, cu duo harum fabulas apeirian, ullum gubergren et sit. Ne cetero recusabo adipiscing quo, cu harum quaestio neglegentur cum. Has tation aliquip ad, virtute volumus definitionem mel ne. Nobis audiam civibus ius at. Ei eum hinc mutat inciderint. Cu maluisset assentior per, graecis ponderum no mel. Eum eu vitae quando gloriatur, cum graece percipitur no, sed errem maiestatis te. Sed porro epicuri te, ad nam malorum verterem. Ea zril aliquip euismod sed.
Content is a priority. Even if it is so large :)
# Fixed elements
Create your own headers and footers
You will need to place your header and footer outside the plugin's wrapper. This way it won't move on scrolling. Take a look at the source code of this page.
# One Single Section
```
$(document).ready(function() {
	$('#fullpage').fullpage({
		autoScrolling: false,
		fitToSection: false
	});
});
```
```
<div id="myContainer">
	<div class="section">
		<div class="slide">
			<h1>fullPage.js</h1>
		</div>
		<div class="slide">
			<h1>And a normal website under it</h1>
		</div>
		<div class="slide">
			<h1>A great single slider!</h1>
		</div>
	</div>
</div>
```
Place the rest of the site under it:
```
<p>Just a demo</p>
```
# fullPage.js plugin for Divi builder
Create a full-screen page by using the official Divi plugin for WordPress. Want to get notified about its release? Subscribe!
Use fullPage.js with one of the top builders for WordPress. Drag and drop features, true visual editing, custom css code, responsive editing etc.
Working perfectly in touch screen devices such as phones and tablets. Scroll, use the mousewheel or drag and move! Combine it with Divi and make a stunning site!
Totally compatible with all browsers! Even Internet Explorer 9! Get a smooth slideshow for your WordPress Divi builder!
```
origin: {"anchor":"firstPage","item":{},"index":0,"isLast":false,"isFirst":true,"isActive":true}
destination: {"anchor":"firstPage","item":{},"index":0,"isLast":false,"isFirst":true,"isActive":true}
direction: null
```
Callbacks!
All the callbacks you need to do whatever you need.
Just what you would expect.
Total control over your website.
# CSS Shadow Gradients
creator of fullPage.js
Because we 💖 shadows
To use the CSS gradient shadow you'll be using a pseudo-element, plus a parent with relative position and a background color.
The CSS Shadows Gradient Generator was a tool that I was missing.
I then found a simple way to get gradient shadows by using the linear-gradient function for the CSS background property.
Now, it was time to make the world a better place and make it easy for others too.
A shadow gradient is a shadow that contains a transition between two or more colors.
The use of gradients is a very popular trend in web design nowadays. They are usually used as a way to grab the attention of the visitor or to create a modern kind of look. They are used in logos, apps, web design, and practically anywhere nowadays.
Traditionally gradients have been used mainly on backgrounds due to technical restrictions. But nowadays there are workarounds to bring gradients into shadows too.
Ironically, a shadow gradient is not using any CSS shadow property.
You've probably heard of `box-shadow` and `text-shadow` properties. Both are used to create shadows in elements or texts.
However, these properties won't provide developers the flexibility they need in order to create shadows with gradients on them.
The shadow is created using the CSS pseudo-elements `:before` and `:after` and applying a gradient background on each of them.
Then, you apply a blur effect on those elements and position them behind the main element by using absolute positioning and the `z-index` property.
So, put in other words. The gradient shadow is a blurred gradient background that is behind the main element.
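A minimal sketch of the technique (the class name and the blur/gradient values here are illustrative, not from the original article):

```
.box {
	position: relative;
	background: #fff; /* the parent needs its own background color */
}
.box::before {
	content: '';
	position: absolute;
	top: 0; right: 0; bottom: 0; left: 0;
	z-index: -1; /* sit behind the main element */
	background: linear-gradient(20deg, black, yellow);
	filter: blur(25px); /* the blur turns the gradient into a "shadow" */
}
```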
To create a gradient of colors in CSS you have to use the linear-gradient function within the `background` property. It looks like this:
```
background: linear-gradient(20deg, black, yellow);
```
So, in this case, we would have a gradient from black to yellow at a 20deg angle.
You can change the angle for a direction instead. Such as `to top left` or `to bottom right`. And you can also add as many colors as you want.
Something else you can do is change the distribution of the colors. Let's say that you want 80% of the gradient to be black, and only 20% yellow. Then you can add this parameter too.
Here's another example with 3 colors, a direction instead of a degree and different color distributions:
```
background: linear-gradient(to top left, black 20%, yellow 60%, green 100%);
```
Here's the codepen:
Creator of fullPage.js
# What I Use_
Date: 2018-01-01
Here's a list of things I use as a developer & creator. (Some affiliate links):

Equipment _
* Macbook Pro 15 inches - Model 2018
* Samsung T7 SSD Drive 2TB (Super Fast!)
* WD 4TB (Backups)
* MagicForce Mechanical Keyboard
* BenQ Monitor 24 inches
* Large Mouse Pad
* Gaming Chair
* Smooth Chair Wheels

Recording _
* Samsung Q2U Microphone
* Tonor Microphone Arm
* Sigma Lens 16mm f/1.4
* Canon EOS M50 Mark II with 15-45mm

Developing _
* Visual Studio Code (IDE)
* Beyond Compare (compare files)
* Sourcetree (Github client)
* Sight (Chrome extension)
* Browserstack (testing)
* Postman (for APIs)

Screen Capture _
* Xnapper (screenshots)
* Loom
* Kap

Others _
* TablePlus (database)
# Which fullPage.js license should I choose?
There are different types of fullPage.js licenses. Depending on the type or purpose of your website you can choose one or another. Here are the most common cases:
Non-commercial purpose:
* Develop my personal/friend's website
* Develop multiple websites
* Website for NGO, NPO, or charity organizations
* None of the previous options
Commercial purpose:
* Develop my client's website
* Create a downloadable or installable product (HTML template, SDK, toolkit, etc.)
* Develop multiple client's websites
* None of the previous options
FullPage.js extension license terms are different. Learn which license you need for your extension.
## Develop my personal/friend's website
If you plan to use fullpage.js on your portfolio or a friend's website, then you can choose any of the licenses from the pricing page:
* Hobby
* Professional
* Business
The Hobby license will allow you to use fullPage.js for 1 product and 1 developer, while the Professional and Business licenses are designed for unlimited products. The Professional license allows up to 8 developers to use it, and the Business one, unlimited developers.
If you want to enjoy the benefits of premium support you should consider acquiring a Business license. It includes support by email, Skype/Zoom, or WhatsApp/phone regarding integration questions.
Also, your queries will be considered a high priority and the response time will be a maximum of 48 hours, usually less than 12. The Hobby and Professional licenses also include basic support, but just through the GitHub issues forum and require an isolated reproduction.
## Develop my client's website
Fullpage.js can be used on your customer or client website. Regardless if you are a company, entrepreneur, or self-employed the most appropriate licenses for this case are the Professional and Business licenses.
If this is your situation, there are 3 important features to ponder for your final decision:
* The number of developers working on the project: a developer or a team member is someone on your team working on a project that uses fullPage.js. This includes implementing, integrating, or changing the code. The Professional license allows its usage for up to 8 developers, while the Business is for unlimited developers.
* Email Premium support: only the Business license offers 3 months of premium support (Email / Zoom / call). Also, your queries will be considered a high priority and the response time will be a maximum of 48 hours, usually less than 12. The Hobby and Professional licenses also include basic support, but just through the GitHub issues forum, and require an isolated reproduction.
* Mobile apps: if you plan to use fullpage.js to develop a mobile app you would have to choose the Business license.
Both licenses allow you to use the code on an unlimited number of domains, servers, and projects, regardless of the environment type (production, staging, QA, etc.)
Take into account that this is not your option if your client's website will include a downloadable product developed with fullPage.js (like an HTML or WordPress template). In that case you'd have to choose the OEM/SaaS license.
## Develop multiple websites
If you plan to create more than 1 product that includes FullPage.js, you have to choose between the Professional or the Business licenses. These licenses allow you to use fullPage.js on an unlimited number of domains, servers, and projects, regardless of the environment type (production, QA, etc.)
With them, you do not have to worry about the number of websites. Check the case for personal use or the case for commercial clients to see which one fits you better.
There is an exception when developing multiple websites: when you plan to use fullPage.js in downloadable or installable products that require distribution rights. If you think this is your case, see the section below, please.
## Create a downloadable or installable product
If you plan to include fullPage.js in downloadable or installable products like WordPress themes (for example, the ones in marketplaces like ThemeForest, TemplateMonster, etc.) HTML templates, as part of a commercial interface builder, SDK, or toolkit, you should choose the Commercial OEM license that grants you distribution rights.
These Commercial OEM licenses are customized for each customer. A distributor fee is an annual or perpetual fee that is paid per product. Contact us if you need this license, providing us with as much information as you can about your future product:
* What's the name and URL of your product? Short description of it.
* What's its price?
* How many customers/downloads are you anticipating over a year?
* How many developers will be directly working with our software?
* How will your product(s) be commercialized?
## Website for NGO, NPO, or charity organization.
Non-governmental organizations (NGO), non-profit organizations (NPO) as well as charity organizations can choose any of the licenses from the pricing page (Hobby, Professional, or Business)
There are no discounts available for this kind of organization but you can use a free license. The only requirements are that your website/project is open-sourced and licensed under General Public License version 3 (GPLv3) or a compatible one.
If you are not sure whether you can use the GPLv3 license in your project or not, we strongly encourage you to ask a lawyer about it. We are not lawyers and we can't assist you here.
## None of the previous options
If your specific case is not covered by any of the previous options, contact us and explain your personal situation. We will guide you so you can choose the license that best fits your circumstances.
## And what about Fullpage.js Extensions?
Good question! For fullpage.js extensions, check the guide on how to choose a license for your extension, where we explain this in detail.
# Can I hide fullPage.js license key?
Unfortunately, there's actually no way to completely hide front-end code. People can always inspect it from the browser developer tools.
The most you can do is obfuscate the source code and try to make it harder for others to find your fullPage license key.
However, you won't have to worry anyway! If someone steals your key, they will be the ones subject to the possible legal consequences, not you.
# How to fix 'Unlicensed extension' red error message
When you purchased the extension you received a file named "How to use it.html". In it you will find a detailed step-by-step guide on how to use and activate your extension.
If you are using a fullpage.js extension, you may find that, once you publish your site, a red message appears in the upper-left corner. This red error banner can show up for the following reasons:
* Your fullpage.js extension hasn't been activated yet
* You are not using the right key for your extension
* You haven't typed the activation key properly
* You haven't spelled the option `activationKey` properly
* You are using the extension on a different domain than the one you activated it for
## You have not activated your fullpage.js extension yet
Check the documentation on how to activate a fullPage.js extension and how to use them.
In order to activate an extension, you have to generate an activation key for a specific domain.
Before doing so we recommend testing them, as once activated you won't be eligible for a refund. Note that all fullPage extensions can be used without being activated. This allows you to fully test them before you decide to link with a specific domain and make your page public.
In order to generate an activation key, you'll have to follow the steps explained here. Then you'll have to add your activation key (when you initialize fullPage) by adding the option `[extension]Key` .
For example, if you have purchased the FadingEffect extension, you would have to add the following code:
```
new fullPage('#fullpage', {
fadingEffect: true,
fadingEffectKey: 'ACTIVATION KEY SHOULD BE HERE',
});
```
## You are not using the right key for your extension
You may have written down the license key on the `[extension]Key` option, instead of the activation key. Both keys are different:
* You get the license key when purchasing any extension.
* You get the activation key when activating your extension for a particular domain (unless using the Business License). In order to generate it, you need the license key for that particular extension.
The license key is composed of 4 groups of characters, while the activation key has a variable length and no groups. It is important not to confuse one with the other.
## You haven't typed the activation key properly
The activation key has a variable length. Sometimes the final character is a symbol like an equals sign ( `=` ) or a question mark ( `?` ). When you double-click the activation key to copy it, the last character won't get selected and you might end up pasting an incomplete activation key in your code.
Check again the activation key sent to your email to see if this is your case.
## You haven't spelled the option 'activationKey' properly
The fullPage option `[extension]Key` is spelled with a capital 'K'. If you write a lowercase 'k' you will get the red banner:
* `parallaxkey` - wrong
* `parallaxKey` - right
## You are using the extension on a different domain than the one you activated the extension for
Each activation key is linked to one domain and one domain only (unless you are using a Business license). If you are trying to use your extension on a different domain, you will get the error.
Remember that once you activate your extension you cannot change it for a new domain or de-activate it. If you want to use it on more domains you'll have to upgrade to the Professional or Business licenses. Of course, you will get the money for your previous license back ;). Just contact us if you want to upgrade.
If you still have problems, try using this online fullPage.js inspector to check your site.
# Can I change the activated domain of my extension?
Unfortunately, you cannot change the domain of your fullPage.js extension. Once you activate your extension for a specific domain, it will be valid forever and there's no way for us to deactivate it. Note that you got a warning about this during the activation process for your extension.
We strongly recommend trying the extension before generating the activation key. This way you can fully test it and, if for any reason it doesn't meet your requirements, ask for a refund within 30 days of the purchase.
If the extension is not activated you will see a red warning message on your website, but the extension will still work like a charm ;)
## I activated my Hobby license without knowing this... what can I do now?
If you've activated your extension and you have no activation keys left, you'll have to purchase another license for 1 domain (Hobby), or upgrade your license to Professional or Business if you want to be able to activate more domains.
If you upgrade, contact us about it and we will refund your previous license.
You can read more about the different licenses for each extension on the fullPage.js extensions page.
Basically, this is how it currently looks:
* Hobby license - 1 domain
* Professional license - 5 domains
* Business license - Unlimited domains
# I do not have a credit/debit card. Can I use PayPal to make the payment?
Yes, you can use PayPal to purchase our products in addition to a credit or debit card payment.
We use a secure and well-known platform for payment: Gumroad. That means that you don't have to worry about keeping your card's data secure. We won't have access to your card details.
# Migration from fullPage v3 to fullPage v4
fullPage.js v4 is almost fully compatible with fullPage.js v3.
There are only a few changes that might affect you depending on your configuration:
## Breaking changes
These are the main breaking changes:
### 1. The option `licenseKey` is not compatible
fullPage.js version 3 and fullPage.js version 4 use a different kind of license.
In order to use fullPage.js version 4, you'll need to acquire a new one by buying it from the fullPage pricing page.
You can still use version 3 instead if you prefer not to update it.
### 2. The option `verticalCentered` no longer creates a wrapper
It now uses flexbox to align the content vertically instead of `display: table-cell` .
# fullPage.js v3
```
.fp-section.fp-table, .fp-slide.fp-table {
display: table;
table-layout:fixed;
width: 100%;
}
.fp-tableCell {
display: table-cell;
vertical-align: middle;
width: 100%;
height: 100%;
}
```
```
<div class="section fp-table" style="height: 503px;">
    <div class="fp-tableCell" style="height: 503px;"> <!-- dynamically created wrapper -->
        Vertically centered
</div>
</div>
```
# fullPage.js v4
```
.fp-table{
display: flex;
flex-direction: column;
justify-content: center;
width: 100%;
}
```
```
<div class="section fp-table">
    Vertically centered
</div>
```
### 3. Extensions for v3 won't be compatible with v4
The way the license is validated has changed, and most extensions have required changes to adapt them to the new version 4.
Nevertheless, there will be a 50% discount for previous customers and those who purchased extensions in the past 3 months will be able to update for free. (We are still working on this, so the update won't be available right away)
### 4. Sections & slides no longer use an inline `height`
fullPage.js v3 sections/slides used an inline attribute with the height of the viewport ( `height="850px"` ). fullPage.js v4 sections/slides now use the automatically calculated value of `100vh` or `100%` (if `scrollOverflow` is necessary).
### 5. The option `v2compatible` has been removed
This option provided full compatibility support for callbacks using the signature/params of fullpage.js version 2.
### 6. The option `scrollOverflowOptions` has been removed
Because fullPage.js no longer uses the external vendor library, this option was also removed.
### 7. Support for IE 9 has been dropped
No comments :) Feel free to use fullPage.js v3 if you need support for legacy web browsers.
fullPage.js now supports all modern browsers and IE 11 (yes, we still want to provide the best experience for everybody).
## New features in fullPage.js v4
### 1. Customizable navigating arrows
Left and right arrows for slides are now customizable. You can pass the HTML code in an array of 2 values in the option `controlArrowsHTML`: the first position for the left arrow, the second for the right one.
```
controlArrowsHTML: ['<div class="fp-arrow"></div>', '<div class="fp-arrow"></div>']
```
### 2. New option `observer`
When `true` , fullPage.js will automatically detect changes in the DOM and re-initialize itself to adapt to them. Adding new sections/slides dynamically, or scrollable content inside them, is now super easy.
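A minimal initialization sketch (the `#fullpage` selector is the usual convention from the docs):

```
new fullpage('#fullpage', {
    observer: true // re-initialize automatically when the DOM changes
});
```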
### 3. Hiding sections is now supported
fullPage.js will detect sections getting hidden when resizing the viewport and completely ignore them when hidden.
### 4. Normal page after fullPage.js
You will be able to scroll beyond the fullPage.js container and have a normal page underneath it.
### 5. Added a `trigger` param in all callbacks
```
afterLoad: function(origin, destination, direction, trigger){
console.warn("afterLoad. Trigger: " + trigger );
},
```
Now you can know what action is triggering the scroll. The `trigger` value can be:
* `slideArrow`
* `verticalNav`
* `horizontalNav`
* `keydown`
* `wheel`
* `menu`
### 6. The scrollOverflow option won't require an external file
fullPage.js v3 required the `scrolloverflow.min.js` file on top of fullpage.js to use this feature.
fullPage.js v4 won't require this external file anymore, saving the additional 37Kb and creating a better user experience by using the default scrolling bar.
Now you'll be able to add/remove content dynamically and the scrollbar will be automatically created or removed for you. Nothing to do from your side.
On top of that, you'll be able to fully control the scroll of the scrollable section and even track its position. More on that now.
### 7. Added new callback onScrollOverflow
This allows you to get the position of the scroll when scrolling inside a section/slide (when using `scrollOverflow: true` ).
```
onScrollOverflow: function(section, slide, position){
console.log(position);
},
```
### 8. Added new callback `beforeLeave`
It allows you to capture the event before giving it permission to move to the next section. Returning `false` in this callback will cancel the movement.
```
let cont = 0;
new fullPage('#fullpage', {
    // example preventing scroll until we scroll 4 times
    beforeLeave: function(origin, destination, direction, trigger){
        cont++;
        return cont === 4;
    }
});
```
### 9. The option `scrollOverflowReset` admits more values
`scrollOverflowReset` now admits non-boolean values too ( `true` , `false` , `slides` , `sections` ). Using `slides` , for example, will only reset the scroll position to the top when changing to another horizontal slide, and not when sliding vertically.
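For example, a sketch of resetting the scroll only between horizontal slides:

```
new fullpage('#fullpage', {
    scrollOverflow: true,
    scrollOverflowReset: 'slides' // reset scroll position only when changing slides
});
```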
### 10. New scrollbar style option
When using the scrollbar, as we are now using the default scroll, it can look a bit less stylish on Windows devices. You can choose to use a more subtle scroll bar, like the one used on Apple devices, by turning this new option on.
### 11. New methods `getScrollY()` and `getScrollX()`
These two methods will provide a way to get the Y and X position of the whole page.
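A quick usage sketch, assuming fullPage.js has already been initialized and exposes the global `fullpage_api`:

```
// Log the current scroll position of the whole page
console.log(fullpage_api.getScrollY()); // vertical position
console.log(fullpage_api.getScrollX()); // horizontal position
```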
You can read more about them in the fullpage.js documentation.
## FAQs
* fullPage Suddenly Getting "Unlicensed Extension"
* fullPage Suddenly Stopped Working And Page Broke
* Why update to version 4
* How to find out what version you are using
# Can I get a discount?
If you are a student, a non-governmental organization (NGO), or a non-profit organization (NPO), then you might be wondering if there is any chance to get a discount on fullpage.js products.
## fullpage.js library discounts
We do not provide discounts for fullPage.js. The Hobby license couldn't be cheaper for a lifetime product.
However, notice that fullPage.js is under the GPLv3 license, which means you can request a free license for fullpage.js if your project is open source and licensed under GPLv3 or any compatible license. We check all requests to make sure you are eligible, but it might take some time, as we usually do this once a month.
## fullpage.js extensions discounts
We do provide some discounts for fullPage.js extensions as long as you:

### Purchase all the extensions

We provide the extension bundle at a very reduced price. This package includes all fullPage.js extensions: current ones, and even future ones!

You'll be able to activate each of your extensions separately for a different domain, so you can choose where to use each of the extensions.

### Purchase more than 2 extensions

If you purchase more than 2 extensions, you will get the following discounts:
* 15% discount for the Hobby license.
* 23% discount for the Professional license.
* 15% discount for the Business license.
Just contact us and let us know which extensions you need. We'll provide you with more details on how to proceed.
# Can I use fullpage.js on my client's website?
As a general rule, yes, you can use any of our fullPage licenses on your customer website.
Learn which license should you choose depending on your project purpose.
Just remember that fullpage.js licenses are non-transferable. By definition, you will be the one who owns the license and who has the right to use it.
If your client/company wants to own the license, they should be the ones buying it.
To read more about it, check out the fullPage.js license terms and conditions.
# How to autoplay vertical section in fullPage.js
You can play sections automatically in fullPage.js if you wish to do so. This way the carousel will autoplay itself and move to the next section every X seconds. Kind of like an automatic slider or carousel.
This is not a feature that comes out of the box with fullPage.js but you can definitely make it happen by using fullPage.js methods and callbacks. fullPage.js is ready for it!
Of course, you can also use autoplay for horizontal slides in fullpage.js, so check that out too if interested.
Before starting with the explanation, a codepen is worth more than a thousand words, so here you have an example you can copy-paste.
The way it works is quite simple. Basically, we are going to tell fullPage.js that every time it lands in a new section, we have to wait for 1 second ( 1000 milliseconds ) and then scroll down to the next section. And, if there are no sections left, we will scroll down to the 1st section in some kind of infinite loop.
So, how do we go about it? Simple too. We will be using the `afterLoad` callback to detect when we land on a new section. Then we will use the JavaScript function `setTimeout` to add a delay of 1000 milliseconds before telling fullpage.js to move to the next section with the method `fullpage_api.moveSectionDown()`.
This is how it looks:
```
afterLoad: function (origin, destination, direction) {
    g_interval = setTimeout(function () {
        fullpage_api.moveSectionDown();
    }, 1000);
}
```
Notice how we are saving the timer ID in a variable called `g_interval` . The purpose is to make sure that if the user scrolls up or down to another section, the delay (1 second in this case) will reset itself and start from 0 again. To do so, we clear it when leaving the current section by using the callback `onLeave` .
```
onLeave: function(origin, destination, direction){
clearInterval(g_interval);
},
```
And the last thing: we also used the fullpage.js option
```
continuousVertical: true
```
to make sure that once we reach the last section at the very bottom fullPage.js will keep on scrolling down to go back to the first section and creating this way the infinite loop.
And this is what the whole code looks like:
```
var g_interval;
new fullpage('#fullpage', {
    licenseKey: 'YOUR KEY HERE',
    sectionsColor: ['yellow', 'orange', '#C0C0C0', '#ADD8E6'],
    continuousVertical: true,
    onLeave: function(origin, destination, direction){
        clearInterval(g_interval);
    },
    afterLoad: function (origin, destination, direction) {
        g_interval = setTimeout(function () {
            fullpage_api.moveSectionDown();
        }, 1000);
    }
});
```
## Preventing user scroll
If you want to disable the scroll of the user so the slider will not allow interactions and keep on scrolling, then you'll have to call the method
```
fullpage_api.setAllowScrolling(false);
```
after fullPage.js initialization.
```
new fullpage(.......)
fullpage_api.setAllowScrolling(false);
```
Here's a codepen:
# How can I upgrade my fullPage.js extension?
You might have purchased a fullpage.js extension and now you want to upgrade your license. For example, you purchased the Hobby license for one domain and now you want to activate a new domain. You have two options:
* You can buy a new Hobby license for your extension
* You can upgrade to the Professional or Business license.
Here are the steps you have to follow in order to request an upgrade of your plan:
* Purchase the upgraded license (Professional or Business) of your extension (not of fullPage.js!)
* Contact us and let us know that you want to upgrade. Try to include in your email your licenseKey and the email used to purchase it so we can identify your product.
* Done! We'll then proceed to refund you the previous purchase.
The refund terms of fullpage.js and its extensions do not apply in this case.
Note that the upgrade won't be considered a new purchase. The 30-day refund policy takes into account the date of purchase of the very first product. If more than 30 days have passed since then, you won't be eligible for a refund on the upgraded product.
# Can I use my fullPage.js license on multiple projects?
Yes, you can.
If you want to use fullPage.js, for multiple websites you have to choose the Professional or Business licenses. These 2 licenses allow you to use the code on an unlimited number of domains, servers, and projects, regardless of the environment type (production, QA, etc.)
This is different from the OEM/SaaS license and the Hobby License. The hobby license is only valid for 1 domain. When using the OEM/SaaS license you'll need a license per project that requires distribution rights.
# How to use lazy load for images and backgrounds in fullPage.js
If you wonder why your current lazyload library doesn't work on fullPage.js we have you covered here.
The reason why it doesn't work is because fullPage.js won't trigger the `scroll` event as it doesn't actually "scroll" the page in the conventional way.
Instead, fullPage.js simulates the scroll by using CSS transitions or javascript.
## Easy way
The easiest way to get your current lazyload technique to work together with fullpage.js is by using the fullPage.js option `scrollBar:true` . This way fullPage.js will show a scrollbar as other pages do and your libraries will work as expected.
## Using the fullPage.js lazy load option
fullPage.js provides a lazy load option by default so you do not have to add any extra scripts to your page. In fact, it will work even better in some scenarios such as lazy loading horizontal slides too or having full compatibility with its extensions.
If you want to go this route, all you have to do is enable the lazy load option `lazyLoading` (it is active by default) and use the attribute `data-src` for the source of your image files instead of the conventional `src` . Here's an example:
```
<div class="section">
    <img data-src="image.png" />
    <video>
        <source data-src="video.webm" type="video/webm" />
        <source data-src="video.mp4" type="video/mp4" />
    </video>
</div>
```
In this scenario, the image and the video in this section will only load once we reach this section.
It will actually start loading them whenever you leave the previous section for a faster load.
## Lazy loading backgrounds
When you use CSS backgrounds, you have to modify your CSS to get them to lazy load.
First of all, we will make sure to add a class, for example `lazy-loaded` every time we arrive at a new section. This way all sections we visit will now contain this class. The ones we didn't visit won't have it.
```
new fullpage('#fullpage', {
onLeave: function(origin, destination, direction) {
destination.item.classList.add('lazy-loaded');
}
});
```
All we have to do now is create our "conditional CSS", so we only show an image when the class is actually added to our section:
```
.section1{
    background-size: cover;
}
.section1.lazy-loaded{
    background: url(image.jpg);
}
```
Of course, you won't have to add this "condition" to the backgrounds that are visible on page load.
Here's a CodePen example with lazy load backgrounds that we've created for you.
# How can I skip a section/slide?
Fullpage.js provides you a few methods (see them in action here)
Two of these methods will help you to move to specific sections or slides in your website:
* `moveTo(section, slide)` (codepen example): scrolls the page to the given section and slide
* `silentMoveTo(section, slide)` (codepen example): like the previous one, but it performs the scroll without animation
You can bind one of these methods to a button, or use the fullpage.js callbacks to fire it whenever you want.
In order to skip sections we can use the `moveTo` method in the `onLeave` callback. Here's a codepen.
And here's the relevant code:
```
new fullpage('#fullpage', {
sectionsColor: ['yellow', 'green', 'purple', 'orange'],
//events
onLeave: function(origin, destination, direction) {
skipPages(origin, destination, direction)
}
});
function skipPages(origin, destination, direction){
    var destinationIndex;
    if(destination.item.classList.contains('skip')){
        if(direction === 'down'){
            // destination.index is 0-based; moveTo is 1-based, so +2 skips one section
            destinationIndex = destination.index + 2;
        }else{
            destinationIndex = destination.index;
        }
        console.log(destinationIndex);
        setTimeout(function(){
            fullpage_api.moveTo(destinationIndex);
        });
    }
}
```
In order to make it work for us we would have to add the class name `skip` on the sections that we want to skip.
For example:
```
<div id="fullpage">
<div class="section">Section 1 </div>
<div class="section skip">Section 2 </div>
<div class="section">Section 3 </div>
</div>
```
# How to disable fullpage.js in mobile devices
If you want to turn off fullPage.js scrolling on mobile devices you will have to use the `responsiveWidth` or `responsiveHeight` options on the fullPage initialization. (Using WordPress?)
Remember, nowadays there's no such thing as a "mobile" device anymore. There are touch screen computers with high resolutions, and touch screen tablets with higher resolution than some computers.
So, the way to deal with small screen devices is dealing with resolution thresholds.
Here's an example of how to disable fullPage.js for devices with less than 800px width:
```
new fullPage('#fullpage', {
responsiveWidth: 800
});
```
This way your page scroll will be fully controlled by fullPage in normal conditions, meaning it will scroll snap to sections and adjust them so they will always fit the viewport.
However, if the resolution is lower than 800px width fullpage.js auto snap behavior will get disabled and you'll be able to scroll as you usually do on any website.
Note that you might want to use the `viewport` meta tag on your HTML header part so it will be zoomed accordingly. Check out more about the viewport meta tag here.
```
<meta name="viewport" content="width=device-width,initial-scale=1">
```
## Turning off full-height sections
You might also want to have sections that are smaller or bigger than the viewport on those same devices where you disabled fullpage. No problem!
Add the class `fp-auto-height-responsive` on those sections where you want to disable full-height, and they will take the height defined by your section/slide content.
Here's an example:
```
<div class="section">Whole viewport</div>
<div class="section fp-auto-height-responsive">Auto height only on responsive</div>
```
Check out this codepen example.
## Additional styles on responsive
In addition, fullpage.js will automatically add the class `fp-responsive` on the `body` element of the site when you enter into responsive mode. Meaning, when we reach a width/height lower than the threshold we defined on fullPage initialization.
This way we can use this class instead of creating media queries if we prefer.
```
.section1{
background: red;
}
/* Applied only on responsive */
body.fp-responsive .section1{
background: yellow;
}
```
## Detecting mobile mode on JavaScript
When entering into the responsive mode fullPage will also trigger a callback that we can use to add our own logic on the JavaScript side:
```
new fullpage('#fullpage', {
afterResponsive: function(isResponsive) {
if (isResponsive) {
console.log("I'm in responsive mode");
}
}
});
```
## Disabling fullPage.js on Mobile on the WordPress plugins
The procedure is just the same, but in this case you'll be able to do it all from the WordPress panel.
It's as easy as adding a value in pixels on the Responsive Width option. For example `750` (or any other threshold you prefer). So, in this example, when the viewport X dimension (the window width, or the screen width of the mobile device) is less than 750px, your page will behave as a normal scrolling website without fullPage.js snap scroll.
To turn off full-height sections, check out the Responsive Auto-Height option for the sections.
# Where to use fullPage license
In order to use FullPage legally, you need to have a license key.
Once you purchase a fullPage license you will receive an email from Gumroad with a license key. (If it's not on your inbox check the SPAM folder.)
You might be wondering where and how to use this license number in your source code.
Well, it is pretty easy. As detailed in the fullPage documentation, you'll just have to add the option `licenseKey` when initializing fullpage.js: `licenseKey` (default `null` ). This option is compulsory.
* Be aware of the spelling: the letter "K" is in uppercase! ( `licenseKey` )
* If you do not add this option (or you spell it wrong), the javascript console will warn you with this error: Fullpage.js version 3 has changed its license to GPLv3 and it requires a licenseKey option.
Here is the relevant code that you have to add to your source code:
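A minimal initialization sketch (the selector and key value are placeholders):

```
new fullpage('#fullpage', {
    licenseKey: 'YOUR_KEY_HERE'
});
```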
If you are wondering how you can hide the licenseKey on your fullPage.js initialisation, the short answer is: you can't.
And if you are a bit confused about which license you need, check this article regarding which fullPage.js license you should get.
# How to autoplay horizontal slides in fullPage.js
If you want the horizontal slides in your website to autoplay every X seconds, this article is for you.
We'll be explaining here how you can configure fullPage.js to force the automatic animation of one or multiple sliders on your page.
If you prefer to go straight to the example and skip the explanation here's a codepen with the demo:
And now let's go with the explanation for those who want to understand what's going on there.
We will be using the `afterLoad` callback for that. This means that whatever we write inside this function will be executed every time we change from one vertical section to another. As always, you can find all this information on the fullpage.js documentation.
So, the callback looks like this:
```
// executed every time we land on a different vertical section
afterLoad: function (origin, destination, direction) {
// your code
}
```
Why are we using `afterLoad` and not `afterSlideLoad` ? Because `afterSlideLoad` doesn't get fired on the first slide of the section. So, every time we land on a different vertical section we are going to check if this section contains any horizontal slider, and when it does, we are going to set an interval every 1 second.
Intervals allow us to repeat the execution of any code inside them based on a given time on milliseconds.
Here's an example of what this function looks like:
```
// executed every 1000 milliseconds
setInterval(function () {
console.log("hello");
}, 1000);
```
Now, what do we want to call every 1 second? In this case, because we want the slider to move to the right repeatedly, we will call the fullpage.js method `fullpage_api.moveSlideRight()`. What this does is exactly what its name describes: moving the slider to the right. So, for example, from Slide 1 to Slide 2. Now, this interval never stops, and it will start another instance of itself on every section with a slider. So, in order to fix this we will store a reference to it in a variable we can call `g_interval` so we can clear it when landing on every vertical section. This way we will remove the previous interval and create a new one starting from second 0 whenever we need it.
Now, let's combine it all and integrate it into a fullPage.js initialization:
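A sketch of that combined initialization, assuming the standard fullPage.js markup and a 1-second lapse (the variable name `g_interval` is our own choice):

```
var g_interval;

new fullpage('#fullpage', {
  // executed every time we land on a different vertical section
  afterLoad: function (origin, destination, direction) {
    // remove the interval started on the previous section, if any
    clearInterval(g_interval);

    // only autoplay when this section contains a horizontal slider
    if (destination.item.querySelectorAll('.fp-slides').length) {
      g_interval = setInterval(function () {
        fullpage_api.moveSlideRight();
      }, 1000);
    }
  }
});
```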
This code will make sure that every single horizontal slider on your page autoplays itself.
But... what if you only want some horizontal sliders to autoplay?
## Autoplay only some sliders and not all
In this case, we need to specify somehow which ones we want to play automatically. In order to do so, we can use the HTML markup to make things easier and cleaner.
We will add the attribute `data-auto` on every section with horizontal slides that we want to autoplay. We won't check its content so it won't matter what value you use for it, but it is a common practice to use booleans, so let's use `data-auto="true"` for it.
Now we will be checking if the section where we land has such attribute and we will only start the automatic animation when it's the case.
Here's the code:
```
<div id="fullpage">
<div class="section">
<div class="slide" data-anchor="slide1">Slide 1.1</div>
<div class="slide" data-anchor="slide2">Slide 1.2</div>
<div class="slide" data-anchor="slide2">Slide 1.3</div>
</div>
<div class="section" data-auto="true">
<div class="slide" data-anchor="slide1">Slide 2.1</div>
<div class="slide" data-anchor="slide2">Slide 2.2</div>
<div class="slide" data-anchor="slide2">Slide 2.3</div>
</div>
<div class="section">Section 3</div>
<div class="section">
<div class="slide" data-anchor="slide1">Slide 4.1</div>
<div class="slide" data-anchor="slide2">Slide 4.2</div>
<div class="slide" data-anchor="slide2">Slide 4.3</div>
</div>
</div>
```
JavaScript:
```
var g_interval;
// 1000 milliseconds lapse
const lapse = 1000;
// inside the afterLoad callback:
const shouldAutoPlay = destination.item.hasAttribute('data-auto');
const hasSlides = destination.item.querySelectorAll('.fp-slides').length;
if (shouldAutoPlay && hasSlides) {
  g_interval = setInterval(fullpage_api.moveSlideRight, lapse);
}
```
And here's the codepen example:
# Generating an invoice for fullPage.js
Do you need a quote for your fullPage product? No problem! You can generate your own invoice by clicking the link at the bottom of your receipt!
Just go to your purchase confirmation mail, scroll to the bottom and you'll find a button to generate your invoice.
Note that you'll be able to modify it as many times as you need.
You will be redirected to a page on which you can enter your full name and business or personal information such as VAT number. A PDF file will get generated.
If you do not find your purchase confirmation mail, please, contact us and we will re-send it to you as soon as possible.
More information about requesting invoices here.
If you have any question or problem regarding invoices and billing, please contact <EMAIL> to reach the selling platform. Gumroad is the platform responsible for the payment and they'll be able to help you out. Make sure to provide them with the details of your purchase such as email or license key.
# Where to get fullPage.js files
Wondering how to get fullpage.js files? You have different ways to do it, you can choose the one that you prefer or the one that best fits your needs.
## Download it or get the code on Github
You can simply download fullPage.js from its main page as a zip file. It will include examples and a README with the docs.
You can also get the code directly from Github. It will be exactly the same.
## Use a CDN
If you prefer to use a CDN to enjoy the advantages of caching and fast response times across the globe, you can use any of the most popular ones such as CDNJS, unpkg, or jsDelivr. It doesn't really matter much which one you use, as most of them have wide adoption and a great content delivery network.
You can use the CDN to get both the JavaScript file and the CSS file.
## Use package managers like npm or bower
You can also use services like npm and bower (the latter is not really recommended anymore).
More information about it in the docs.
Once you download the files you will have to use your license key on the fullpage initialisation.
# Use fullPage responsive mode and keep snap scroll effect
By default, fullPage.js assumes that you want to disable fullPage.js snap scroll feature on small screen devices. This is what usually most people want and what makes things easier for developers. However, you can also use fullPage.js `autoScrolling` feature on responsive mode and enjoy some of the responsive mode features, such as making use of the Responsive Slides extension that turns horizontal slides into vertical sections.
So, how do we do this?
We first have to define when to enter into responsive mode by using the fullpage.js options `responsiveWidth` or `responsiveHeight` . When the viewport size is smaller than the value in pixels defined in those breakpoints, then fullPage.js will enter into the responsive mode, which will automatically disable the snap scroll. Now, it will scroll normally like any other page. If you want to keep the snap scroll feature while on responsive mode, then use the method
```
fullpage_api.setAutoScrolling(true)
```
inside the `afterResponsive` callback.
Here's an example:
```
new fullpage('#fullpage', {
licenseKey: 'YOUR KEY HERE',
responsiveWidth: 1000,
afterResponsive: function(isResponsive){
if(isResponsive){
fullpage_api.setAutoScrolling(true);
}
}
});
```
Demo online and codepen here:
# How to fix error: fullPage.js version 3 has changed its license to GPLv3
This error message appears when you are using fullPage.js version 3 without the required license.
Notice fullPage.js now is on version 4. Check the available fullPage.js licenses and learn which one you should choose.
In order to get rid of it, you will need to get a valid license key for fullpage.js. Then, as detailed on the fullPage.js documentation, you'll have to use the option `licenseKey` when initializing fullpage.js and introduce your key between quotes.
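For example, with the vanilla JavaScript initialization (`YOUR_KEY_HERE` being a placeholder for your own key):

```
new fullpage('#fullpage', {
  licenseKey: 'YOUR_KEY_HERE',
});
```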
Or, if you are using jQuery:
```
$('#fullpage').fullpage({
licenseKey: 'YOUR_KEY_HERE',
});
```
## I put the license but I still get the error
If you already added the `licenseKey` option, but you still see the warning message, check if you have spelled it properly. The fullPage option `licenseKey` is spelled with a capital 'K'. If you write a lowercase 'k' you will get the undesired error.

* `licensekey` - wrong
* `licenseKey` - right
Also, make sure there's no whitespace between the license key quotes and that you introduced the full key.
If you've done all those checks and you still get the error make sure you are not initializing fullPage.js more than once on the page.
These are the most common cases and they will very probably solve your issue.
If you still have problems, check your website using the online fullpage.js license inspector.
# Make an element scrollable inside a fullpage.js section or slide
If your page has scrollable elements like modals or containers with overflowing text, then you'll need to tell fullPage.js about it. This way, fullpage.js will allow normal scrolling inside those elements and will prevent fullpage.js sections from scrolling up or down in these cases.
You'll be able to scroll within certain elements you define. Scrolling inside those elements will get enabled without affecting the sections/slides scroll.
This will work for both, mouse wheel scrolling and swiping on touch screen devices.
## Using normal scroll elements
fullPage.js has an option called `normalScrollElements` that you can use in these cases. Here's an example:
```
new fullpage('#fullpage', {
normalScrollElements: '#element1, .element2'
});
```
As the documentation points out, it expects a string with the JavaScript selectors for those elements. In this case our elements will have `id="element1"` and `class="element2"`.

Warning! This option should not be applied to any section/slide element itself. If you need the content of the whole section or slide to be scrollable, use the option `scrollOverflow: true`. See this example.
You can play with this example using the normalScrollElements option:
# Make a section scrollable in fullpage.js using scrollOverflow
fullPage.js provides a way to enable the normal scroll so you can scroll inside sections or slides when the content is bigger than the height of the section. Check this scrolling example to see how it looks.
If you want a scrollable section you have to set the `scrollOverflow` option to `true` during the initialization of fullPage.js
```
new fullpage('#fullpage', {
//options here
scrollOverflow: true
});
```
By default, fullPage.js will create a scrollbar on large sections or slides. If you want to prevent this behavior in specific sections or slides, use the class `fp-noscroll` on them. For example:
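A sketch of that markup, assuming the standard fullPage.js section structure:

```
<div id="fullpage">
  <!-- gets an in-section scrollbar if its content overflows -->
  <div class="section">Section 1 with long content</div>
  <!-- fp-noscroll: never gets the in-section scrollbar -->
  <div class="section fp-noscroll">Section 2</div>
</div>
```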
The scroll bar will automatically get disabled in responsive mode and the section will take the height that it needs to accommodate its content. This can be useful for the responsive mode of mobile devices.
## Make an element scrollable, not the full section
If your page has scrollable elements like modals or containers with overflowing text, you might want to be able to scroll within those elements without triggering the scroll of the whole page or the slides on the active section.
In this case, you would have to use a different option named `normalScrollElements` . Check in this article how to make an element scrollable inside a fullpage.js section or slide.
You might also find these articles relevant:
# Find what fullPage.js version you are using
Date: 2022-04-12
fullPage.js has recently experienced a major update. It moved from version 3 to version 4 introducing some new features, fixing existing bugs, and improving its code.
You can see the relevant changes of fullPage.js version 4 on the releases page.
## How to check what fullPage.js version you are using
The easiest way to find out what version of fullPage.js you are using is by opening the fullpage files loaded on your page. Both, the JS and CSS file.
You'll notice they have some credits on the very top containing the fullpage version number:
If for any reason those credits were removed, you can still find your version number.
From the web browser Dev Tools, run the following command in your JS console:
`fullpage_api.version`
This will return you the version number of the JavaScript file:
Note this won't return the version of the CSS file, so make sure to check this by looking at the credits of that file too.
If you don't want to get into technicalities, there's an easy way to know what version of fullPage.js you are using by checking when you acquired the commercial license.
Let's get into it.
## You are using the WordPress version if...
If you have purchased the official WordPress plugins: fullPage.js For Elementor or fullPage.js For Gutenberg.
## You are using fullPage.js version 4 if...
If you have discovered fullPage.js or you purchased fullPage.js (or its extensions) after the 12th of April 2022, then you are using fullPage.js version 4.
## You are using fullPage.js version 3 if...
If you were using fullPage.js before the 12th of April 2022, then you were using fullPage.js version 3.
## Are fullPage.js version 3 and version 4 compatible?
They are not fully compatible. There are a few breaking changes that might affect your page.
You have to take these into account before updating your site to fullPage.js version 4.
Check out the list of breaking changes here.
Some of the most important ones are:
* fullPage.js extensions of version 3 are not compatible with fullPage.js version 4.
* fullPage.js license keys for version 3 are not working on fullPage.js version 4.
## Can I get fullPage.js extensions for version 3?
For those who want to keep on using fullPage.js version 3 instead of switching to fullPage.js version 4, we still provide fullPage.js extensions for it.
You can find them together with the extensions for version 4 when you acquire them from the fullPage.js extensions website.
You will find them inside a ZIP file called `for-fullpage-v3.zip` .
You might also find these articles relevant:
# Tracking sections and slides with Google Analytics and fullPage.js
You can track your site's visitors using Google Analytics by using PageView events. Just use the `ga` function to send a `pageview` event by using the fullpage.js callbacks, such as `afterLoad` or `afterSlideLoad` .
Here's an example:
```
function sendPageView(pageTitle){
ga('send', 'pageview', { 'page': document.location.href, 'title': pageTitle });
}
```
Notice that you'll have to first include the Google Analytics script as usual and load it before the fullPage.js initialization. The send event only gets recorded if you first define the Google Analytics ID.
So, a full example could look like this:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Your Page</title>
</head>
<body>
<!-- Google Analytics -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-XXXXX-Y', 'auto');
ga('send', 'pageview');
</script>
<!-- End Google Analytics -->
<script>
function sendPageView(pageTitle){
ga('send', 'pageview', { 'page': document.location.href, 'title': pageTitle });
}
</script>
</body>
</html>
```
Make sure to replace `UA-XXXXX-Y` with your own Google Analytics ID for your page.
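The example above defines `sendPageView` but still needs fullPage.js to call it on navigation. A minimal sketch of that wiring, assuming fullPage.js is already loaded on the page, could be:

```
new fullpage('#fullpage', {
  // fired when landing on a vertical section
  afterLoad: function (origin, destination, direction) {
    sendPageView(destination.anchor || 'section-' + destination.index);
  },
  // fired when landing on a horizontal slide
  afterSlideLoad: function (section, origin, destination, direction) {
    sendPageView(destination.anchor || 'slide-' + destination.index);
  }
});
```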
You might also find these articles relevant:
# fullPage Suddenly Getting Unlicensed Extension [Solution]
If you are changing your site or just purchased your extension and you are getting an "Unlicensed extension" message, then check out How To Fix Unlicensed Extension Message.
If, however, you have not changed anything on your site and you are suddenly getting this message, here's probably why.
## The problem
You are probably using a CDN like https://unpkg.com/ to load the JS and CSS files for fullPage.js.
```
<!-- Loading the fullpage.js CSS file -->
<!-- Loading the fullpage.js CSS file -->
<link rel="stylesheet" type="text/css" href="https://unpkg.com/fullpage.js/dist/fullpage.min.css" />

<!-- Loading the fullPage.js extensions JS file (instead of fullpage.min.js) -->
<script type="text/javascript" src="https://unpkg.com/fullpage.js/dist/fullpage.extensions.min.js"></script>
```
The problem with this kind of CDN link is that it does not specify the version of the script to load, so the CDN assumes you want to load the latest version.
You can check this by yourself by accessing (https://unpkg.com/fullpage.js/dist/fullpage.min.css) in your browser. Notice how you get redirected to another URL containing the version number on it. Something like this: https://unpkg.com/[email protected]/dist/fullpage.min.css
This is where the problem is and the main reason why it's not usually recommended to use this kind of CDN links.
You usually want a CDN link for a specific version of the library because future versions might introduce breaking changes, especially when switching major versions, like 3.1.2 to 4.0.5. (Major versions tend to reflect breaking changes, following the Semantic Versioning guidelines.)
## The solution
If you were using fullPage.js version 3, for example, then make sure to specify the latest version 3 on the CDN URL.
The latest version 3 is 3.1.2 (You can check this from the fullPage.js releases page).
So the links you would need to use would be the following ones:
```
<!-- Loading the fullpage.js CSS file -->
<!-- Loading the fullpage.js CSS file -->
<link rel="stylesheet" type="text/css" href="https://unpkg.com/[email protected]/dist/fullpage.min.css" />

<!-- Loading the fullPage.js extensions JS file (instead of fullpage.min.js) -->
<script type="text/javascript" src="https://unpkg.com/[email protected]/dist/fullpage.extensions.min.js"></script>
```
## But... I want to update to version 4?
Version 4 is a major update and therefore it's not free to update from version 3.
If you want to update to version 4 you must first check the breaking changes introduced in version 4 to see if any affect you.
If you are not sure what version you are using, check out how to find out what version of fullPage.js you are using.
Then, you will have to:
* Purchase a new license for fullPage.js version 4 from the pricing page.
* Re-purchase any of your current extensions for version 3, but this time, for version 4 from here.
* Activate those extensions for the specific domains (you'll get the instructions by email).
## Why update to version 4?
# fullPage Suddenly Stopped Working And Page Broke [Solution]
If you are changing your site or just purchased your extension and you don't manage to make fullPage.js work, check out the examples provided on Github or see the Codesandbox demos provided for every method and callback in the documentation. (Like this one)
If, however, you have not changed anything on your site and you are suddenly getting errors on your page or fullPage.js broke your site, here's probably why:
You are using CDN and the link is automatically loading the latest version, forcing an unwanted update on your site that can lead to issues when the new update contains breaking changes.
You can read more about it and how to solve it in this other article.
# How to delay scroll & wait until animation is done on fullPage.js
If you want to delay fullPage.js animation so the scrolling can take place once an animation is finished, for example, you can use the `beforeLeave` callback.
If you return a `false` value on this callback, the scroll won't take place.
You can use this feature to add your own logic within this callback and control when the scroll will get triggered.
Here's a quick example that will require you 4 scrolls before allowing you to scroll to the next section:
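A sketch of that counter, assuming a standard initialization (the variable name `scrollCount` is our own choice):

```
var scrollCount = 0;

new fullpage('#fullpage', {
  beforeLeave: function (origin, destination, direction) {
    scrollCount++;

    // cancel the scroll until the 4th attempt
    if (scrollCount < 4) {
      return false;
    }

    // allow the scroll and reset for the next section
    scrollCount = 0;
    return true;
  }
});
```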
So, how can you apply it so you can scroll after the animation takes place?
Here's a demo:
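One way to wire it up, sketched under the assumption that your own animation code can flip a flag when it finishes (`startMyAnimation` is a hypothetical helper, not a fullPage.js API):

```
var animationDone = false;

new fullpage('#fullpage', {
  afterLoad: function (origin, destination, direction) {
    animationDone = false;
    // hypothetical: run your animation and flip the flag when it ends
    startMyAnimation(destination.item, function () {
      animationDone = true;
    });
  },
  beforeLeave: function (origin, destination, direction) {
    // returning false cancels the scroll while the animation runs
    return animationDone;
  }
});
```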
# Can I use fullPage to build commercial products to sell?
If you own a business or are just a personal developer making products and services, you may be wondering if you can use fullPage.js or the official WordPress plugins for fullPage (Gutenberg, Elementor, and very soon Divi) to create commercial products for you or your clients that you can then re-sell or distribute in any form.
The short answer is yes. However, you will need to contact us to acquire a Commercial OEM license for that as detailed on the pricing page.
This license will grant you distribution rights, meaning you can commercialize your product using fullPage.js. This license gives you permission to include Fullpage in:
- Downloadable or installable products: like WordPress themes, HTML templates, as part of a commercial interface builder, SDK, toolkit, or mobile app.
- SAAS products: Software as a service products where users need to login to get access to the services your application/website provides. Such as web builder, a mail delivery service, a marketing app online, etc.
Commercial OEM licenses are customized for each customer depending on your product price, the number of possible clients/downloads, etc.
If you think you need this license, please, contact us.
# Why my scroll based animations or events don't work?
Some WordPress plugins or JavaScript components that depend on scrolling mechanisms such as sticky menus, animations on scroll, parallax effects, etc. might not work if the browser's scroll bar is disabled.
In order to make them work on the fullPage.js plugin for WordPress, make sure to enable the scroll bar on the fullPage.js configuration by setting to `active` the option "Scrolling > Scroll Bar".
This option determines whether to use scrollbar for the site or not. In case of using scroll bar, the autoScrolling/snap functionality will still work as expected. The user will also be free to scroll the site by dragging the scroll bar and fullPage will fit the section in the screen when scrolling ends.
* Use fullPage responsive mode and keep snap scroll effect
* Make an element scrollable inside a fullpage.js section or slide
* Make a section scrollable in fullpage.js using scrollOverflow
* fullPage Suddenly Stopped Working And Page Broke [Solution]
* How to delay scroll & wait until animation is done on fullPage.js
# Can I move my WordPress plugin license to another domain?
If you are moving your website to another server or another domain or you feel obliged to do it for whatever reason, do not worry! You can transfer and exchange your license from your old domain to your new one.
You just have to follow 2 steps:
* Deactivate your license on your old server.
* Activate your same license on your new server.
The process is the same for all WordPress plugins and it is detailed on their documentation too:
* Deactivate a domain in fullPage.js plugin for Elementor.
* Deactivate a domain in fullPage.js plugin for Gutenberg.
Here's a video tutorial explaining the whole process:
# I can't use or see any extensions after installing the WordPress plugin
Did you install a fullpage.js plugin for WordPress but you cannot use or activate any extension?
That's normal. Fullpage.js extensions do not come with the plugin. They are sold separately. So, if you haven't purchased any extensions, none of the options for extensions will be available on the control panel. They will all be disabled and you won't be able to use them.
If you want to use any extension first you would have to purchase it from the fullpage.js extensions page.
Get a HUGE discount with the extensions bundle!
Once you get an extension you'll receive a license number that you can use on WordPress to use your extension. You can read more about it in the documentation for each of the WordPress plugins:
* Install/activate extensions in fullPage.js plugin for Elementor.
* Install/activate extensions in fullPage.js plugin for Gutenberg.
# I can't see my header and footer on my WordPress site
By default, fullPage for Elementor or Gutenberg comes with an empty page template. This means that, while fullPage is enabled on your WordPress site, an empty page with no theme dependency will be loaded. That's why your header or footer does not appear on your site.
If you want to show your header and footer again, you just have to disable the 'Enable Empty Page Template' option. This will load your theme template again. Take into account that after disabling this option you might need to know CSS and JavaScript to change the behavior of the page.
Here are two tutorials that explain how to disable the "Empty Page" option.
## Disable "Empty Page Template" on Elementor
## Disable "Empty Page Template" on Gutenberg
# I can't scroll to the footer of my WordPress site
As long as the scrollbars of your WordPress site are disabled, only the content inside the sections will be visible. That's the reason why you can't scroll to the footer when using fullpage.js in WordPress.
So the solution is as simple as moving the footer inside a section. But do not worry, you do not have to do it manually. Our WordPress plugins for Elementor and Gutenberg already offer an option to do it: the "Show Theme Footer" option.
This option will try to move the specified site footer as an auto-height section (meaning not full-screen) at the very end of your page. Remember that the 'Theme Footer Selector' option should be given as a valid JavaScript selector, for example `.footer` , `footer` , `#footer` etc.
You might also find these articles relevant:
# Sections are getting cut off after installing the WordPress plugin
Some of the content of your section might be left outside the viewport if:
* Scrollbars are disabled.
* And the content of your section is bigger than the screen height.
The solution is to enable the Scroll Overflow option.
This will create an in-section scrollbar in your section/slide in case its content is bigger than the height of it. When set to true, your content will be wrapped by the plugin.
Check the documentation about the Gutenberg ScrollOverflow option and Elementor ScrollOverflow option
You have different options to customize the behavior of the scrollbars:

* Show Scroll Overflow Scrollbars: the scrollbars will be visible when enabled. Otherwise they will not be visible, but you will still be able to scroll inside the section.
* Fade Scroll Overflow Scrollbars: fades the scroll bars when not used.
* Interactive Scroll Overflow Scrollbars: the scrollbars can be clicked and dragged with the mouse when enabled.
# How to remove anchors from URL on my WordPress plugin
In the fullPage.js plugins for WordPress, anchors will be visible by default in the URL of your site: your-domain.com/`#anchor`
If you prefer to remove them, you can use the options Disable Anchors or Lock Anchors. This way you'll remove the anchor part from the URL.
However, note that when not using anchors you won't be able to have internal links to different sections/slides. So, for example, if you have a link or button using an anchor (for example: `your-domain.com/#page2`), this link won't slide to the section assigned to the anchor `#page2` anymore. That's because there's no way to identify sections with anchors without actually using anchors :)
If you still want to remove anchors from your URLs and you are not using internal links, you have two ways of doing this:
## Disable Anchors
This parameter determines whether to enable or disable all section anchors. This is the option we recommend for most people.
## Lock Anchors
This option determines whether anchors in the URL will have any effect at all in the library. This means that internal anchors can still be used but they won't have any effect on the fullPage.js behaviour if the URL gets updated with an anchor.
The only difference with the Disable Anchors option is that you can still use the anchors internally. So, for example, if you were using the fullpage.js callback options such as `onLeave` or `afterLoad`, you will still be able to use the anchor variable. It can be useful for people with basic JavaScript knowledge who are actually using anchors in their code.
You might also find these articles relevant:
* How to fix 'Unlicensed extension' red error message
* How to fix error: fullPage.js version 3 has changed its license to GPLv3
* Can I use fullPage to build commercial products to sell?
* Can I move my WordPress plugin license to another domain?
* I can't use or see any extensions after installing the WordPress plugin
# Error: FullPage Block Contains Unexpected or Invalid Content
Do you see the error "This block contains unexpected or invalid content" instead of your FullPage block?
This error can appear after updating WordPress, Gutenberg, or even third-party plugins. But not to worry! The solution to it is quite simple.
In order to solve it, please, follow these steps:
* First, before doing anything, make a backup of your database. This is just for security reasons.
* Then use the `Attempt Block Recovery` button. This should solve the problem. Careful! Do NOT click on the `Convert to Classic Block` button.
# Can I use Responsive Mode on my WordPress plugin?
If you want a normal scrolling website on mobile or tablet and fullPage snap scroll on desktop, you just have to activate the Responsive Mode.
You can find this option under Settings > fullPage > Design.
For example, you can set it to `767`. This means that for screens with a viewport of less than `767px`, a normal scrolling website will be shown. And by a normal scrolling website I mean a website with normal scroll instead of the fullPage.js snap scroll (called autoScrolling in fullPage.js terms).
## Changing CSS styles on responsive mode
When you use this Responsive mode the class `fp-responsive` is added to the `body` tag in case you want to use it for your own responsive CSS. If you want to change the color of the background on responsive mode, you can easily use conditional CSS styles to do so, for example:
```
body{
background: black;
}
body.fp-responsive{
background: white;
}
```
And you can go a bit further and even use this class in child elements. Let's say that you want to change the nav menu font size on responsive mode. You could do this:
```
nav{
font-size: 12px;
}
/* On Responsive Mode */
.fp-responsive nav{
font-size: 20px;
}
```
## Removing full-height restriction on responsive mode
Do you need to allow sections to be bigger than the viewport height on small screen devices? No problem! FullPage has you covered too!
You can make use of the Responsive Auto-Height option for the sections.
In order to use it, click on one of your sections to see the options and you'll find the Responsive Auto Height option on the fullPage menu:
Now, as long as you define the Responsive Width or Responsive Height options mentioned above, your sections will become as big (or small) as your content needs. They won't be forced to be full-height anymore and you'll enjoy a fully responsive experience!
## Turning horizontal sliders into vertical sections
The responsive mode won't touch your horizontal sliders. They will still work the same way.
However, if you need them to become vertical sections you can do this by using the Responsive Slides extension for that (Also included in the bundle with all extensions). This extension will convert those horizontal slides into vertical sections so you can scroll them normally too. Ideal if you don't want your visitors to miss any piece of valuable information.
You can turn the extension on in `Settings > fullPage > Extensions` and you are ready to go!
You might also find these articles relevant:
# Can I use fullpage.js with Divi?
Check also the fullpage.js WordPress plugins for Elementor and Gutenberg! As well as this theme by Themify.
Yes, fullpage.js is compatible with Divi. Very soon we will release a WordPress plugin for Divi. If you want to be notified when it is released just subscribe here.
Meanwhile you can manually integrate fullpage.js in your website. All the information you need to do it is in the fullpage.js documentation. You can use one of the available examples as a template.
Notice that fullpage.js will not work just by uploading the fullpage `.zip` file as if it was a WordPress plugin.
# How to change domain for fullpage.js plugin for Elementor
On fullPage plugin for Elementor you can change your licensed domain for another one.
You might want to do it because you are migrating your page, because you want to test it on your staging domain before going live, or because you've reached the limit of domains and you want to deactivate an old domain.
Here's a video tutorial explaining how:
The steps are the following:
* On the Elementor menu on the left, choose "fullPage.js Plugin for Elementor"
* Under "Plugin Activation" click on the button "Deactivate"
In order to activate your new domain do the reverse:
* On the Elementor menu on the left, choose "fullPage.js Plugin for Elementor"
* Under "Plugin Activation" introduce the license key that you got on your purchase confirmation email and click on the "Activate" button.
# Use fullpage.js on Oxygen Builder for WordPress
Check the official fullpage.js WordPress plugins for Elementor and Gutenberg! (and soon for Divi)
Fullpage.js is fully compatible with Oxygen Builder for WordPress. We do not offer an official plugin for it (yet...) but you can manually integrate fullpage.js into your website following the fullpage.js documentation.
Oxygen also offers a video-tutorial to use fullpage.js on its editor:
And if you want a whole theme, we also offer a theme by Themify.
# Are Elementor and Gutenberg plugins compatible with any WordPress builder?
No. The WordPress plugin for Elementor is only compatible with the Elementor builder editor. The same happens with the Gutenberg plugin, which only works with the Gutenberg editor.
However, we will soon release a WordPress plugin for Divi! You can subscribe here to get notified of the launch.
Remember that you can also use fullpage.js with other editors, but you would have to integrate fullpage.js manually following the fullpage.js documentation.
For example, Oxygen provides a tutorial to use fullpage.js with its builder.
# Can I install the Elementor or Gutenberg plugin in any theme?
Yes, all official fullpage.js WordPress plugins (Elementor, Gutenberg, and soon Divi) can be installed with any theme.
However, if you need to use theme features, such as a header or a footer, you might need to know CSS and maybe JavaScript, depending on your theme and your design.
# Can I get a refund for the WordPress plugin?
If you've purchased the official WordPress plugins for Elementor or Gutenberg you can get your money back as long as you meet these requirements:
* The refund request is made within thirty (30) days from the purchase date.
* You did not ask for support.
In order to get a refund simply ask us for a refund and make sure to provide us your license key (or the email used for the purchase).
Note that the previous conditions do not apply in case you asked for reimbursement because you upgraded to a superior license. In that case, check how to upgrade to another license for the WordPress plugins for fullPage.js.
# Can I upgrade my WordPress plugin license?
If you have purchased the official fullPage.js plugin for Elementor or Gutenberg and you purchased the Hobby or the Professional license, then you can opt for an upgrade if you wish to do so.
In fact, it is very easy. There are no special conditions. You just have to follow these steps:
* Proceed to purchase the upgraded license (Professional or Business license)
* Activate your current domain with the new license. (important if you don't want to get an unwanted message when we proceed with the refund)
* Contact us and let us know that you want to upgrade. Try to include in your email your licenseKey. This will help us identify your product.
* We'll then proceed to refund you the previous purchase.
Notice however that you won't be eligible for a refund if your previous product was purchased more than 30 days ago.
Also, when you upgrade to a Professional license, you'll have one domain less than usual as the old one will get moved to your new license.
# Use fullPage.js on Beaver Builder for WordPress
Check the official fullpage.js WordPress plugins for Elementor and Gutenberg! And soon for Divi!
Some people ask us if they can use fullPage with Beaver Builder for WordPress.
And the answer is yes! As long as you integrate it by yourself there should be no problem.
Unfortunately, right now we don't offer an official plugin for Beaver, which will make it a bit more difficult for you. You will need some technical knowledge but you should be able to implement it as long as you follow the required structure and initialize it properly as detailed in the fullpage.js documentation.
And if you want a whole theme, you can use the theme by Themify.
Our plugins are fully maintained, updated, fully integrated with fullpage.js extensions, and provide top quality support. If you don't mind switching to another WordPress editor there's no question. Those will save you a ton of time and you'll be able to fully customize fullpage.js to your specific needs. And if you are not happy within 30 days we provide you a full refund.
# Use fullPage.js extensions on Webflow Builder
If you don't know yet how to use fullPage.js in webflow, check the tutorial and the video.
In order to use fullPage.js extensions in Webflow, all you have to do is import the content of the JavaScript file that you acquired with the purchase of the extension.
When using any of the paid plans that Webflow provides, you can make use of what they call custom code.
Go to the `Pages` menu on the top left. Under our `Home` page settings we'll see the `Custom Code` menu element. If you followed the steps on the previous tutorial on how to use fullPage.js in Webflow, you'll have some content in both the `<head>` portion and the `</body>` tag. The content in the `</body>` part should look similar to this:
```
<script type="text/javascript" src="https://unpkg.com/fullpage.js/dist/fullpage.min.js"></script>
<script>
// fullpage.js initialization
new fullpage('#fullpage', {
scrollingSpeed: 1000,
verticalCentered: false
});
</script>
```
Let's say I want to use the parallax extension for fullpage.js. We have to:
* Replace `fullpage.min.js` with `fullpage.extensions.min.js`, as detailed in the fullPage.js documentation.
* Copy the content of our extension file (in this case, `fullpage.parallax.min.js`) and paste it above the script including `fullpage.extensions.min.js`.
* Use the fullPage.js option our extension requires. In this case, `parallax: true`.
* Add our license key (don't do this before testing, as once activated it can't be refunded). In this case: `parallaxKey: 'OUR KEY'`.
* Add any other options our extension might require. In this case, the parallax extension depends on elements with the class `fp-bg`.

Our `</body>` block should look similar to this now:
```
<script>
[CODE PASTED FROM THE EXTENSION FILE]
</script>
<script type="text/javascript" src="https://unpkg.com/fullpage.js/dist/fullpage.extensions.min.js"></script>
<script>
// fullpage.js initialization
new fullpage('#fullpage', {
scrollingSpeed: 1000,
verticalCentered: false,
parallax: true,
parallaxKey: 'my_key_here',
// optional
parallaxOptions: {.....},
});
</script>
```
And that's it! Now you have it working!
Most extensions only require a single option to be set to `true` in the fullPage.js initialization, but others might require extra elements or provide many more configurable options. In those cases, you'll find more details on how to use them on their own specific documentation page.
Here's the documentation page for each of those extensions:
# How to Integrate Webflow Editor CMS and Webflow
If you try to integrate fullPage.js with the Webflow Editor CMS, you'll soon realize that the scroll is blocked and there seems to be a compatibility issue between both.
You won't be able to update the collections and use the CMS interface.
You can use this code in order to fix the conflict between the Webflow Editor CMS and fullpage.js:
```
<script>
Webflow.push(function () {
// only initialize fullPage.js when the CMS editor is not active
if (Webflow.env('editor') === undefined) {
// Your current fullPage.js initialisation should be here
new fullpage('#fullpage', {
licenseKey: 'YOUR_LICENSE_KEY_HERE'
// more options here
});
}
});
</script>
```
This issue was first discussed on the fullPage.js forum but has also been treated on the Webflow forum:
* https://discourse.webflow.com/t/problems-with-cms-editor-and-fullpagejs/39504/3
* https://discourse.webflow.com/t/prevent-loading-js-in-editor-mode/41916
# I can't install the fullpage.js-master.zip file on WordPress
If you have tried to upload the fullpage.js-master.zip in your WordPress control panel as you do with other WordPress plugins, you'll have noticed an error message pop up on your screen.
That's because the fullpage.js-master.zip file is a JavaScript component and not a WordPress plugin. You can't install it as if it was a WordPress plugin.
But not to worry! We still have solutions for you. You can install one of our official fullPage WordPress plugins:
* Fullpage.js WordPress plugin for Elementor
* Fullpage.js WordPress plugin for Gutenberg
* Fullpage.js WordPress plugin for Divi (soon!)
Or if you prefer a whole template, you can try this one from Themify.
If you have already bought a fullPage.js Commercial license from the pricing page by mistake because you thought it was for the WordPress plugin, then contact us and we will provide you a full refund for it. Just make sure to first purchase one of the WordPress plugins we provide. (As fullPage.js licenses are non-refundable)
# Can I use fullPage.js on Carrd?
If you are creating your one-page site in the Carrd web builder you might want to create fullscreen sections with a snap scroll effect. fullPage.js is definitely the top library to use for this, but... is it possible to integrate fullPage.js on Carrd.co? And if so, how?
If your Carrd plan allows you to insert custom code, then it is highly probable that you can use fullPage on Carrd. It is as simple as downloading fullPage.js (or loading it directly on your site using npm) and then following the instructions given in the fullpage.js documentation. Some HTML examples are also available so you can have a reference on how fullPage.js works.
If you find any obstacles or you have any doubts during the process, just ask on the Github issues forum.
# Can I use fullPage.js on Duda?
If you have chosen the Duda professional website builder to develop your own website, you might be wondering: can I use fullPage.js on my Duda site?
We have not tried it yet, but as long as your Duda plan allows you to add custom code, you can easily integrate fullpage.js on your site manually.
How? Just download fullPage.js (or find the code on Github) and then implement it on your site following the fullpage.js documentation.
If you decide to give it a try, remember that we will be glad to listen to all your potential doubts on the Github issues forum.
# Can I integrate fullPage.js and PagePiling?
Can PagePiling.js and fullPage.js be used together? Honestly, they were not designed with that in mind, but it may be possible. All you need to know in order to integrate them is on the fullpage.js documentation and the pagePiling documentation.
However, note this is definitely not something provided out of the box by either fullPage.js or pagePiling.js, and support for it is not provided in any form.
It's also worth knowing that you can actually emulate the pagePiling.js effect with fullPage.js by using it together with the Parallax extension (using an offset percentage of 100%).
We created this tutorial to recreate the Tumblr website where we use the exact same effect that pagePiling.js provides but using fullPage.js. You can check this online demo to see the final result.
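As a rough sketch, the initialization for that pagePiling-like effect could look as follows. The `parallaxOptions` values are assumptions for illustration; see the Parallax extension documentation for the exact settings used in the tutorial:

```javascript
const options = {
  licenseKey: 'YOUR_LICENSE_KEY',   // placeholder
  parallax: true,                   // enable the Parallax extension
  parallaxKey: 'YOUR_PARALLAX_KEY', // placeholder
  // an offset percentage of 100% makes sections stack like pagePiling.js does
  parallaxOptions: { percentage: 100 },
};
// In the browser: new fullpage('#fullpage', options);
```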
# Can I integrate fullPage.js and multiScroll.js?
Are multiScroll.js and fullPage.js compatible? Can I combine them on my website?
As it happens with pagePiling.js, multiScroll and fullPage were not designed to be merged. However, that doesn't mean it is impossible. You can try to combine them. We provide you with the required documentation to do so: the fullpage.js documentation and the multiScroll documentation.
However, notice that you are on your own in this task. They were not designed to be combined, which is why support for it is not provided in any form.
# Use fullpage.js with Wow.js animations
fullPage.js by default hides the scrollbar, making it impossible for libraries relying on the scroll event to work as expected.
To fix this, all you have to do is enable the scrollbar in fullPage.js by using the option `scrollBar: true`. This way the fullPage.js snap effect will still work and you'll be able to enjoy Wow.js animations. If you still prefer to keep the scrollbar hidden, as the animation will be a bit smoother, you can use fullPage.js callbacks such as `onLeave` or `afterLoad` to add the classes Wow.js requires to trigger the animation.
Here's an example:
```
afterLoad: function(origin, destination, direction){
$(destination.item).find('.to-animate').addClass('animated fadeInLeftBig');
},
```
Or if you do not use jQuery:
```
afterLoad: function(origin, destination, direction){
destination.item.querySelectorAll('.to-animate').forEach(function(item){
item.classList.add('animated', 'fadeInLeftBig');
});
},
```
You can find more about this topic on the Github issues forum as well as on Stackoverflow.
# How to make parallax effect work on fullPage.js
If parallax is not working on your fullPage.js site, it is because the parallax effect, like many other plugins that depend on the scrolling of the site, listens to the JavaScript `scroll` event. fullPage.js doesn't actually "scroll the site", as there's no actual scrollbar; it simulates the scrolling of the page by changing the `top` or `translate3d` property of the page. In order to get any third-party parallax component working with fullPage.js you'll need to use the fullPage.js option `scrollBar: true` (or `autoScrolling: false` if you are willing to sacrifice the snap effect).
For example:
```
new fullpage('#fullpage', {
scrollBar: true
});
```
Additionally, if you want to simplify things and rely on fullPage alone, you can make use of the Parallax Extension available for fullPage.js.
It is designed to create a parallax effect for the whole background of your sections and slides.
You might also find these articles relevant:
* I do not have a credit/debit card. Can I use PayPal to make the payment?
* Use fullPage responsive mode and keep snap scroll effect
* Make an element scrollable inside a fullpage.js section or slide
* Make a section scrollable in fullpage.js using scrollOverflow
* fullPage Suddenly Stopped Working And Page Broke [Solution]
# Use fullPage.js on Weebly
You might have a Weebly site where you would like to use fullPage.js. Is it possible to implement it? Well, the truth is that we have not tried it. We would love to do it, but there are many website builders and our time (probably like yours) is limited. However, you can always try to implement it by yourself.
Remember that you can actually test fullpage.js without any key for free. You'll get an error in the JS console but you can still use it. It is easy to get the fullpage.js files: you can download the zip file or just load the required files from a CDN.
If you decide to implement it on your own, please notice that none of the fullpage.js licenses include email support for questions related to the implementation of fullpage.js on web builders (such as Weebly, Wix, Squarespace, etc.). However, if you have doubts, remember that support for technical questions regarding fullpage.js usage is always guaranteed on the GitHub issues forum.
If you prefer a builder where you can easily create a beautiful scroll snap page, we recommend you fullsnap.io, our official full-page web editor. It's like Weebly, but especially aimed at creating fullPage webs without code and for free!
# Use ScrollMagic with fullPage.js
ScrollMagic and fullpage.js are fully compatible. You can use both of them together by using the fullPage.js option `scrollBar: true`.
ScrollMagic scroll interactions are based on the scroll position of the website. fullPage.js by default disables the scrollbar of the page and therefore the default scroll behaviour, making libraries like ScrollMagic and fullPage.js enter in direct conflict.
However, fullPage.js options allow you to overcome this problem. By just turning `scrollBar` on, fullPage.js will add a scrollbar like any other normal page and will perform the snap behaviour in a different way to achieve the same effect.
Here's an example of how to initialise fullPage.js to make it compatible with ScrollMagic:
```
new fullpage('#fullpage', {
licenseKey: '[YOUR LICENSE HERE]',
scrollBar: true
});
```
Here's an example of usage:
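A hypothetical browser sketch (ScrollMagic is assumed to be loaded on the page, and the selectors are made up for illustration): with `scrollBar: true`, ScrollMagic sees the real scroll position and its scenes fire as on any normal page.

```javascript
// Runs in the browser after initializing fullPage.js with scrollBar: true.
function setupScrollMagic() {
  const controller = new ScrollMagic.Controller();
  new ScrollMagic.Scene({ triggerElement: '#section2' }) // made-up selector
    .setClassToggle('#section2 .title', 'is-visible')    // toggle a CSS class on enter
    .addTo(controller);
  return controller;
}
// setupScrollMagic();
```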
Iteraptor
===
Iterating Nested Terms Like I’m Five
---
**The tiny elixir library to `iterate`/`map`/`reduce`/`filter` deeply nested structures in Elixir.**
*This package is a sibling of Ruby [`Iteraptor`](https://github.com/am-kantox/iteraptor) gem.*
TL;DR
---
* [source code](https://github.com/am-kantox/elixir-iteraptor);
* [documentation](https://hexdocs.pm/iteraptor/Iteraptor.html).
Intro
---
Iterating both maps and lists in Elixir is charming. One might chain iterators,
map, reduce, filter, select, reject, zip... Everybody with at least eight hours of Elixir experience has definitely seen (and maybe even written)
something like this:
```
~w|aleksei saverio|
|> Enum.map(& String.capitalize/1)
|> Enum.each(fn capitalized_name ->
IO.puts "Hello, #{capitalized_name}!"
end)
```
That is really handy. Things get cumbersome when it comes to deeply nested structures, like a map holding nested keywords, lists, etc. A good example would be any configuration file with nested subsections.
While Elixir provides helpers to update elements deeply inside such a term:
* [`Kernel.get_in/2`](https://hexdocs.pm/elixir/Kernel.html#get_in/2)
* [`Kernel.put_in/{2,3}`](https://hexdocs.pm/elixir/Kernel.html#put_in/2)
* [`Kernel.update_in/{2,3}`](https://hexdocs.pm/elixir/Kernel.html#update_in/2)
* [`Kernel.get_and_update_in/{2,3}`](https://hexdocs.pm/elixir/Kernel.html#get_and_update_in/2)
all the above work if and only if all the parent levels in the structure exist.
The exception is [`get_in/2`](https://hexdocs.pm/elixir/Kernel.html#get_in/2), which happily returns `nil` when asked for anything nonexistent.
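A quick `iex` illustration of that limitation:

```elixir
iex> get_in(%{}, [:a, :b])            # forgiving: missing parents yield nil
nil
iex> put_in(%{a: %{}}, [:a, :b], 42)  # fine: the parent level exists
%{a: %{b: 42}}
iex> put_in(%{}, [:a, :b], 42)        # raises ArgumentError: parent level missing
```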
The amount of questions on Stack Overflow asking “how would I modify a nested structure” forced me to finally create this library. The implementation in Elixir looks a bit more convoluted since everything is immutable and one cannot just traverse a structure down to leaves, modifying whatever needed in-place.
An iteration-wide accumulator is required.
That is probably the only example I met in my life where mutability makes things easier. As a bonus the implementation of `bury/4` to store the value deeply inside a structure, creating the intermediate keys as necessary, was introduced.
It behaves as a [proposed but rejected in ruby core](https://bugs.ruby-lang.org/issues/11747)
`Hash#bury`.
---
So, welcome the library that makes the iteration of any nested map/keyword/list combination almost as easy as the natural Elixir `map` and `each`.
• [**Iteraptor**](https://github.com/am-kantox/elixir-iteraptor)
Features
---
* [`Iteraptor.each/3`](https://hexdocs.pm/iteraptor/Iteraptor.html#each/3)
to iterate a deeply nested map/list/keyword;
* [`Iteraptor.map/3`](https://hexdocs.pm/iteraptor/Iteraptor.html#map/3)
to map a deeply nested map/list/keyword;
* [`Iteraptor.reduce/4`](https://hexdocs.pm/iteraptor/Iteraptor.html#reduce/4)
to reduce a deeply nested map/list/keyword;
* [`Iteraptor.map_reduce/4`](https://hexdocs.pm/iteraptor/Iteraptor.html#map_reduce/4)
to map and reduce a deeply nested map/list/keyword;
* [`Iteraptor.filter/3`](https://hexdocs.pm/iteraptor/Iteraptor.html#filter/3)
to filter a deeply nested map/list/keyword;
* [`Iteraptor.to_flatmap/2`](https://hexdocs.pm/iteraptor/Iteraptor.html#to_flatmap/2)
to flatten a deeply nested map/list/keyword into a flat map with concatenated keys;
* [`Iteraptor.from_flatmap/3`](https://hexdocs.pm/iteraptor/Iteraptor.html#from_flatmap/3)
to “unveil”/“unflatten” the previously flattened map into a nested structure;
* [`use Iteraptor.Iteraptable`](https://hexdocs.pm/iteraptor/Iteraptor.Iteraptable.html)
to automagically implement [`Enumerable`](https://hexdocs.pm/elixir/Enumerable.html) and [`Collectable`](https://hexdocs.pm/elixir/Collectable.html) protocols, as well as
[`Access`](https://hexdocs.pm/elixir/Access.html) behaviour on the structure.
Words are cheap, show me the code
---
### Iterating, Mapping, Reducing
```
# each
iex> %{a: %{b: %{c: 42}}} |> Iteraptor.each(&IO.inspect/1, yield: :all)
# {[:a], %{b: %{c: 42}}}
# {[:a, :b], %{c: 42}}
# {[:a, :b, :c], 42}
%{a: %{b: %{c: 42}}}

# map
iex> %{a: %{b: %{c: 42}}} |> Iteraptor.map(fn {k, _} -> Enum.join(k) end)
%{a: %{b: %{c: "abc"}}}

iex> %{a: %{b: %{c: 42}}}
...> |> Iteraptor.map(fn
...>   {[_], _} = self -> self
...>   {[_, _], _} -> "YAY"
...> end, yield: :all)
%{a: %{b: "YAY"}}

# reduce
iex> %{a: %{b: %{c: 42}}}
...> |> Iteraptor.reduce([], fn {k, _}, acc ->
...>   [Enum.join(k, "_") | acc]
...> end, yield: :all)
...> |> :lists.reverse()
["a", "a_b", "a_b_c"]

# map-reduce
iex> %{a: %{b: %{c: 42}}}
...> |> Iteraptor.map_reduce([], fn
...>   {k, %{} = v}, acc -> {{k, v}, [Enum.join(k, ".") | acc]}
...>   {k, v}, acc -> {{k, v * 2}, [Enum.join(k, ".") <> "=" | acc]}
...> end, yield: :all)
{%{a: %{b: %{c: 42}}}, ["a.b.c=", "a.b", "a"]}

# filter
iex> %{a: %{b: 42, e: %{f: 3.14, c: 42}, d: %{c: 42}}, c: 42, d: 3.14}
...> |> Iteraptor.filter(fn {key, _} -> :c in key end, yield: :none)
%{a: %{e: %{c: 42}, d: %{c: 42}}, c: 42}
```
### Flattening
```
iex> %{a: %{b: %{c: 42, d: [nil, 42]}, e: [:f, 42]}}
...> |> Iteraptor.to_flatmap(delimiter: "_")
#⇒ %{"a_b_c" => 42, "a_b_d_0" => nil, "a_b_d_1" => 42, "a_e_0" => :f, "a_e_1" => 42}
iex> %{"a.b.c": 42, "a.b.d.0": nil, "a.b.d.1": 42, "a.e.0": :f, "a.e.1": 42}
...> |> Iteraptor.from_flatmap
#⇒ %{a: %{b: %{c: 42, d: [nil, 42]}, e: [:f, 42]}}
```
### Extras
```
iex> Iteraptor.Extras.bury([foo: :bar], ~w|a b c d|a, 42)
[a: [b: [c: [d: 42]]], foo: :bar]
```
In Details
---
### Iterating
**`Iteraptor.each(term, fun/1, opts)`** — iterates the nested structure, yielding the key and value. The value returned from the function is discarded.
* *function argument:* **`{key, value}`** tuple
* *options*: **`yield: [:all, :maps, :lists, :none]`**, `:none` is the default
* *return value*: **`self`**
### Mapping and Reducing
**`Iteraptor.map(term, fun/1, opts)`** — iterates the nested structure,
yielding the key and value. The value returned from the block should be either a single value or a `{key, value}` tuple.
* *function argument:* **`{key, value}`** tuple
* *options*: **`yield: [:all, :maps, :lists, :none]`**, `:none` is the default
* *return value*: **`mapped`**
**`Iteraptor.reduce(term, fun/2, opts)`** — iterates the nested structure,
yielding the key and value. The value returned from the block should be an accumulator value.
* *function arguments:* **`{key, value}, acc`** pair
* *options*: **`yield: [:all, :maps, :lists, :none]`**, `:none` is the default
* *return value*: **`accumulator`**
**`Iteraptor.map_reduce(term, fun/2, opts)`** — iterates the nested structure,
yielding the key and value. The value returned from the block should be a `{{key, value}, acc}` tuple. The first element of this tuple is used for mapping, the second for accumulating the result.
* *function arguments:* **`{key, value}, acc`** pair
* *options*: **`yield: [:all, :maps, :lists, :none]`**, `:none` is the default
* *return value*: **`{mapped, accumulator}`** tuple
### Filtering
**`Iteraptor.filter(term, filter/1, opts)`** — filters the structure according to the value returned from each iteration (`true` to leave the element, `false` to discard.)
* *function argument:* **`{key, value}`** tuple
* *options*: **`yield: [:all, :maps, :lists, :none]`**, `:none` is the default
* *return value*: **`filtered`**
### Flattening
**`Iteraptor.to_flatmap(term, opts)`** — flattens the structure into a flat map/keyword, concatenating keys with a delimiter.
* *options*: **`delimiter: binary(), into: term()`**,
defaults: `delimiter: ".", into: %{}`
* *return value*: **`flattened`**
**`Iteraptor.from_flatmap(term, fun/1, opts)`** — de-flattens the structure from the flattened map/keyword, splitting keys by a delimiter. An optional transformer function might be called after the value is deflattened.
* *function argument:* **`{key, value}`** tuple
* *options*: **`delimiter: binary(), into: term()`**,
defaults: `delimiter: ".", into: %{}`
* *return value*: **`Map.t | Keyword.t | List.t`**
---
Iteraptable protocol
===
The protocol specifying how the respective struct might be used within [`Iteraptor`](Iteraptor.html).
**Experimental.** By implementing this protocol one might change the behaviour of nested objects regarding how they should be iterated through.
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[name(term)](#name/1)
Returns a name of the term to be represented in flatmaps
[to_collectable(term)](#to_collectable/1)
Converts a term to a collectable
[to_enumerable(term)](#to_enumerable/1)
Converts a term to an enumerable
[type(term)](#type/1)
Returns a type understood by [`Iteraptable`](Iteraptable.html#content)
Iteraptor
===
[`Iteraptor`](Iteraptor.html#content) makes the iteration of complicated nested structures (currently [`Map`](https://hexdocs.pm/elixir/Map.html)s, [`List`](https://hexdocs.pm/elixir/List.html)s
and [`Keyword`](https://hexdocs.pm/elixir/Keyword.html)s) easier.
Usage
---
#### Iterating, Mapping, Reducing
* [`Iteraptor.each/3`](https://hexdocs.pm/iteraptor/Iteraptor.html#each/3)
to iterate a deeply nested map/list/keyword;
* [`Iteraptor.map/3`](https://hexdocs.pm/iteraptor/Iteraptor.html#map/3)
to map a deeply nested map/list/keyword;
* [`Iteraptor.reduce/4`](https://hexdocs.pm/iteraptor/Iteraptor.html#reduce/4)
to reduce a deeply nested map/list/keyword;
* [`Iteraptor.map_reduce/4`](https://hexdocs.pm/iteraptor/Iteraptor.html#map_reduce/4)
to map and reduce a deeply nested map/list/keyword;
#### Flattening
* [`Iteraptor.to_flatmap/2`](https://hexdocs.pm/iteraptor/Iteraptor.html#to_flatmap/2)
to flatten a deeply nested map/list/keyword into a flat map with concatenated keys;
* [`Iteraptor.from_flatmap/3`](https://hexdocs.pm/iteraptor/Iteraptor.html#from_flatmap/3)
to “unveil”/“unflatten” the previously flattened map into a nested structure;
#### Filtering
* [`Iteraptor.filter/3`](https://hexdocs.pm/iteraptor/Iteraptor.html#filter/3)
to filter the structure according to the value returned from each iteration
(`true` to leave the element, `false` to discard.)
Summary
===
[Types](#types)
---
[option()](#t:option/0)
[options()](#t:options/0)
[traverse_fun()](#t:traverse_fun/0)
The function that might be passed to all the traversal functions.
[Functions](#functions)
---
[each(input, fun, opts \\ [])](#each/3)
Iterates the given nested structure, calling the callback provided on each
value. The key returned is an array of all the parent keys (and/or indices
in the case of an array.)
[filter(input, fun, opts \\ [])](#filter/3)
Filters the deeply nested term, optionally calling the function on filtered entries.
[from_flatmap(input, transformer \\ & &1, opts \\ [])](#from_flatmap/3)
Builds a nested structure out of the given flatmap, decomposing the names of keys and handling lists carefully.
[jsonify(input, opts \\ [])](#jsonify/2)
Produces a term ready-to-use with JSON interchange. Stringifies all keys and converts keywords to maps.
[map(input, fun, opts \\ [])](#map/3)
Maps the given nested structure, calling the callback provided on each value.
The key returned is the concatenated names of all the parent keys
(and/or indices in the case of an array.)
[map_reduce(input, acc \\ %{}, fun, opts \\ [])](#map_reduce/4)
Iteration with mapping and reducing. The function of arity `2`, called back on each
iteration with `{k, v}` pair *and* an accumulator is accepted.
[reduce(input, acc \\ nil, fun, opts \\ [])](#reduce/4)
Iteration with reducing. The function of arity `2`, called back on each
iteration with `{k, v}` pair *and* an accumulator is accepted.
[to_flatmap(input, opts \\ [])](#to_flatmap/2)
Builds a flatmap out of a nested structure, concatenating the names of keys.
Iteraptor.AST
===
[`Iteraptor.AST`](Iteraptor.AST.html#content) module traverses AST, allowing `map`, `reduce` and family.
Summary
===
[Functions](#functions)
---
[map(input, fun, opts \\ [])](#map/3)
Mapper for the AST.
[reduce(input, acc, fun, opts \\ [])](#reduce/4)
Reduces the AST with an accumulator.
Iteraptor.Array
===
Array emulation implementing [`Access`](https://hexdocs.pm/elixir/Access.html) behaviour. Index in array is zero-based.
`Array` is the "go to" array data structure in Elixir. An array can be
constructed using `Array.new/{0,1}`:
```
iex> Iteraptor.Array.new()
#Array<[]>
iex> Iteraptor.Array.new(2)
#Array<[nil, nil]>
iex> Iteraptor.Array.new([:foo, :bar])
#Array<[:foo, :bar]>
```
An array can contain any kind of elements, and elements in an array don't have
to be of the same type. By definition, arrays have *keys* in the `0..size-1` range.
Arrays are implicitly expandable, which means adding an element at index `100`
to an array currently containing one element would increase the size of the
array to `101`.
```
iex> array = Iteraptor.Array.new([:foo])
iex> Iteraptor.Array.set(array, 3, :bar)
#Array<[:foo, nil, nil, :bar]>
```
An `Array` is represented internally using the `%Array{}` struct. Note, however,
that the struct fields are private and must not be accessed directly;
use the functions in this module to perform operations on arrays.
`Array`s can also be constructed starting from other collection-type data structures: for example, see `Array.new/1` or [`Enum.into/2`](https://hexdocs.pm/elixir/Enum.html#into/2).
```
iex> Enum.into([1, 2, 3], Iteraptor.Array.new())
#Array<[1, 2, 3]>
```
`Array`s do implement [`Access`](https://hexdocs.pm/elixir/Access.html) behaviour.
```
iex> array = Iteraptor.Array.new([%{foo: 42}, %{bar: :baz}])
iex> get_in(array, [0, :foo])
42
```
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[value()](#t:value/0)
[Functions](#functions)
---
[append(array, other)](#append/2)
Appends another enumerable to the array.
[from_tuple(tuple)](#from_tuple/1)
Converts a tuple given as parameter to `array`.
[get(array, index, default \\ nil)](#get/3)
Returns the `value` at `index` in `array`, or `default` if index is out of array bounds.
[new(enumerable \\ nil, transform \\ nil)](#new/2)
Returns a new array.
[pop(array, index)](#pop/2)
Pops (deletes) `value` at `index` from `array`, setting the value at the
respective index to `nil`.
Returns a tuple containing the value removed and the new array.
[set(array, index, value)](#set/3)
Sets the `value` at `index` in `array`, expanding the array if necessary.
Returns a new array.
[size(array)](#size/1)
Returns the number of elements in `array`.
[to_list(array)](#to_list/1)
Converts `array` to a list.
[trim(array)](#trim/1)
Trims `nil` values from the tail of the `Array`. Returns a trimmed array.
[Link to this section](#types)
Types
===
[Link to this section](#functions)
Functions
===
Iteraptor.Config
===
Extra functions to deal with runtime configuration.
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[from_env(app, converter \\ & &1, key \\ :system)](#from_env/3)
Reads the application config and recursively changes all the `{:key, "VAR"}`
tuples with runtime system environment read from "VAR".
[Link to this section](#functions)
Functions
===
Iteraptor.Extras
===
Extra functions to deal with enumerables.
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[bury(term, key, value, opts \\ [into: :default])](#bury/4)
Deep store the value into the nested structure. Behaves as a [proposed
but rejected in ruby core](https://bugs.ruby-lang.org/issues/11747)
`Hash#bury`.
[each_cons(list, n \\ 2, acc \\ [])](#each_cons/3)
Behaves as `Enum.each_cons(n)` in ruby. Iterates the input producing the list of cons.
Gracefully stolen from <https://groups.google.com/forum/#!topic/elixir-lang-core/LAK23vaJgvE>
[Link to this section](#functions)
Functions
===
Iteraptor.Iteraptable
===
`use Iteraptor.Iteraptable` inside structs to make them both
[`Enumerable`](http://elixir-lang.org/docs/stable/elixir/Enumerable.html) and
[`Collectable`](http://elixir-lang.org/docs/stable/elixir/Collectable.html) and implement the [`Access`](https://hexdocs.pm/elixir/Access.html#content) behaviour:
[usage](#module-usage)
Usage
---
Use the module within the struct of your choice and this struct will be automagically granted [`Enumerable`](https://hexdocs.pm/elixir/Enumerable.html) and [`Collectable`](https://hexdocs.pm/elixir/Collectable.html) protocols implementations.
`use Iteraptor.Iteraptable` accepts the keyword parameter `skip: Access` or
`skip: [Enumerable, Collectable]`, which allows implementing only a subset of the protocols. It also accepts the keyword parameter `derive: MyProtocol`, specifying which protocol implementation(s) should be implicitly derived for this struct.
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[__using__(opts \\ [])](#__using__/1)
Enables iterating features on structs with `use Iteraptor.Iteraptable`
[Link to this section](#functions)
Functions
===
Iteraptor.Iteraptable.Unsupported exception
===
Exception raised when applying [`Iteraptor.Iteraptable`](Iteraptor.Iteraptable.html) is unsupported.
Iteraptor.Utils
===
Helper functions to update nested terms
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[deep_put_in(target, key_value, opts \\ [])](#deep_put_in/3)
Safe put the value deeply into the term nesting structure. Creates all the intermediate keys if needed.
[dig!(input, acc \\ [])](#dig!/2)
[dig(input, acc \\ [])](#dig/2)
Digs the leaf value in the nested keyword / map.
[join(input, opts \\ [])](#join/2)
Joins the array of keys into the string using delimiter.
[quacks_as_list(input)](#quacks_as_list/1)
Checks if the map/keyword looks like a normal list.
[split(input, opts \\ [])](#split/2)
Splits the string by delimiter, possibly converting the keys to symbols.
[squeeze(input, opts \\ [])](#squeeze/2)
Squeezes the nested structure merging same keys.
[try_to_list(input)](#try_to_list/1)
Gently tries to create a linked list out of input, returns input if it
cannot be safely converted to the list.
[type(input)](#type/1)
Determines the type of the given term.
[Link to this section](#functions)
Functions
===
Iteraptor.Utils.Unsupported exception
===
An exception to be thrown from banged methods of [`Iteraptor`](Iteraptor.html).
Sooner or later we'll support everything; meanwhile,
we raise `Unsupported` if something goes wrong.
GITHUB_edsomjr_TEP.zip_unzipped_bfs_01.pdf | free_programming_book | Unknown |
Graphs: 0/1 BFS
Prof. <NAME>, Faculdade UnB Gama
Characteristics of 0/1 BFS
• A specialization of Dijkstra's algorithm
• Applicable to graphs whose edge weights are all equal either to 0 or to x
• The algorithm's name comes from the case x = 1
• Complexity: O(V + E)
Queue vs. priority queue
• Dijkstra's algorithm uses a priority queue to identify the vertex u closest to s that has not been processed yet
• With the weights wi restricted to the set {0, 1}, relaxation has only two possible outcomes
• If (u, v) has weight w and dist(s, v) > dist(s, u) + w, then after relaxation either dist(s, v) = dist(s, u) or dist(s, v) = dist(s, u) + 1
• Therefore the priority queue can be replaced by a simple double-ended queue
Pseudocode
Input: a graph G(V, E) whose weights wi ∈ {0, x} and a vertex s ∈ V
Output: a vector d such that d[u] is the minimum distance in G between s and u
1. Set d[s] = 0, d[u] = ∞ for every u ≠ s, and let Q = {s} be the queue
2. While Q ≠ ∅:
(a) Let u be the first element of Q
(b) If (u, v) makes d[v] = d[u], insert v at the front of Q
(c) If (u, v) makes d[v] = d[u] + x, insert v at the back of Q
(d) Remove u from Q
3. Return d
[Worked example (slide figures): a step-by-step 0/1 BFS trace on a seven-vertex graph A–G with 0/1 edge weights, showing dist(u, A) for each vertex and the contents of the deque Q after each operation, from Q = {A} until Q = {}. Only label residue of the diagrams survived extraction, so the figures are summarized here.]
vector<int> bfs_01(int s, int N) {
    vector<int> dist(N + 1, oo);
    dist[s] = 0;

    deque<int> q;
    q.emplace_back(s);

    while (not q.empty()) {
        auto u = q.front();
        q.pop_front();

        for (auto [v, w] : adj[u])
            if (dist[v] > dist[u] + w) {
                dist[v] = dist[u] + w;
                w == 0 ? q.emplace_front(v) : q.emplace_back(v);
            }
    }

    return dist;
}
Suggested problems
1. AtCoder Beginner Contest 176, Problem D: Wizard in Maze
2. Codeforces Round #516 (Div. 1), Problem B: Labyrinth
3. OJ 11573: Ocean Currents
4. SPOJ KATHTHI: KATHTHI
References
1. Codeforces, 0-1 BFS [Tutorial]. himanshujaju's blog, accessed 2021-07-19.
2. CP-Algorithms, 0-1 BFS. Accessed 2021-07-19.
3. <NAME>; HALIM, Steve. Competitive Programming 3, 2010.
4. <NAME>. Competitive Programmer's Handbook, 2018.
5. <NAME>; <NAME>. Programming Challenges, 2003.
github.com/gofiber/jwt/v2 | go | Go | README
[¶](#section-readme)
---
### JSON Web Tokens

[](https://gofiber.io/discord)



Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [func New(config ...Config) fiber.Handler](#New)
* [type Config](#Config)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [New](https://github.com/gofiber/jwt/blob/v2.2.7/jwt.go#L72) [¶](#New)
```
func New(config ...[Config](#Config)) fiber.Handler
```
New ...
### Types [¶](#pkg-types)
####
type [Config](https://github.com/gofiber/jwt/blob/v2.2.7/jwt.go#L19) [¶](#Config)
```
type Config struct {
// Filter defines a function to skip middleware.
// Optional. Default: nil
Filter func(*fiber.Ctx) [bool](/builtin#bool)
// SuccessHandler defines a function which is executed for a valid token.
// Optional. Default: nil
SuccessHandler fiber.Handler
// ErrorHandler defines a function which is executed for an invalid token.
// It may be used to define a custom JWT error.
// Optional. Default: 401 Invalid or expired JWT
ErrorHandler fiber.ErrorHandler
// Signing key to validate token. Used as fallback if SigningKeys has length 0.
// Required. This or SigningKeys.
SigningKey interface{}
// Map of signing keys to validate token with kid field usage.
// Required. This or SigningKey.
SigningKeys map[[string](/builtin#string)]interface{}
// Signing method, used to check token signing method.
// Optional. Default: "HS256".
// Possible values: "HS256", "HS384", "HS512", "ES256", "ES384", "ES512", "RS256", "RS384", "RS512"
SigningMethod [string](/builtin#string)
// Context key to store user information from the token into context.
// Optional. Default: "user".
ContextKey [string](/builtin#string)
// Claims are extendable claims data defining token content.
// Optional. Default value jwt.MapClaims
Claims jwt.Claims
// TokenLookup is a string in the form of "<source>:<name>" that is used
// to extract token from the request.
// Optional. Default value "header:Authorization".
// Possible values:
// - "header:<name>"
// - "query:<name>"
// - "param:<name>"
// - "cookie:<name>"
TokenLookup [string](/builtin#string)
// AuthScheme to be used in the Authorization header.
// Optional. Default: "Bearer".
AuthScheme [string](/builtin#string)
// contains filtered or unexported fields
}
```
Config defines the config for JWT middleware
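The `TokenLookup` option is a `"<source>:<name>"` string. The sketch below shows, using only standard-library Go, how such a lookup string can be split and applied to request headers; the `extractToken` helper is hypothetical and not part of the middleware's API:

```go
package main

import (
	"fmt"
	"strings"
)

// extractToken splits a lookup string such as "header:Authorization"
// and pulls the token out of a header-like map, stripping the
// configured auth scheme (default "Bearer").
func extractToken(lookup, authScheme string, headers map[string]string) (string, bool) {
	parts := strings.SplitN(lookup, ":", 2)
	if len(parts) != 2 || parts[0] != "header" {
		return "", false // only the header source is sketched here
	}
	value := headers[parts[1]]
	prefix := authScheme + " "
	if !strings.HasPrefix(value, prefix) {
		return "", false
	}
	return strings.TrimPrefix(value, prefix), true
}

func main() {
	headers := map[string]string{"Authorization": "Bearer abc.def.ghi"}
	token, ok := extractToken("header:Authorization", "Bearer", headers)
	fmt.Println(token, ok) // abc.def.ghi true
}
```

The real middleware additionally supports `query:`, `param:`, and `cookie:` sources, as listed in the `TokenLookup` comment above.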
ZINARp | cran | R | Package ‘ZINARp’
October 12, 2022
Type Package
Title Simulate INAR/ZINAR(p) Models and Estimate Its Parameters
Version 0.1.0
Maintainer <NAME> <<EMAIL>>
Description Simulation, exploratory data analysis and Bayesian analysis of the p-order Integer-valued Autoregressive (INAR(p)) and Zero-inflated p-order Integer-valued Autoregressive (ZINAR(p)) processes, as described in Garay et al. (2020) <doi:10.1080/00949655.2020.1754819>.
License GPL (>= 3.0)
Encoding UTF-8
LazyData true
Imports progress, stats, utils, graphics
RoxygenNote 7.1.1
Depends R (>= 2.10)
NeedsCompilation no
Author <NAME> [aut],
<NAME> [aut],
<NAME> [aut, cre]
Repository CRAN
Date/Publication 2022-05-09 11:30:02 UTC
R topics documented:
estimate_zinarp
explore_zinarp
simul_zinarp
slesions
estimate_zinarp Parameter estimation for ZINARp models
Description
This function uses MCMC algorithms (Metropolis-Hastings and Gibbs Sampler) to generate a chain
of INAR/ZINAR(p) parameter estimators.
Usage
estimate_zinarp(
x,
p,
iter = 5000,
thin = 2,
burn = 0.1,
innovation = "Poisson"
)
Arguments
x A vector containing a discrete non-negative time series dataset.
p The order of the INAR/ZINAR process.
iter The number of iterations to be considered. Defaults to 5000.
thin Lag for posterior sample. Defaults to 2.
burn Burn-in for posterior sample. Defaults to 0.1. Must be in (0,1).
innovation Distribution to be used for the innovation : "Poisson" or "ZIP". Defaults to
Poisson.
Value
Returns a list containing a posteriori samples for the specified model parameters.
References
Garay, <NAME>., <NAME>, <NAME>, and <NAME>. "Bayesian analysis of
the p-order integer-valued AR process with zero-inflated Poisson innovations." Journal of Statistical
Computation and Simulation 90, no. 11 (2020): 1943-1964.
Garay, <NAME>., <NAME>, <NAME>, and <NAME>. "First-Order Integer
Valued AR Processes with Zero-Inflated Innovations." In Workshop on Nonstationary Systems and
Their Applications, pp. 19-40. Springer, Cham, 2021.
Examples
test <- simul_zinarp(alpha = 0.1, lambda = 1, n = 100)
e.test <- estimate_zinarp(x = test, p = 1, iter = 800, innovation= "Poisson")
alpha_hat <- mean(e.test$alpha)
lambda_hat <- mean(e.test$lambda)
data(slesions)
e.slesions <- estimate_zinarp(slesions$y, p = 1, iter = 800, innovation = 'ZIP')
alpha_hat_slesions <- mean(e.slesions$alpha)
lambda_hat_slesions <- mean(e.slesions$lambda)
rho_hat_slesions <- mean(e.slesions$rho)
explore_zinarp EXPLORATORY DATA ANALYSIS FOR ZINAR(p) PROCESSES
Description
This function generates a graph for exploring ZINAR(p) processes.
Usage
explore_zinarp(x)
Arguments
x A vector containing a discrete non-negative time series data set.
Value
Plot time series graph, relative frequency bar plot, autocorrelation function graph and partial autocorrelation function graph on a common plot.
simul_zinarp Sample Generator for ZINAR(p)
Description
This function generates a realization of a ZINAR(p) process.
Usage
simul_zinarp(n, alpha, lambda, pii = 0)
Arguments
n The length of the simulated chain.
alpha The p-dimensional vector (in which p is the process order) of alpha values, the
probabilities of an element remaining in the process. All alpha elements must be
in [0,1] and their sum must be smaller than 1.
lambda The Poisson rate parameter. Must be greater than zero.
pii The probability of a structural zero (i.e., ignoring the Poisson distribution) under
ZIP innovation sequences. Defaults to 0, following a standard Poisson.
Value
Returns a numeric vector representing a realization of an INAR/ZINAR(p) process.
References
<NAME>., <NAME>, <NAME>, and <NAME>. "Bayesian analysis of
the p-order integer-valued AR process with zero-inflated Poisson innovations." Journal of Statistical
Computation and Simulation 90, no. 11 (2020): 1943-1964.
Garay, <NAME>; Medina, <NAME>; <NAME>; <NAME>. First-order integer valued
AR processes with zero-inflated innovations. Cyclostationarity: Theory and Methods, Springer
Verlag, 2021, v. 1, p. 19-40.
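The ZINAR(p) recursion combines binomial thinning of past counts (each past unit survives independently with probability alpha) with a possibly zero-inflated Poisson innovation. A minimal illustration of the p = 1 case — written in Python rather than R so the sketch stays self-contained; all names are illustrative and not from the package:

```python
import math
import random

def simulate_zinar1(n, alpha, lam, pii=0.0, seed=42):
    """Simulate X_t = alpha * X_{t-1} (binomial thinning) + e_t,
    where e_t is a zero-inflated Poisson(lam) innovation."""
    rng = random.Random(seed)

    def zip_innovation():
        if rng.random() < pii:  # structural zero with probability pii
            return 0
        # Knuth's multiplicative method for a Poisson(lam) draw.
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    x, series = 0, []
    for _ in range(n):
        # thinning: each of the x past units survives w.p. alpha
        survivors = sum(rng.random() < alpha for _ in range(x))
        x = survivors + zip_innovation()
        series.append(x)
    return series

xs = simulate_zinar1(100, alpha=0.1, lam=1.0)
print(len(xs))  # 100
```

With `pii = 0` this reduces to a plain Poisson INAR(1), matching the default of `simul_zinarp`.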
slesions Skin lesions dataset
Description
Monthly number of skin lesions-related submissions to animal health laboratories from a region in
New Zealand, obtained from 2003 to 2009.
Usage
slesions
Format
An object of class data.frame with 84 rows and 1 columns.
References
Jazi, <NAME>, <NAME>, and <NAME>. "First-order integer valued AR pro-
cesses with zero inflated Poisson innovations." Journal of Time Series Analysis 33.6 (2012): 954-
963. |
@types/minizlib | npm | JavaScript | [Installation](#installation)
===
> `npm install --save @types/minizlib`
[Summary](#summary)
===
This package contains type definitions for minizlib (<https://github.com/isaacs/minizlib>).
[Details](#details)
===
Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/minizlib>.
[index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/minizlib/index.d.ts)
---
```
/// <reference types="node" />

// Import from dependencies
import MiniPass = require("minipass");
import zlib = require("zlib");

// Exports only from typings
export { constants } from "zlib";
type BrotliMode = "BrotliCompress" | "BrotliDecompress";
type ZlibMode = "Gzip" | "Gunzip" | "Deflate" | "Inflate" | "DeflateRaw" | "InflateRaw" | "Unzip";
interface MiniPassOptions extends Omit<MiniPass.StringOptions, "encoding"> {
encoding?: BufferEncoding | "buffer" | null;
}
interface ZlibBaseOptions extends MiniPassOptions {
flush?: number | undefined;
finishFlush?: number | undefined;
}
declare class ZlibBase extends MiniPass {
readonly ended: boolean;
constructor(opts: ZlibBaseOptions & zlib.BrotliOptions, mode: BrotliMode);
constructor(opts: ZlibBaseOptions & zlib.ZlibOptions, mode: ZlibMode);
close(): void;
reset(): void;
flush(flushFlag?: number): void;
end(chunk: any, cb?: () => void): this;
end(chunk?: any, encoding?: string | null, cb?: () => void): this;
write(chunk: any, cb?: () => void): boolean;
write(chunk?: any, encoding?: string | null, cb?: () => void): boolean;
}
interface ZlibOptions extends ZlibBaseOptions {
level?: number | undefined;
strategy?: number | undefined;
}
declare class Zlib extends ZlibBase {
constructor(opts: ZlibOptions & zlib.ZlibOptions, mode: ZlibMode);
params(level?: number, strategy?: number): void;
}
export class Deflate extends Zlib {
constructor(opts: ZlibOptions & zlib.ZlibOptions);
}
export class Inflate extends Zlib {
constructor(opts: ZlibOptions & zlib.ZlibOptions);
}
export class Gzip extends Zlib {
constructor(opts: ZlibOptions & zlib.ZlibOptions);
}
export class Gunzip extends Zlib {
constructor(opts: ZlibOptions & zlib.ZlibOptions);
}
export class DeflateRaw extends Zlib {
constructor(opts: ZlibOptions & zlib.ZlibOptions);
}
export class InflateRaw extends Zlib {
constructor(opts: ZlibOptions & zlib.ZlibOptions);
}
export class Unzip extends Zlib {
constructor(opts: ZlibOptions & zlib.ZlibOptions);
}
declare class Brotli extends ZlibBase {
constructor(opts: ZlibOptions & zlib.BrotliOptions, mode: BrotliMode);
}
export class BrotliCompress extends Brotli {
constructor(opts: ZlibOptions & zlib.BrotliOptions);
}
export class BrotliDecompress extends Brotli {
constructor(opts: ZlibOptions & zlib.BrotliOptions);
}
```
### [Additional Details](#additional-details)
* Last updated: Wed, 18 Oct 2023 05:47:08 GMT
* Dependencies: [@types/node](https://npmjs.com/package/@types/node), [minipass](https://npmjs.com/package/minipass)
[Credits](#credits)
===
These definitions were written by [<NAME>](https://github.com/nokel81).
Readme
---
### Keywords
none
pypimonitor | readthedoc | Unknown | PypiMonitor Documentation
Release 0.5.0
<NAME>
Oct 08, 2023
CONTENTS
1.1 pypimonitor
1.2 pypimonitor.httpd
2.1 Processing YAML files
2.2 Writing YAML files
3.1 List of plugins
3.2 Write your own
An HTML dashboard to monitor your PyPI packages. It displays a line chart showing the evolution of downloads across versions, and a set of badges (download statistics, readthedocs badge, continuous integration, etc.). See the example below.
It can also monitor stuff that is not on Pypi, but less information will be available.
It is available as a command line interface that generates the HTML code, and as a web server, to generate and serve this dashboard.
Contents:
CHAPTER ONE: MODULES
1.1 pypimonitor
This program can be called to generate an HTML page, whose code is printed to standard output.
Produce an HTML dashboard to monitor your PyPI packages.
usage: pypimonitor [-h] [--version] [-u USER] [-c CELL] [-p PACKAGE] [yaml]
1.1.1 Positional Arguments
yaml Configuration file.
1.1.2 Named Arguments
--version Show version
-u, --user A comma-separated list of users, whose packages are to be monitored.
Default: []
-c, --cell A comma-separated list of cells to show.
Default: []
-p, --package A comma-separated list of packages to monitor.
Default: []
1.2 pypimonitor.httpd
Alternatively, this module can serve the web pages, to be accessible from a web browser. If served on http://localhost,
the following URLs are available:
• http://localhost (and http://localhost/index.html): If no GET arguments are given, display an index page, with
a form to ask to render some pages. It also accepts GET arguments to specify which packages to process, and
which cell plugins to use.
• http://localhost/foo/bar: When running the server, a directory DIR is given as argument. When calling this URL,
a file DIR/foo/bar.yaml is searched, and if it exists, this file is processed to render the HTML page.
CHAPTER TWO: YAML CONFIGURATION FILES
Some plugins try to guess the appropriate required values, but it is not always possible. For instance, plugin readthedocs cannot guess the documentation URL, since it can differ from http://<PypiPackageName>.readthedocs.io (for instance, the documentation URL of sphinxcontrib-packages is http://packages.readthedocs.io and not http://sphinxcontrib-packages.readthedocs.io). Thus, it may be necessary to provide additional information. This can be done using YAML files.
2.1 Processing YAML files
Configuration files can be processed by both the pypimonitor command line interface, and by the pypimonitor.httpd web server. See the relevant documentation for more information.
2.2 Writing YAML files
2.2.1 Example
The example page is produced using the following YAML file:
default:
  CI:
    cell: gitlabci
    server: //framagit.org
    user: spalax
  Coverage:
    cell: gitlabcoverage
    server: //framagit.org
    user: spalax
cells:
  - color
  - homepage
  - pypiversion
  - pythonversions
  - pypimdownloads
  - pypiwdownloads
  - pypiddownloads
  - readthedocs
  - CI
  - Coverage
packages:
  argdispatch:
  annales-math:
    homepage:
      homepage: //framagit.org/lpaternault/annales-math
    CI:
      cell: gitlabci
      server: //framagit.org
      user: lpaternault
    Coverage:
      cell: gitlabcoverage
      server: //framagit.org
      user: lpaternault
  cahier:
  chval:
  clachievements:
  devoir:
  dummypdf:
  evariste:
  fullcoverage:
  jouets:
  mklog:
  papersize:
  paste2sms:
  pdfautonup:
  pdfimpose:
  pypimonitor:
  scal:
  sphinxcontrib-packages:
    readthedocs:
      slug: packages
  sphinxcontrib-proof:
  sphinxcontrib-stuffcounter:
    homepage:
      homepage: //framagit.org/spalax/sphinxcontrib-stuffcounter
  spix:
  squelette:
  toto2titi:
    CI:
      cell: gitlabci
      server: //framagit.org
      user: spalax
      slug: paste2sms
    Coverage:
      cell: gitlabcoverage
      server: //framagit.org
      user: spalax
      slug: paste2sms
    readthedocs:
      slug: paste2sms
2.2.2 Configuration options
The YAML configuration is a dictionary, with the following keys: default, cells, packages. There can be additional keys, used by some cell plugins (at the time I am writing this, no plugin uses this).
In the following example, the YAML configuration file is referred to as a Python dict.
cell option
The cell plugin used to render column foo of line mypackage is the plugin having as keyword (by order of precedence):
• the value of config['packages']['mypackage']['foo']['cell'] (to explicitly set the plugin to use for a single package);
• the value of config['default']['foo']['cell'] (to explicitly set the default plugin to use for a whole column);
• foo (at last, the column reference is used as the cell plugin keyword);
• the default error plugin.
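The precedence above can be sketched as a plain dictionary lookup. This is a simplified model of the resolution order, not the actual pypimonitor code (`resolve_cell_plugin` and `KNOWN_PLUGINS` are illustrative names):

```python
KNOWN_PLUGINS = {"color", "homepage", "readthedocs", "gitlabci"}

def resolve_cell_plugin(config, package, column):
    """Return the cell plugin keyword used to render `column` of `package`,
    following the documented order of precedence."""
    package_opts = (config.get("packages", {}).get(package) or {}).get(column) or {}
    if "cell" in package_opts:                 # 1. per-package override
        return package_opts["cell"]
    default_opts = config.get("default", {}).get(column) or {}
    if "cell" in default_opts:                 # 2. per-column default
        return default_opts["cell"]
    if column in KNOWN_PLUGINS:                # 3. column name as keyword
        return column
    return "error"                             # 4. fallback: error plugin

config = {
    "default": {"CI": {"cell": "gitlabci"}},
    "packages": {"mypackage": {"CI": {"cell": "travisci"}}},
}
print(resolve_cell_plugin(config, "mypackage", "CI"))     # travisci
print(resolve_cell_plugin(config, "otherpackage", "CI"))  # gitlabci
print(resolve_cell_plugin(config, "mypackage", "color"))  # color
```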
packages
This is a dictionary of dictionaries: the keys are the pypi package names, and the values are either nothing (if the default values are sufficient to process this package), or a dictionary of cell options: the keys of this "sub-dictionary" are the cell names, and the values are dictionaries of cell options.
For instance, in the example:
• package fullcoverage uses only default values, so it is mentioned without options;
• however, package sphinxcontrib-packages has, as a value, {'readthedocs': {'slug': 'packages'}}, which means that options {'slug': 'packages'} are passed to the readthedocs plugin (which means that the documentation URL is http://packages.readthedocs.io instead of http://sphinxcontrib-packages.readthedocs.io).
cells
If config['cells'] is not defined, the list of columns is deduced from the cells used in the package options. This option has two purposes:
• explicitly set the list of columns (for instance, in the example, since color is never referenced in package options (every package uses the default options for this plugin), it would not appear in the generated HTML file if it were not present in the config['cells'] list);
• set the order of those columns.
The values of this list can be:
• a cell plugin keyword, in which case, unless otherwise specified, the plugin used to render this cell is the corresponding plugin, and the title of the column is the title of this plugin;
• or an arbitrary text, in which case each package has to explicitly define its cell plugin for this cell, or the cell plugin to use has to be defined in the default value (see default).
default
Default cell parameters can be set, and apply to every package (unless a different parameter is set specifically for this package). This option is a dictionary, where:
• keys are the column names (as referenced in the cells option);
• values are dictionaries of options, which are applied to every package, unless the package explicitly specifies a different option.
For instance, in the example the default configuration contains:
CI:
cell: gitlabci
server: http://framagit.org
user: spalax This means that, unless a package specifies something else:
• the plugin used to render the cells is gitlabci;
• the gitlab server is http://framagit.org (and not the plugin default http://gitlab.com);
• the user is spalax.
CHAPTER THREE: CELL PLUGINS
The content of cells is rendered by plugins. A predefined list is detailed below, but you can also Write your own.
3.1 List of plugins
3.1.1 Base plugins
color
Produce a square of the same color of the corresponding download chart line. Takes no arguments.
empty
Produce an empty cell. Takes no arguments.
error
Produce a text corresponding to an error (this is used internally). Takes no arguments.
html
Copy raw html code.
• html (required) The raw html code to render.
link
Produce a link to a URL.
• href (required) URL of the resource to link to.
• content (defaults to the href argument) The text of the link.
3.1.2 Gitlab
gitlabci
Produce a badge for latest gitlabCI build.
• server (default http://gitlab.com) URL of the gitlab server.
• user (required) Name of the user owning the package.
• slug Repository name, if different from the pypi package name.
Those options, combined, should produce the package URL: {server}/{user}/{slug}.
gitlabcicoverage
Produce a test coverage badge for latest gitlabCI build.
• server (default http://gitlab.com) URL of the gitlab server.
• user (required) Name of the user owning the package.
• slug Repository name, if different from the pypi package name.
Those options, combined, should produce the package URL: {server}/{user}/{slug}.
3.1.3 PyPI
homepage
Package home page, retrieved from Pypi metadata.
• homepage (default to pypi home page value) Project home page, if different from the value retrieved from pypi.
3.1.4 Readthedocs
readthedocs
Readthedocs build badge.
• slug Repository name, if different from the pypi package name.
• branch Branch name, if the default branch should not be used.
• lang Documentation language, if the default lang should not be used.
3.1.5 Shields
pypiddownloads
Badge displaying Pypi daily download statistics.
pypiwdownloads
Badge displaying Pypi weekly download statistics.
pypimdownloads
Badge displaying Pypi monthly download statistics.
pypiversion
Badge displaying Pypi version.
pythonversions
Badge displaying supported Python versions.
3.1.6 Travis
travisci
Travis badge.
• user (required) Travis username.
• slug Repository name, if different from the pypi package name.
3.2 Write your own
The HTML page is generated using the Jinja2 template system, but you don't have to use it for your plugins.
A plugin is an object, subclass of Cell. The Cell.render() method must be implemented: it is the method that is called to fill a cell. Since for many usages, your plugin will simply be a template, you can use the Jinja2 class, which makes this a lot easier. Both classes make it easy to define default and required arguments, and to log errors.
3.2.1 Raw
class pypimonitor.cell.Cell(renderer)
Render some piece of information about a package as HTML code.
keyword = None
Keyword referencing the plugin, used in the YAML configuration files file to enable this plugin. If None,
the class is an abstract class that cannot be used directly.
title = ''
Title of the column.
default = {}
Default values for package arguments. See Default and Required arguments for more details.
required = []
List of names of required package arguments. See Default and Required arguments for more details.
render(context, package, cell)
Return the HTML code corresponding to this cell.
Parameters
• context – Current Jinja2 context.
• package (str) – Package name.
• cell (dict) – Package arguments for this cell.
Return type
str
Returns
The HTML code to display in the given cell.
static render_error(context, cell, package, message)
Return the HTML code corresponding to an error.
Parameters
• context – Current Jinja2 context.
• cell (str) – Cell name (plugin keyword).
• package (str) – Package name.
• message (str) – Human readable error message.
Return type
str
Returns
The HTML code to display in the given cell.
3.2.2 Jinja2
class pypimonitor.cell.Jinja2(renderer)
Generic class for cells that are barely more than a template.
When this class is used to render a cell, it renders the template given by self.template. When doing so, the template has access to the following variables:
• package: the name of the package being processed, as a string;
• pypi: the information about this package fetched from pypi, as a dictionary (for instance https://pypi.org/pypi/pypimonitor/json);
• cell: the cell options (as defined in the YAML configuration file, possibly completed with default values), as a dictionary.
property template
Return template path.
By default, this is cells/KEYWORD.html. One can override this property to provide an alternative template path.
keyword = None
Keyword referencing the plugin, used in the YAML configuration file to enable this plugin. If None, the class is an abstract class that cannot be used directly.
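As an illustration of the default template path described above, here is a self-contained sketch. Note that `Jinja2Stub` is a hypothetical stand-in for pypimonitor.cell.Jinja2, written so the example runs without pypimonitor installed:

```python
# Illustration only: Jinja2Stub stands in for pypimonitor.cell.Jinja2,
# showing how the default template path is derived from `keyword`.
class Jinja2Stub:
    keyword = None

    @property
    def template(self):
        # Default behaviour described above: cells/KEYWORD.html
        return "cells/{}.html".format(self.keyword)


class Summary(Jinja2Stub):
    # Hypothetical plugin enabled with the `summary` keyword;
    # it would render the template cells/summary.html.
    keyword = "summary"


print(Summary().template)  # cells/summary.html
```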
3.2.3 Default and Required arguments
A built-in feature eases processing default and required arguments in simple cases. This is done automatically before the Cell.render() method is called.
If your plugin has absolute default values for some arguments, those can be set in the Cell.default dictionary. Keys are arguments, and values are default values for these arguments.
If your plugin has required arguments (plugin cannot work without those arguments), they can be listed in the Cell.
required list. Using a cell without setting one of those arguments will display an error.
For more complex cases (either foo or bar should be set; foo default value is bar if baz is not set, None otherwise; foo is required only if bar is not set; etc.), you have to implement it by hand in the Cell.render() method.
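As a sketch of how these two class attributes interact, consider the following self-contained example. `CellStub` and its `apply_defaults` helper are hypothetical (the real processing happens inside pypimonitor before Cell.render() is called), but they mimic the behaviour described above:

```python
# Illustration only: a minimal stand-in for pypimonitor.cell.Cell,
# mimicking the default/required processing described above.
class CellStub:
    default = {}
    required = []

    def apply_defaults(self, cell):
        # Fill in defaults, then report any missing required arguments.
        merged = dict(self.default)
        merged.update(cell)
        missing = [arg for arg in self.required if arg not in merged]
        return merged, missing


class Link(CellStub):
    # Hypothetical plugin: `url` is required, `text` has a default.
    default = {"text": "link"}
    required = ["url"]


merged, missing = Link().apply_defaults({"url": "https://example.org"})
print(merged)   # {'text': 'link', 'url': 'https://example.org'}
print(missing)  # []
```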
3.2.4 Errors
To display an error (both in the generated HTML page and in the log), the Cell class has a Cell.render_error() method. This is to be used as:
class Foo(Cell):
    keyword = 'foo'

    def render(self, context, package, cell):
        if 'bar' not in cell:
            return self.render_error(context, self.keyword, package,
                                     "Error: argument 'bar' missing.")
        return "<p>" + cell['bar'] + "</p>"
CHAPTER FOUR
EXAMPLE
The example configuration file produces the following output.
CHAPTER FIVE
INDICES AND TABLES
• genindex
• modindex
• search
Crate dssim_core
===
The library interface is awfully abstract, because it strives to efficiently, and very accurately,
support several pixel types. It also allows replacing some parts of the algorithm with different implementations
(if you need higher accuracy or higher speed).
Structs
---
* Dssim: Configuration for the comparison
* DssimImage: Abstract wrapper for images. See `Dssim::create_image()`
* LAB: L*a*b*, but using float units (values are 100× smaller than in usual integer representation)
* SsimMap: Detailed comparison result
* Val: Result of comparison as `f64`
Traits
---
* ToLABBitmap: Convert image to L*a*b* planar
* ToRGBAPLU: RGBA Premultiplied Linear-light Unit scale
Functions
---
* new: Create new context for a comparison
Type Definitions
---
* RGBAPLU: RGBA, but premultiplied alpha, linear (using sRGB primaries, but not its gamma curve), f32 unit scale 0..1
* RGBLU: RGB, but linear (using sRGB primaries, but not its gamma curve), f32 unit scale 0..1
Struct dssim_core::Dssim
===
```
pub struct Dssim { /* private fields */ }
```
Configuration for the comparison
Implementations
---
### impl Dssim
#### pub fn new() -> Dssim
Create new context for comparisons
#### pub fn set_scales(&mut self, scales: &[f64])
Set how many scales will be used, and weights of each scale
#### pub fn set_save_ssim_maps(&mut self, num_scales: u8)
Set how many scales will be kept for saving
#### pub fn create_image_rgba(
&self,
bitmap: &[RGBA<u8>],
width: usize,
height: usize
) -> Option<DssimImage<f32>>
Create image from an array of RGBA pixels (sRGB, non-premultiplied, alpha last).
If you have a slice of `u8`, then see `rgb` crate’s `as_rgba()`.
#### pub fn create_image_rgb(
&self,
bitmap: &[RGB<u8>],
width: usize,
height: usize
) -> Option<DssimImage<f32>>
Create image from an array of packed RGB pixels (sRGB).
If you have a slice of `u8`, then see `rgb` crate’s `as_rgb()`.
#### pub fn create_image<InBitmap, OutBitmap>(
&self,
src_img: &InBitmap
) -> Option<DssimImage<f32>>where
InBitmap: ToLABBitmap + Send + Sync + Downsample<Output = OutBitmap>,
OutBitmap: ToLABBitmap + Send + Sync + Downsample<Output = OutBitmap>,
The input image is defined using the `imgref` crate, and the pixel type can be:
* `ImgVec<RGBAPLU>` — RGBA premultiplied alpha, linear, float scaled to 0..1
* `ImgVec<RGBLU>` — RGB linear, float scaled to 0..1
* `ImgVec<f32>` — linear light grayscale, float scaled to 0..1
And there’s `ToRGBAPLU::to_rgbaplu()` trait to convert the input pixels from
`[RGBA<u8>]`, `[RGBA<u16>]`, `[RGB<u8>]`, or `[RGB<u16>]`. See `lib.rs` for an example of how it’s done.
You can implement `ToLABBitmap` and `Downsample` traits on your own image type.
#### pub fn compare<M: Borrow<DssimImage<f32>>>(
&self,
original_image: &DssimImage<f32>,
modified_image: M
) -> (Val, Vec<SsimMap>)
Compare original with another image. See `create_image`
The `SsimMap`s are returned only if you’ve enabled them first.
`Val` is a fancy wrapper for `f64`
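Putting `create_image_rgba` and `compare` together, a minimal usage sketch might look like the following. This is a sketch under the assumption that the `dssim-core` and `rgb` crates are available in `Cargo.toml`; it is not a verified build:

```rust
use dssim_core::Dssim;
use rgb::RGBA;

fn main() {
    let attr = Dssim::new();
    let (w, h) = (8, 8);
    // Two identical solid-red images; any RGBA buffers work here.
    let pixels = vec![RGBA::new(255u8, 0, 0, 255); w * h];
    let orig = attr.create_image_rgba(&pixels, w, h).expect("image");
    let modified = attr.create_image_rgba(&pixels, w, h).expect("image");
    // `compare` returns (Val, Vec<SsimMap>); Val converts into f64.
    let (val, _maps) = attr.compare(&orig, modified);
    let dssim: f64 = val.into();
    println!("DSSIM: {}", dssim); // 0 means the images are identical
}
```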
Trait Implementations
---
### impl Clone for Dssim
#### fn clone(&self) -> Dssim
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Dssim
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for Dssim
### impl Send for Dssim
### impl Sync for Dssim
### impl Unpin for Dssim
### impl UnwindSafe for Dssim
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Pointable for T
#### const ALIGN: usize = mem::align_of::<T>()
The alignment of pointer.
#### type Init = T
The type for initializers.
#### unsafe fn init(init: <T as Pointable>::Init) -> usize
Initializes a pointer with the given initializer.
#### unsafe fn deref<'a>(ptr: usize) -> &'a T
Dereferences the given pointer.
#### unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T
Mutably dereferences the given pointer.
#### unsafe fn drop(ptr: usize)
Drops the object pointed to by the given pointer.
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct dssim_core::DssimImage
===
```
pub struct DssimImage<T> { /* private fields */ }
```
Abstract wrapper for images. See `Dssim::create_image()`
Implementations
---
### impl<T> DssimImage<T>
#### pub fn width(&self) -> usize
#### pub fn height(&self) -> usize
Trait Implementations
---
### impl<T: Clone> Clone for DssimImage<T>
#### fn clone(&self) -> DssimImage<T>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Auto Trait Implementations
---
### impl<T> RefUnwindSafe for DssimImage<T>where
T: RefUnwindSafe,
### impl<T> Send for DssimImage<T>where
T: Send,
### impl<T> Sync for DssimImage<T>where
T: Sync,
### impl<T> Unpin for DssimImage<T>where
T: Unpin,
### impl<T> UnwindSafe for DssimImage<T>where
T: UnwindSafe,
Struct dssim_core::LAB
===
```
pub struct LAB {
pub l: f32,
pub a: f32,
pub b: f32,
}
```
L*a*b*, but using float units (values are 100× smaller than in usual integer representation)
Fields
---
`l: f32`
`a: f32`
`b: f32`
Trait Implementations
---
### impl Add<LAB> for LAB
### impl Add<LAB> for LAB
#### type Output = LAB
The resulting type after applying the `+` operator.#### fn add(self, other: Self::Output) -> Self::Output
Performs the `+` operation.
#### type Output = LAB
The resulting type after applying the `+` operator.#### fn add(self, other: f32) -> Self::Output
Performs the `+` operation.
#### fn clone(&self) -> LAB
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### type Output = LAB
The resulting type after applying the `/` operator.#### fn div(self, other: Self::Output) -> Self::Output
Performs the `/` operation.
#### fn from(other: LAB) -> f32
Converts to this type from the input type.
### impl From<LAB> for f64
#### fn from(other: LAB) -> f64
Converts to this type from the input type.
### impl Mul<LAB> for LAB
#### type Output = LAB
The resulting type after applying the `*` operator.#### fn mul(self, other: LAB) -> Self::Output
Performs the `*` operation.
#### type Output = LAB
The resulting type after applying the `*` operator.#### fn mul(self, other: LAB) -> Self::Output
Performs the `*` operation.
#### type Output = LAB
The resulting type after applying the `*` operator.#### fn mul(self, other: f32) -> Self::Output
Performs the `*` operation.
#### type Output = LAB
The resulting type after applying the `-` operator.#### fn sub(self, other: LAB) -> Self::Output
Performs the `-` operation.
Auto Trait Implementations
---
### impl RefUnwindSafe for LAB
### impl Send for LAB
### impl Sync for LAB
### impl Unpin for LAB
### impl UnwindSafe for LAB
Struct dssim_core::SsimMap
===
```
pub struct SsimMap {
pub map: ImgVec<f32>,
pub ssim: f64,
}
```
Detailed comparison result
Fields
---
`map: ImgVec<f32>`
SSIM scores
`ssim: f64`
Average SSIM (not DSSIM)
Trait Implementations
---
### impl Clone for SsimMap
#### fn clone(&self) -> SsimMap
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Auto Trait Implementations
---
### impl RefUnwindSafe for SsimMap
### impl Send for SsimMap
### impl Sync for SsimMap
### impl Unpin for SsimMap
### impl UnwindSafe for SsimMap
Struct dssim_core::Val
===
```
pub struct Val(_);
```
Result of comparison as `f64`
Implementations
---
### impl Val
#### pub fn new(v: f64) -> Val
Trait Implementations
---
### impl Add<Val> for f64
#### type Output = f64
The resulting type after applying the `+` operator.
#### fn add(self, r: Val) -> Self::Output
Performs the `+` operation.
#### type Output = f64
The resulting type after applying the `+` operator.
#### fn add(self, r: RHS) -> Self::Output
Performs the `+` operation.
### impl Clone for Val
#### fn clone(&self) -> Val
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Val
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for Val
#### fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Div<Val> for f64
#### type Output = f64
The resulting type after applying the `/` operator.
#### fn div(self, r: Val) -> Self::Output
Performs the `/` operation.
#### type Output = f64
The resulting type after applying the `/` operator.
#### fn div(self, r: RHS) -> Self::Output
Performs the `/` operation.
### impl From<Val> for f64
#### fn from(s: Val) -> f64
Converts to this type from the input type.
### impl From<f64> for Val
#### fn from(s: f64) -> Val
Converts to this type from the input type.
### impl Mul<Val> for f64
#### type Output = Val
The resulting type after applying the `*` operator.
#### fn mul(self, r: Val) -> Self::Output
Performs the `*` operation.
#### type Output = Val
The resulting type after applying the `*` operator.
#### fn mul(self, r: RHS) -> Self::Output
Performs the `*` operation.
### impl PartialEq<Val> for Val
#### fn eq(&self, other: &Val) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialEq<Val> for f64
#### fn eq(&self, other: &Val) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialEq<f64> for Val
#### fn eq(&self, other: &f64) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<Val> for Val
#### fn partial_cmp(&self, other: &Val) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
#### fn partial_cmp(&self, other: &Val) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
#### fn partial_cmp(&self, other: &f64) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl Sub<Val> for f64
#### type Output = f64
The resulting type after applying the `-` operator.
#### fn sub(self, r: Val) -> Self::Output
Performs the `-` operation.
#### type Output = f64
The resulting type after applying the `-` operator.
#### fn sub(self, r: RHS) -> Self::Output
Performs the `-` operation.
### impl Eq for Val
### impl StructuralPartialEq for Val
Auto Trait Implementations
---
### impl RefUnwindSafe for Val
### impl Send for Val
### impl Sync for Val
### impl Unpin for Val
### impl UnwindSafe for Val
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Pointable for T
#### const ALIGN: usize = mem::align_of::<T>()
The alignment of pointer.
#### type Init = T
The type for initializers.
#### unsafe fn init(init: <T as Pointable>::Init) -> usize
Initializes a pointer with the given initializer.
Dereferences the given pointer.
Mutably dereferences the given pointer.
Drops the object pointed to by the given pointer.
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Trait dssim_core::ToLABBitmap
===
```
pub trait ToLABBitmap {
// Required method
fn to_lab(&self) -> Vec<ImgVec<f32>>;
}
```
Convert image to L*a*b* planar
It should return 1 (gray) or 3 (color) planes.
Required Methods
---
#### fn to_lab(&self) -> Vec<ImgVec<f32>>
Implementations on Foreign Types
---
### impl ToLABBitmap for ImgVec<f32>
#### fn to_lab(&self) -> Vec<ImgVec<f32>>
### impl ToLABBitmap for ImgVec<RGBLU>
#### fn to_lab(&self) -> Vec<ImgVec<f32>>
### impl ToLABBitmap for ImgVec<RGBAPLU>
#### fn to_lab(&self) -> Vec<ImgVec<f32>>
### impl<'a> ToLABBitmap for ImgRef<'a, RGBAPLU>
#### fn to_lab(&self) -> Vec<ImgVec<f32>>
### impl<'a> ToLABBitmap for ImgRef<'a, RGBLU>
#### fn to_lab(&self) -> Vec<ImgVec<f32>>
Implementors
---
Trait dssim_core::ToRGBAPLU
===
```
pub trait ToRGBAPLU {
// Required methods
fn to_rgbaplu(&self) -> Vec<RGBAPLU>;
fn to_rgblu(&self) -> Vec<RGBLU>;
}
```
RGBA Premultiplied Linear-light Unit scale
Convenience function `.to_rgbaplu()` to convert RGBA bitmaps to a format useful for DSSIM.
Required Methods
---
#### fn to_rgbaplu(&self) -> Vec<RGBAPLU>
Convert with alpha channel preserved
#### fn to_rgblu(&self) -> Vec<RGBLU>
Discard alpha channel, if any
Implementations on Foreign Types
---
### impl<P> ToRGBAPLU for [P]where
P: GammaPixel<Output = RGBAPLU>,
#### fn to_rgbaplu(&self) -> Vec<RGBAPLU>
#### fn to_rgblu(&self) -> Vec<RGBLU>
Implementors
---
Function dssim_core::new
===
```
pub fn new() -> Dssim
```
Create new context for a comparison
Type Definition dssim_core::RGBAPLU
===
```
pub type RGBAPLU = RGBA<f32>;
```
RGBA, but: premultiplied alpha, linear (using sRGB primaries, but not its gamma curve), f32 unit scale 0..1
Type Definition dssim_core::RGBLU
===
```
pub type RGBLU = RGB<f32>;
```
RGB, but: linear (using sRGB primaries, but not its gamma curve), f32 unit scale 0..1
Package ‘showimage’
October 14, 2022
Title Show an Image on an 'R' Graphics Device
Version 1.0.0
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Sometimes it is handy to be able to view an image file on an
'R' graphics device. This package just does that. Currently it supports
'PNG' files.
License GPL-2 | GPL-3
LazyData true
URL https://github.com/r-lib/showimage#readme
BugReports https://github.com/r-lib/showimage/issues
Imports png, tools
Suggests covr, testthat
RoxygenNote 5.0.1
Encoding UTF-8
NeedsCompilation no
Repository CRAN
Date/Publication 2018-01-24 09:51:26 UTC
R topics documented:
show_image . . . 2
show_image Show an Image on an R Graphics Device
Description
Show an Image on an R Graphics Device
Usage
show_image(file, mar = c(0, 0, 0, 0), axes = FALSE, frame.plot = TRUE,
asp = NULL, ...)
Arguments
file Name of the image file to show.
mar Margin, the mar parameter, see par.
axes Whether to show the axes. You need to increase the margin to see the axis labels.
frame.plot Whether to draw a frame around the plot.
asp Aspect ratio parameter for plot. If NULL, then the original aspect ratio of the
image is used.
... Additional arguments are passed to plot.
Value
Nothing.
Examples
rlogo <- system.file("img", "Rlogo.png", package="png")
show_image(rlogo)
## Create a plot in a PNG and show it
png(tmp <- tempfile(fileext = ".png"))
pairs(iris)
dev.off()
show_image(tmp)
README
[¶](#section-readme)
---

### Barista
[Release](https://github.com/soumya92/barista/releases/tag/autorelease)
[GoDoc](https://godoc.org/barista.run)
[Maintainability](https://codeclimate.com/github/soumya92/barista/maintainability)
[Test Coverage](https://codeclimate.com/github/soumya92/barista/test_coverage)
Barista is an i3 status bar written in Go.
**This is not an official Google product**
#### Features
* Based on push rather than fixed interval polling. This allows immediate updates for many modules, like volume, media, shell, etc.
* Produces a single binary via go build. This makes it easy to set up the bar executable, since no import paths, environment variables, et al. need to be configured.
* Good click handlers (especially media and volume), since we can wait for a command and update the bar immediately rather than waiting for the next 'tick'.
* Configuration is code, providing oodles of customization options without needing myriad configuration options in a file somewhere. If/then/else, loops,
functions, variables, and even other go packages can all be used seamlessly.
#### Usage
See samples/sample-bar.go for a sample bar.
To build your own bar, simply create a `package main` go file,
import and configure the modules you wish to use, and call `barista.Run()`.
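For example, a minimal bar along those lines might look like the following. This is a sketch assuming the clock module at `barista.run/modules/clock`; see samples/sample-bar.go for a complete, verified example:

```go
package main

import (
	"barista.run"
	"barista.run/modules/clock"
)

func main() {
	// Run the bar with a single module; panic on a fatal error so
	// i3 restarts the status command.
	panic(barista.Run(clock.Local()))
}
```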
To show your bar in i3, set the `status_command` of a `bar { ... }` section to be the newly built bar binary, e.g.
```
bar {
position top
status_command exec ~/bin/mybar
font pango:DejaVu Sans Mono 10
}
```
See the [quickstart](https://barista.run/#quickstart) for more details.
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package barista provides the building blocks for a custom i3 status bar.
### Index [¶](#pkg-index)
* [func Add(module bar.Module)](#Add)
* [func DefaultErrorHandler(e bar.ErrorEvent)](#DefaultErrorHandler)
* [func Run(modules ...bar.Module) error](#Run)
* [func SetErrorHandler(handler func(bar.ErrorEvent))](#SetErrorHandler)
* [func SuppressSignals(suppressSignals bool)](#SuppressSignals)
* [func TestMode(reader io.Reader, writer io.Writer)](#TestMode)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [Add](https://github.com/soumya92/barista/blob/d5b04d179ffa/barista.go#L125) [¶](#Add)
```
func Add(module bar.Module)
```
Add adds a module to the bar.
####
func [DefaultErrorHandler](https://github.com/soumya92/barista/blob/d5b04d179ffa/barista.go#L248) [¶](#DefaultErrorHandler)
```
func DefaultErrorHandler(e bar.ErrorEvent)
```
DefaultErrorHandler invokes i3-nagbar to show the full error message.
####
func [Run](https://github.com/soumya92/barista/blob/d5b04d179ffa/barista.go#L160) [¶](#Run)
```
func Run(modules ...bar.Module) error
```
Run sets up all the streams and enters the main loop.
If any modules are provided, they are added to the bar now.
This allows both styles of bar construction:
`bar.Add(a); bar.Add(b); bar.Run()`, and `bar.Run(a, b)`.
####
func [SetErrorHandler](https://github.com/soumya92/barista/blob/d5b04d179ffa/barista.go#L149) [¶](#SetErrorHandler)
```
func SetErrorHandler(handler func(bar.ErrorEvent))
```
SetErrorHandler sets the function to be called when an error segment is right clicked. This replaces the DefaultErrorHandler.
####
func [SuppressSignals](https://github.com/soumya92/barista/blob/d5b04d179ffa/barista.go#L137) [¶](#SuppressSignals)
```
func SuppressSignals(suppressSignals bool)
```
SuppressSignals instructs the bar to skip the pause/resume signal handling.
Must be called before Run.
####
func [TestMode](https://github.com/soumya92/barista/blob/d5b04d179ffa/barista.go#L424) [¶](#TestMode)
```
func TestMode(reader io.Reader, writer io.Writer)
```
TestMode creates a new instance of the bar for testing purposes,
and runs it on the provided streams instead of stdin/stdout.
### Types [¶](#pkg-types)
This section is empty.
Date: 2022-06-07
Dieses Manuskript basiert auf der Begleitlektüre zum Java - Einführungskurs, den das Zentrum für Informations-, Medien- und Kommunikationstechnologie an der Universität Trier (ZIMK) im Wintersemester 2021/2022 angeboten hat, ist aber auch für das Selbststudium geeignet.
Die von der Firma Sun Microsystems (mittlerweile von der Firma Oracle übernommen) entwickelte Programmiersprache Java ist zwar mit dem Internet groß geworden, hat sich jedoch mittlerweile als universelle, für vielfältige Zwecke einsetzbare Lösung etabliert, die als de-facto – Standard für die plattformunabhängige Entwicklung gelten kann. Unter den objektorientierten Programmiersprachen hat Java wohl den größten Verbreitungsgrad, und das objektorientierte Paradigma der Softwareentwicklung hat sich praktisch in der gesamten Branche durchgesetzt.
* the text as a PDF document (as of 07.06.2022; 971 pages; approx. 15 MB)
* a ZIP archive with examples and suggested solutions to the exercises (for IntelliJ IDEA; approx. 11 MB)
* the ZIMK manuscripts for older Java versions
The current version of this manuscript (including database programming with JDBC and JPA) is offered here.
# Java 13
Date: 2021-04-24
The ZIMK manuscripts for older Java versions are available via this page.
## Java 13
* the text as a PDF document (as of 24.04.2021; 828 pages; approx. 12 MB)
* a ZIP archive with examples and suggested solutions to the exercises (for IntelliJ IDEA; approx. 2 MB)
## Java 9
Here you will find ...
* the text as a PDF document (as of 27.09.2019; 746 pages; approx. 11 MB)
* a ZIP archive with examples and suggested solutions to the exercises (for IntelliJ IDEA; approx. 2 MB)
## Java 8
* the text as a PDF document (as of 30.03.2016; 805 pages; approx. 16 MB)
* a ZIP archive with examples and suggested solutions to the exercises (for Eclipse 4.5; approx. 1.2 MB)
## Java 7 (1.7)
* the text as a PDF document (as of 24.09.2012; 735 pages; approx. 16 MB)
* a ZIP archive with examples and suggested solutions to the exercises (for Eclipse 3.7; approx. 2.1 MB)
## Java 6 (1.6)
* the text as a PDF document (as of 06.04.2010; 573 pages; approx. 9.4 MB)
* a ZIP archive with examples and suggested solutions to the exercises (for Eclipse 3.5; approx. 2.6 MB)
## Java 5 (1.5)
* the text as a PDF document (as of 06.12.2007; 434 pages; approx. 5.1 MB)
* a ZIP archive with examples and exercises for Eclipse 3.1 (approx. 1.2 MB)
* a ZIP archive with examples and exercises for JCreator LE 3.1 (approx. 1.3 MB)
## Java 1.4
* the text as a PDF document (as of 25.02.2004; 311 pages; approx. 2.2 MB)
* a ZIP archive with examples and exercises (for JCreator LE) (643 KB)
Package ‘rsem’
August 26, 2023
Type Package
Title Robust Structural Equation Modeling with Missing Data and
Auxiliary Variables
Version 0.5.1
Date 2023-08-26
Author <NAME> and <NAME>
Maintainer <NAME> <<EMAIL>>
Depends R (>= 2.7), MASS, lavaan
Description
A robust procedure is implemented to estimate means and covariance matrix of multiple variables with missing data using Huber weight and then to estimate a structural equation model.
License GPL-2
URL https://bigdatalab.nd.edu
ZipData no
LazyLoad yes
NeedsCompilation no
Repository CRAN
Date/Publication 2023-08-26 20:40:02 UTC
R topics documented:
rsem-package
mardiamv25
rsem
rsem.Ascov
rsem.DP
rsem.emmusig
rsem.fit
rsem.gname
rsem.index
rsem.indexv
rsem.indexvc
rsem.lavaan
rsem.pattern
rsem.print
rsem.se
rsem.ssq
rsem.switch
rsem.switch.gamma
rsem.vec
rsem.vech
rsem.weight
semdiag.combinations
semdiag.read.eqs
semdiag.run.eqs
rsem-package Robust Structural Equation Modeling with Missing Data and Auxiliary
Variables
Description
A robust procedure is implemented to estimate means and covariance matrix of multiple variables
with missing data using Huber weight and then to estimate a structural equation model.
Details
Package: rsem
Type: Package
License: GPL-2
LazyLoad: yes
Author(s)
<NAME> and <NAME> Maintainer: <NAME> <<EMAIL>>
References
<NAME>., & <NAME>. (2012). Robust structural equation modeling with missing data and
auxiliary variables. Psychometrika, 77(4), 803-826. https://doi.org/10.1007/s11336-012-9282-4
mardiamv25 Simulated data
Description
mardiamv25: Original data
mardiamv25_contaminated: Contaminated data with outliers
Usage
data(mardiamv25)
data(mardiamv25_contaminated)
rsem The main function for robust SEM analysis
Description
This is the function to carry out all analysis.
Usage
rsem(dset, select, EQSmodel, moment=TRUE, varphi=.1, st='i', max.it=1000,
eqsdata='data.txt', eqsweight='weight.txt', EQSpgm="C:/Progra~1/EQS61/WINEQS.EXE",
serial="1234")
Arguments
dset A data matrix or a data frame
select Variables to be selected for SEM analysis. If omitted, all variables in the data set will be used.
moment With mean structure. For covariance only, set moment=FALSE.
EQSmodel The input file for EQS. If omitted, only the first-stage analysis will be conducted.
varphi Proportion of data to be down-weighted. Default is 0.1.
max.it Maximum number of iterations for EM. Default is 1000
st Starting values for EM algorithm. The default is 0 for mean and I for covariance. Alternatively, the starting values can be estimated according to MCD.
eqsdata Data file name used in EQS
eqsweight File name for weight matrix
EQSpgm The path to the installed EQS program
serial The serial no of EQS
Details
This function will run the robust analysis and output results.
Value
If EQSmodel is not supplied
sem Information for SEM analysis including estimated means, covariance matrix and their sandwich type covariance matrix, in the order of mean first and then covariance matrix.
misinfo Information related to missing data pattern
em Results from expectation robust algorithm
ascov Covariance matrix
If EQSmodel is supplied,
sem Information for SEM analysis including estimated means, covariance matrix and
their sandwich type covariance matrix according to the requirement of EQS.
In addition, the following model parameters are from EQS
fit.stat Fit indices and associated p-values
para Parameter estimates
eqs All information from REQS
Author(s)
<NAME> and <NAME>
References
<NAME> and <NAME> (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
See Also
rsem.pattern, rsem.emmusig, rsem.Ascov
Examples
## Not run:
## an example
## to use eqs, first load the package semdiag
library(semdiag)
data(mardiamv25)
analysis<-rsem(mardiamv25, c(1,2,4,5), 'eqsinput.eqs')
## End(Not run)
rsem.Ascov Sandwich-type covariance matrix
Description
Returns the sandwich type covariance matrix. This function is not intended to be used separately from the rsem.emmusig function.
Usage
rsem.Ascov(xpattern, musig, varphi=.1)
Arguments
xpattern Missing data pattern output from rsem.pattern.
musig Robust mean and covariance matrix from rsem.emmusig
varphi Proportion of data to be down-weighted. Default is 0.1.
Details
Data should be a matrix. To change a data frame to a matrix, use data.matrix(x).
Value
Abeta A matrix
Bbeta B matrix
Gamma Sandwich type covariance matrix
Author(s)
<NAME> and <NAME>
References
<NAME> and <NAME> (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
See Also
rsem.emmusig
Examples
#dset<-read.table('MardiaMV25.dat.txt', na.string='-99')
#dset<-data.matrix(dset)
#n<-dim(dset)[1]
#p<-dim(dset)[2]
#miss_pattern<-rsem.pattern(n,p,dset)
#misinfo<-miss_pattern$misinfo
#V_forana<-c(1,2,4,5)
#em_results<-rsem.emmusig(dset,misinfo)
#hmu1<-em_results$mu
#hsigma1<-em_results$sigma
#rsem.Ascov(x, hmu1, hsigma1)
rsem.DP Generate a duplication matrix
Description
Generate a duplication matrix
Usage
rsem.DP(x)
Arguments
x A matrix
Author(s)
<NAME> and <NAME>
References
<NAME> and <NAME> (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
Examples
x<-array(1:6, c(2,3))
rsem.DP(x)
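The duplication matrix D_p satisfies D_p vech(A) = vec(A) for any symmetric p × p matrix A. As a language-neutral illustration of that identity (a Python sketch, not the package's code), D_p can be built directly from the positions of the lower-triangle elements:

```python
import numpy as np

def vec(a):
    """Stack the columns of a matrix into a single vector."""
    return a.flatten(order="F")

def vech(a):
    """Stack the lower triangle (diagonal included) column by column."""
    p = a.shape[0]
    return np.concatenate([a[j:, j] for j in range(p)])

def duplication_matrix(p):
    """Build the p^2 x p(p+1)/2 matrix D with D @ vech(A) = vec(A)
    for any symmetric p x p matrix A."""
    d = np.zeros((p * p, p * (p + 1) // 2))
    k = 0
    for j in range(p):
        for i in range(j, p):
            d[i + j * p, k] = 1.0   # position (i, j) in vec(A)
            d[j + i * p, k] = 1.0   # mirrored position (j, i)
            k += 1
    return d

# Check the defining identity on a random symmetric matrix
rng = np.random.default_rng(0)
m = rng.standard_normal((3, 3))
a = m + m.T
d3 = duplication_matrix(3)
assert np.allclose(d3 @ vech(a), vec(a))
```

Note that rsem.DP takes a matrix argument, whereas this sketch takes the dimension p directly.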
rsem.emmusig Robust mean and covariance matrix using Huber-type weight
Description
Robust mean and covariance matrix using Huber-type weight.
Usage
rsem.emmusig(xpattern, varphi=.1, max.it=1000, st='i')
Arguments
xpattern Missing data pattern output from rsem.pattern.
varphi Proportion of data to be down-weighted. Default is 0.1.
max.it Maximum number of iterations for EM. Default is 1000
st Starting values for EM algorithm. The default is 0 for mean and I for covariance. Alternatively, the starting values can be estimated according to MCD.
Details
Estimate mean and covariance matrix using the expectation robust (ER) algorithm.
Value
err Error code. 0: good. 1: maximum iterations are exceeded.
mu Mean vector
sigma Covariance matrix
Author(s)
<NAME> and <NAME>
References
Ke-Hai Yuan and Zhiyong Zhang (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
See Also
rsem.emmusig
Examples
#dset<-read.table('MardiaMV25.dat.txt', na.string='-99')
#dset<-data.matrix(dset)
#n<-dim(dset)[1]
#p<-dim(dset)[2]
#miss_pattern<-rsem.pattern(n,p,dset)
#misinfo<-miss_pattern$misinfo
#V_forana<-c(1,2,4,5)
#em_results<-rsem.emmusig(dset,misinfo)
#em_results
rsem.fit Calculate robust test statistics
Description
Calculate robust test statistics
Usage
rsem.fit(object, gamma, musig)
Arguments
object Output from lavaan analysis, such as growth, factor, sem functions.
gamma Robust covariance matrix for saturated mean and covariances
musig Robust saturated mean and covariances
Author(s)
<NAME> and <NAME>
References
Ke-Hai Yuan and Zhiyong Zhang (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
Examples
# See the worked example under rsem.print for a full call:
# robust.fit <- rsem.fit(res.lavaan, ascov$Gamma, musig)
rsem.gname Internal function
Description
Internal function
Usage
rsem.gname(name)
Arguments
name Variable names.
Author(s)
<NAME> and <NAME>
References
Ke-Hai Yuan and Zhiyong Zhang (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
rsem.index rsem.index function
Description
To be added
Usage
rsem.index(p, oj)
Arguments
p number of variables
oj observed variables
rsem.indexv rsem.indexv function
Description
Internal function.
Usage
rsem.indexv(p, select)
Arguments
p number of variables
select variables to be used
rsem.indexvc rsem.indexvc function
Description
Internal function.
Usage
rsem.indexvc(p, select)
Arguments
p number of variables
select variables to be used
rsem.lavaan Conduct robust SEM analysis using lavaan
Description
Conduct robust SEM analysis using lavaan
Usage
rsem.lavaan(dset, model, select, varphi=.1, max.it=1000)
Arguments
dset A data matrix or a data frame
select Variables to be selected for SEM analysis. If omitted, all variables in the data set will be used.
model The model using lavaan syntax
varphi Proportion of data to be down-weighted. Default is 0.1.
max.it Maximum number of iterations for EM. Default is 1000
Details
This function will run the robust analysis and output results.
Author(s)
<NAME> and <NAME>
References
<NAME>., & <NAME>. (2012). Robust Structural Equation Modeling with Missing Data and
Auxiliary Variables. Psychometrika, 77(4), 803-826.
See Also
rsem.pattern, rsem.emmusig, rsem.Ascov
Examples
data(mardiamv25)
names(mardiamv25)<-paste('V', 1:5, sep='')
fa.model<-'f1 =~ V1 + V2
f2 =~ V4 + V5
f1 ~ 1
f2 ~ 1
V1 ~0*1
V2 ~0*1
V4 ~0*1
V5 ~0*1'
analysis<-rsem.lavaan(mardiamv25, fa.model, c(1,2,4,5))
rsem.pattern Obtaining missing data patterns
Description
This function obtains the missing data patterns and the number of cases in each patterns. It also
tells the number of observed variables and their indices for each pattern.
Usage
rsem.pattern(x, print=FALSE)
Arguments
x A matrix as data
print Whether to print the missing data pattern. The default is FALSE.
Details
The missing data pattern matrix has 2+p columns. The first column is the number of cases in that pattern. The second column is the number of observed variables. The last p columns form a matrix with 1 denoting observed data and 0 denoting missing data.
Value
x Data ordered according to missing data pattern
misinfo Missing data pattern matrix
mispat Missing data pattern in a more readable form.
Author(s)
<NAME> and <NAME>
References
<NAME> and <NAME> (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
Examples
#dset<-read.table('MardiaMV25.dat.txt', na.string='-99')
#dset<-data.matrix(dset)
#n<-dim(dset)[1]
#p<-dim(dset)[2]
#miss_pattern<-rsem.pattern(n,p,dset)
#miss_pattern
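The 2+p-column pattern matrix described in Details can be sketched as follows (illustrative Python with a hypothetical helper name, not the package source):

```python
import numpy as np

def missing_pattern(x):
    """Return one row per missing-data pattern: the number of cases with
    that pattern, the number of observed variables, then a 0/1 indicator
    per variable (1 = observed), mirroring the 2+p columns in the manual."""
    obs = (~np.isnan(x)).astype(int)              # 1 = observed, 0 = missing
    patterns, counts = np.unique(obs, axis=0, return_counts=True)
    return np.column_stack([counts, patterns.sum(axis=1), patterns])

x = np.array([[1.0, 2.0, 3.0],
              [4.0, np.nan, 6.0],
              [7.0, 8.0, 9.0],
              [np.nan, np.nan, 1.0]])
print(missing_pattern(x))
```

Here the two complete cases collapse into a single pattern row with count 2 and three observed variables.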
rsem.print Organize the output for Lavaan with robust s.e. and test statistics
Description
Organize the output for Lavaan with robust s.e. and test statistics. Modified from the print function
of Lavaan.
Usage
rsem.print(object, robust.se, robust.fit, estimates=TRUE, fit.measures=FALSE,
standardized=FALSE, rsquare=FALSE, std.nox=FALSE, modindices=FALSE)
Arguments
object Output from lavaan analysis, such as growth, factor, sem functions.
robust.se Robust standard error from the function rsem.se
robust.fit Robust fit statistics from the function rsem.fit
estimates Show parameter estimates
fit.measures Show fit statistics of lavaan (no need for it)
standardized standardized coefficients
rsquare R square for dependent variables.
std.nox to add
modindices Modification indices
Details
This function will run the robust analysis and output results.
Value
If EQSmodel is not supplied
sem Information for SEM analysis including estimated means, covariance matrix and their sandwich type covariance matrix, in the order of mean first and then covariance matrix.
misinfo Information related to missing data pattern
em Results from expectation robust algorithm
ascov Covariance matrix
If EQSmodel is supplied,
sem Information for SEM analysis including estimated means, covariance matrix and
their sandwich type covariance matrix according to the requirement of EQS.
In addition, the following model parameters are from EQS
fit.stat Fit indices and associated p-values
para Parameter estimates
eqs All information from REQS
Author(s)
<NAME> and <NAME>
References
<NAME> and <NAME> (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
See Also
rsem.pattern, rsem.emmusig, rsem.Ascov
Examples
##\dontrun{
## an example
data(mardiamv25)
names(mardiamv25)<-paste('V', 1:5, sep='')
fa.model<-'f1 =~ V1 + V2
f2 =~ V4 + V5
f1 ~ 1
f2 ~ 1
V1 ~0*1
V2 ~0*1
V4 ~0*1
V5 ~0*1'
pat<-rsem.pattern(mardiamv25)
phi<-0.1
musig<-rsem.emmusig(pat, varphi=phi)
res.lavaan<-sem(fa.model, sample.cov=musig$sigma, sample.mean=musig$mu, sample.nobs=88,mimic='EQS')
ascov<-rsem.Ascov(pat, musig, varphi=phi)
robust.se<-rsem.se(res.lavaan, ascov$Gamma)
robust.fit <- rsem.fit(res.lavaan, ascov$Gamma, musig)
rsem.print(res.lavaan, robust.se, robust.fit)
## }
rsem.se Calculate robust standard errors
Description
Calculate robust standard errors
Usage
rsem.se(object, gamma)
Arguments
object Output from lavaan analysis, such as growth, factor, sem functions.
gamma Robust covariance matrix for saturated mean and covariances
Author(s)
<NAME> and <NAME>
References
Ke-Hai Yuan and Zhiyong Zhang (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
Examples
# See the worked example under rsem.print for a full call:
# robust.se <- rsem.se(res.lavaan, ascov$Gamma)
rsem.ssq Calculate the squared sum of a matrix
Description
Calculate the squared sum of a matrix
Usage
rsem.ssq(x)
Arguments
x A matrix
Author(s)
<NAME> and <NAME>
References
Ke-Hai Yuan and Zhiyong Zhang (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
Examples
x<-array(1:6, c(2,3))
rsem.ssq(x)
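The squared sum here is simply the sum of the squares of all matrix elements; in Python terms (an illustration, not the package code):

```python
import numpy as np

# Squared sum of all matrix elements, analogous to rsem.ssq(array(1:6, c(2,3)))
x = np.arange(1, 7).reshape(2, 3)
ssq = float((x ** 2).sum())
print(ssq)  # 91.0  (1 + 4 + 9 + 16 + 25 + 36)
```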
rsem.switch Switch function
Description
Switch function
Usage
rsem.switch(p)
Arguments
p number of variables
rsem.switch.gamma Internal function
Description
Internal function
Usage
rsem.switch.gamma(gamma, ov.names)
Arguments
gamma Robust covariance matrix for saturated mean and covariances
ov.names Observed variable names.
Author(s)
<NAME> and <NAME>
References
Ke-Hai Yuan and Zhiyong Zhang (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
rsem.vec Stacking a matrix to a vector
Description
Stacking a matrix to a vector
Usage
rsem.vec(x)
Arguments
x A matrix
Author(s)
<NAME> and <NAME>
References
Ke-Hai Yuan and Zhiyong Zhang (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
Examples
x<-array(1:6, c(2,3))
rsem.vec(x)
rsem.vech Stacking the lower triangle of a matrix to a vector
Description
Stacking the lower triangle of a matrix to a vector
Usage
rsem.vech(x)
Arguments
x A matrix
Author(s)
<NAME> and <NAME>
References
Ke-Hai Yuan and Zhiyong Zhang (2011) Robust Structural Equation Modeling with Missing Data
and Auxiliary Variables
Examples
x<-array(1:9, c(3,3))
rsem.vech(x)
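For readers porting these helpers, the two stacking operations can be mimicked as follows (an illustrative Python sketch, not the package's code):

```python
import numpy as np

# Same layout as array(1:9, c(3,3)) in R: column-major fill
x = np.arange(1, 10).reshape(3, 3, order="F")

# rsem.vec: stack all columns into one vector
v = x.flatten(order="F")

# rsem.vech: stack the lower triangle (diagonal included) column by column
h = np.concatenate([x[j:, j] for j in range(3)])

print(v)  # [1 2 3 4 5 6 7 8 9]
print(h)  # [1 2 3 5 6 9]
```

With array(1:9, c(3,3)), vec returns all nine elements column by column, while vech keeps only the six elements on and below the diagonal.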
rsem.weight Calculate weight for each subject
Description
Calculate weight for each subject in estimating the mean and covariance matrix.
Usage
rsem.weight(x, varphi, mu0, sig0)
Arguments
x Data
varphi Downweight rate.
mu0 Robust mean
sig0 Robust covariance matrix.
Value
w1 Weight for robust mean estimates
w2 Weight for robust covariance estimates
Author(s)
<NAME> and <NAME>
References
<NAME>., & <NAME>. (2012). Robust Structural Equation Modeling with Missing Data and
Auxiliary Variables. Psychometrika, 77(4), 803-826.
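To make the role of varphi concrete, a Huber-type weighting scheme of the kind described by Yuan and Zhang can be sketched as follows (a simplified illustration, not the rsem implementation; here the tuning constant r is passed in directly, whereas rsem derives it from a chi-square quantile tied to varphi):

```python
import numpy as np

def huber_weights(x, mu, sigma, r):
    """Huber-type case weights: weight 1 for cases whose Mahalanobis
    distance d_i <= r, and r/d_i otherwise, so outlying cases are
    down-weighted when estimating the mean.  (Assumption: r is supplied
    by the caller rather than derived from varphi as in rsem.)"""
    diff = x - mu
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma), diff)
    d = np.sqrt(d2)
    return np.where(d <= r, 1.0, r / np.maximum(d, 1e-12))

rng = np.random.default_rng(1)
x = rng.standard_normal((50, 2))
x[0] = [10.0, 10.0]                      # one gross outlier
w = huber_weights(x, x.mean(axis=0), np.cov(x, rowvar=False), r=2.0)
# the outlier receives a small weight; typical points keep weight 1
```

In the actual estimator these weights enter an expectation-robust (ER) iteration that re-estimates the mean and covariance until convergence.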
semdiag.combinations Enumerate the Combinations of the Elements of a Vector
Description
Enumerate the Combinations of the Elements of a Vector
Usage
semdiag.combinations(n, r)
Arguments
n Size of the source vector
r Size of the target vectors
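The function enumerates all size-r subsets of an n-element source vector (analogous to combn in base R); in Python terms (illustrative only):

```python
from itertools import combinations

# All size-2 subsets of a 4-element source vector, analogous to
# semdiag.combinations(4, 2) over indices 1..4
subsets = list(combinations(range(1, 5), 2))
print(subsets)  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```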
semdiag.read.eqs Import of EQS outputs into R
Description
This function reads EQS output files (.ets, .CBK and .ETP) into R and stores the results as objects.
Usage
semdiag.read.eqs(file)
Arguments
file The name (string) of the .ets file, or the full path from which the data are to be read. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). The .CBK and .ETP files must have the same name and be in the same directory.
Details
The value list below provides objects for the full EQS output. If in EQS some objects are not
computed, the corresponding values in R are NA.
Value
Returns a list with the following objects:
model.info General model information
pval p-values for various test statistics
fit.indices Various fit indices
model.desc Descriptive measures
Phi Phi matrix
Gamma Gamma matrix
Beta Beta matrix
par.table Parameter table (with standard errors)
sample.cov Sample covariance matrix
sigma.hat Model covariance matrix
inv.infmat Inverse information matrix
rinv.infmat Robust inverse information matrix
cinv.infmat Corrected inverse information matrix
derivatives First derivatives
moment4 Matrix with 4th moments
ssolution Standardized elements
Rsquared R-squared measures
fac.means Factor means
var.desc Descriptive measures for the variables (univariate statistics)
indstd Independent variable standardization vector
depstd Dependent variable standardization vector
Author(s)
<NAME>, <NAME>
References
<NAME>. (2008). EQS Program Manual. Encino, CA: Multivariate Software Inc.
See Also
semdiag.call.eqs, semdiag.run.eqs
semdiag.run.eqs Run EQS from R
Description
Calls an EQS script file from R, executes EQS, and imports the results into R. Basically it is a
wrapper function of call.eqs and the subsequent read.eqs.
Usage
semdiag.run.eqs(EQSpgm, EQSmodel, serial, Rmatrix = NA, datname = NA, LEN = 2000000)
semdiag.call.eqs(EQSpgm, EQSmodel, serial, Rmatrix = NA, datname = NA, LEN = 2000000)
Arguments
EQSpgm String containing path where EQS is located (see details)
EQSmodel String containing path where .eqs script file is located (see details)
serial EQS serial number as integer value
Rmatrix Optional matrix argument if data or covariances are stored in R
datname If data is specified, a filename (string) must be provided for saving the data in
text format (blank separated; see details)
LEN Integer containing number of working array units. By default, it is 2000000 8
bytes units
Details
If the path in EQSpgm and EQSmodel contains a blank, single quotes and double quotes are required in the argument. See the EQSpgm argument in the examples. The last statement in the EQSpgm argument refers to the name of the executable program file. Under Windows it is ".../WINEQS" (referring to WINEQS.exe), under Mac ".../MACEQS", and under Linux ".../EQS". When specifying the path, use slashes instead of backslashes.
The .ETS, .CBK and .ETP files are written in the directory where the .eqs file is located. Note that these 3 files must be in the same directory as the .eqs file.
The argument datname must match the input data specified in the corresponding .eqs file. This option can be used for simulations: generate data in R, call semdiag.run.eqs() with the corresponding data argument, and pick out the relevant return values.
The value list below provides objects for the full EQS output. If in EQS some objects are not
computed, the corresponding values in R are NA.
Value
Returns a list with the following objects:
success TRUE if estimation was successful, FALSE otherwise
model.info General model information
pval p-values for various test statistics
fit.indices Various fit indices
model.desc Descriptive measures
Phi Phi matrix
Gamma Gamma matrix
Beta Beta matrix
par.table Parameter table (with standard errors)
sample.cov Sample covariance matrix
sigma.hat Model covariance matrix
inv.infmat Inverse information matrix
rinv.infmat Robust inverse information matrix
cinv.infmat Corrected inverse information matrix
derivatives First derivatives
moment4 Matrix with 4th moments
ssolution Standardized elements
Rsquared R-squared measures
fac.means Factor means
var.desc Descriptive measures for the variables (univariate statistics)
indstd Independent variable standardization vector
depstd Dependent variable standardization vector
Author(s)
<NAME>, <NAME>
References
<NAME>. (1995). EQS Program Manual. Encino, CA: Multivariate Software Inc.
See Also
semdiag.read.eqs, semdiag.call.eqs
Package ‘spldv’
October 11, 2023
Type Package
Title Spatial Models for Limited Dependent Variables
Version 0.1.3
Date 2023-10-11
Description The current version of this package estimates spatial autoregressive models for binary dependent variables using GMM estimators <doi:10.18637/jss.v107.i08>. It supports the one-step (Pinkse and Slade, 1998) <doi:10.1016/S0304-4076(97)00097-3> and two-step GMM estimators along with the linearized GMM estimator proposed by Klier and McMillen (2008) <doi:10.1198/073500107000000188>. It also allows for either a Probit or Logit model and computes the average marginal effects. All these models are presented in Sarrias and Piras (2023) <doi:10.1016/j.jocm.2023.100432>.
Encoding UTF-8
RoxygenNote 7.2.3
Depends R (>= 4.0)
Imports Formula, Matrix, maxLik, stats, sphet, memisc, car, methods,
numDeriv, MASS, spatialreg
Suggests spdep
License GPL (>= 2)
URL https://github.com/gpiras/spldv
BugReports https://github.com/gpiras/spldv/issues
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0001-5932-4817>),
<NAME> [aut] (<https://orcid.org/0000-0003-0225-6061>),
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-10-11 13:30:02 UTC
R topics documented:
getSummary.bingmm
getSummary.binlgmm
impacts
impacts.bingmm
sbinaryGMM
sbinaryLGMM
getSummary.bingmm Get Model Summaries for use with "mtable" for objects of class
bingmm
Description
A generic function to collect coefficients and summary statistics from a bingmm object. It is used in mtable.
Usage
## S3 method for class 'bingmm'
getSummary(obj, alpha = 0.05, ...)
Arguments
obj a bingmm object,
alpha level of the confidence intervals,
... further arguments,
Details
For more details see package memisc.
Value
A list with an array with coefficient estimates and a vector containing the model summary statistics.
getSummary.binlgmm Get Model Summaries for use with "mtable" for objects of class binlgmm
Description
A generic function to collect coefficients and summary statistics from a binlgmm object. It is used in mtable.
Usage
## S3 method for class 'binlgmm'
getSummary(obj, alpha = 0.05, ...)
Arguments
obj a binlgmm object,
alpha level of the confidence intervals,
... further arguments,
Details
For more details see package memisc.
Value
A list with an array with coefficient estimates and a vector containing the model summary statistics.
impacts Estimation of the average marginal effects for SARB models.
Description
Obtain the average marginal effects from bingmm or binlgmm class model.
Usage
impacts(object, ...)
Arguments
object an object of class bingmm or binlgmm
... Additional arguments to be passed.
Value
Estimates of the direct, indirect and total effect.
impacts.bingmm Estimation of the average marginal effects for SARB model estimated
using GMM procedures.
Description
Obtain the average marginal effects from bingmm or binlgmm class model.
Usage
## S3 method for class 'bingmm'
impacts(
object,
vcov = NULL,
vce = c("robust", "efficient", "ml"),
het = TRUE,
atmeans = FALSE,
type = c("mc", "delta"),
R = 100,
approximation = FALSE,
pw = 5,
tol = 1e-06,
empirical = FALSE,
...
)
## S3 method for class 'binlgmm'
impacts(
object,
vcov = NULL,
het = TRUE,
atmeans = FALSE,
type = c("mc", "delta"),
R = 100,
approximation = FALSE,
pw = 5,
tol = 1e-06,
empirical = FALSE,
...
)
## S3 method for class 'impacts.bingmm'
print(x, ...)
## S3 method for class 'impacts.bingmm'
summary(object, ...)
## S3 method for class 'summary.impacts.bingmm'
print(x, digits = max(3, getOption("digits") - 3), ...)
Arguments
object an object of class bingmm, binlgmm, or impacts.bingmm for summary and print
method.
vcov an estimate of the asymptotic variance-covariance matrix of the parameters for
a bingmm or binlgmm object.
vce string indicating what kind of variance-covariance matrix of the estimate should
be computed when using effect.bingmm. For the one-step GMM estimator, the
options are "robust" and "ml". For the two-step GMM estimator, the options
are "robust", "efficient" and "ml". The option "vce = ml" is an exploratory
method that evaluates the VC of the RIS estimator using the GMM estimates.
het logical. If TRUE (the default), then the heteroskedasticity is taken into account
when computing the average marginal effects.
atmeans logical. If FALSE (the default), then the average marginal effects are computed
at the unit level.
type string indicating which method is used to compute the standard errors of the
average marginal effects. If "mc", then the Monte Carlo approximation is used.
If "delta", then the Delta Method is used.
R numerical. Indicates the number of draws used in the Monte Carlo approximation if type = "mc".
approximation logical. If TRUE then (I − λW)^{-1} is approximated as I + λW + λ²W² + λ³W³ + ... + λ^q W^q. The default is FALSE.
pw numeric. The power q used for the approximation I + λW + λ²W² + λ³W³ + ... + λ^q W^q. The default is 5.
tol Argument passed to mvrnorm: tolerance (relative to largest variance) for numer-
ical lack of positive-definiteness in the coefficient covariance matrix.
empirical logical. Argument passed to mvrnorm (default FALSE): if TRUE, the coefficients
and their covariance matrix specify the empirical not population mean and co-
variance matrix
... further arguments. Ignored.
x an object of class impacts.bingmm.
digits the number of digits.
Details
Let the model be:

y* = Xβ + WXγ + λWy* + ε = Zδ + λWy* + ε

where y = 1 if y* > 0 and 0 otherwise; ε ~ N(0, 1) if link = "probit" or ε ~ L(0, π²/3) if link = "logit".
The marginal effects with respect to variable x_r can be computed as

C_r(θ) = diag(f(a)) D_λ^{-1} A_λ^{-1} (I_n β_r + W γ_r)

where f(·) is the pdf, which depends on the assumption about the error terms; diag is the operator that creates an n × n diagonal matrix; A_λ = (I − λW); and D_λ is a diagonal matrix whose elements are the square roots of the diagonal elements of the variance-covariance matrix of u = A_λ^{-1}ε.
Three summary measures are implemented: (1) the average total effect, ATE_r = n^{-1} 1_n' C_r 1_n; (2) the average direct effect, ADE_r = n^{-1} tr(C_r); and (3) the average indirect effect, ATE_r − ADE_r.
The standard errors of the average total, direct and indirect effects can be estimated using either the Monte Carlo (MC) approximation, which takes into account the sampling distribution of θ, or the Delta Method.
Value
An object of class impacts.bingmm.
Author(s)
<NAME> and <NAME>.
See Also
sbinaryGMM, sbinaryLGMM.
Examples
# Data set
data(oldcol, package = "spdep")
# Create dependent (dummy) variable
COL.OLD$CRIMED <- as.numeric(COL.OLD$CRIME > 35)
# Two-step (Probit) GMM estimator
ts <- sbinaryGMM(CRIMED ~ INC + HOVAL| HOVAL,
link = "probit",
listw = spdep::nb2listw(COL.nb, style = "W"),
data = COL.OLD,
type = "twostep")
# Marginal effects using Delta Method
summary(impacts(ts, type = "delta"))
# Marginal effects using MC with 100 draws
summary(impacts(ts, type = "mc", R = 100))
# Marginal effects using efficient VC matrix
summary(impacts(ts, type = "delta", vce = "efficient"))
# Marginal effects using efficient VC matrix and ignoring the heteroskedasticity
summary(impacts(ts, type = "delta", vce = "efficient", het = FALSE))
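The three summary measures from the Details section can be sketched numerically as follows (illustrative NumPy with hypothetical inputs such as the fitted index a; this is not the package implementation):

```python
import numpy as np

def probit_impacts(beta_r, gamma_r, lam, W, a):
    """Average total/direct/indirect effects for regressor r in a spatial
    probit, following C_r = diag(f(a)) D^{-1} A^{-1} (I beta_r + W gamma_r)
    with A = I - lam*W and D built from sqrt(diag(Var(A^{-1} eps)))."""
    n = W.shape[0]
    A_inv = np.linalg.inv(np.eye(n) - lam * W)
    D_inv = np.diag(1.0 / np.sqrt(np.diag(A_inv @ A_inv.T)))
    f = np.exp(-a ** 2 / 2) / np.sqrt(2 * np.pi)     # standard normal pdf
    C = np.diag(f) @ D_inv @ A_inv @ (np.eye(n) * beta_r + W * gamma_r)
    ate = C.sum() / n                 # average total effect: 1'C1 / n
    ade = np.trace(C) / n             # average direct effect
    return ate, ade, ate - ade        # indirect = total - direct

rng = np.random.default_rng(3)
n = 20
W = np.zeros((n, n))                  # row-normalized ring neighbourhood
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
a = rng.standard_normal(n)            # fitted index values (hypothetical)
ate, ade, aie = probit_impacts(beta_r=0.8, gamma_r=0.0, lam=0.3, W=W, a=a)
```

By construction the indirect effect is the total minus the direct effect; with a nonnegative weight matrix and positive β_r all three measures come out positive here.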
sbinaryGMM Estimation of SAR for binary dependent models using GMM
Description
Estimation of SAR models for binary dependent variables (either Probit or Logit), using the one- or two-step GMM estimator. The type of model supported has the following structure:

y* = Xβ + WXγ + λWy* + ε = Zδ + λWy* + ε

where y = 1 if y* > 0 and 0 otherwise; ε ~ N(0, 1) if link = "probit" or ε ~ L(0, π²/3) if link = "logit".
Usage
sbinaryGMM(
formula,
data,
listw = NULL,
nins = 2,
link = c("probit", "logit"),
winitial = c("optimal", "identity"),
s.matrix = c("robust", "iid"),
type = c("onestep", "twostep"),
gradient = TRUE,
start = NULL,
cons.opt = FALSE,
approximation = FALSE,
verbose = TRUE,
print.init = FALSE,
pw = 5,
tol.solve = .Machine$double.eps,
...
)
## S3 method for class 'bingmm'
coef(object, ...)
## S3 method for class 'bingmm'
vcov(
object,
vce = c("robust", "efficient", "ml"),
method = "bhhh",
R = 1000,
tol.solve = .Machine$double.eps,
...
)
## S3 method for class 'bingmm'
print(x, digits = max(3, getOption("digits") - 3), ...)
## S3 method for class 'bingmm'
summary(
object,
vce = c("robust", "efficient", "ml"),
method = "bhhh",
R = 1000,
tol.solve = .Machine$double.eps,
...
)
## S3 method for class 'summary.bingmm'
print(x, digits = max(5, getOption("digits") - 3), ...)
Arguments
formula a symbolic description of the model of the form y ~ x | wx where y is the binary
dependent variable, x are the independent variables. The variables after | are
those variables that enter spatially lagged: W X. The variables in the second
part of formula must also appear in the first part. This rules out situations in
which one of the regressors can be specified only in lagged form.
data the data of class data.frame.
listw object. An object of class listw, matrix, or Matrix.
nins numerical. Order of instrumental-variable approximation; as default nins = 2,
such that H = (Z, W Z, W 2 Z) are used as instruments.
link string. The assumption of the distribution of the error term; it can be either link
= "probit" (the default) or link = "logit".
winitial string. A string indicating the initial moment-weighting matrix Ψ; it can be
either winitial = "optimal" (the default) or winitial = "identity".
s.matrix string. Only valid if type = "twostep" is used. This is a string indicating the type of variance-covariance matrix Ŝ to be used in the second-step procedure; it can be s.matrix = "robust" (the default) or s.matrix = "iid".
type string. A string indicating whether the one-step (type = "onestep"), or two-
step GMM (type = "twostep") should be computed.
gradient logical. Only for testing procedures. Should the analytic gradient be used in the
GMM optimization procedure? TRUE as default. If FALSE, then the numerical
gradient is used.
start if not NULL, the user must provide a vector of initial parameters for the optimization procedure. When start = NULL, sbinaryGMM uses the traditional Probit or Logit estimates as initial values for the parameters, and the correlation between y and Wy as initial value for λ.
cons.opt logical. Should a constrained optimization procedure for λ be used? FALSE as
default.
approximation logical. If TRUE then (I − λW)^{-1} is approximated as I + λW + λ²W² + λ³W³ + ... + λ^q W^q. The default is FALSE.
verbose logical. If TRUE, the code reports messages and some values during optimization.
print.init logical. If TRUE the initial parameters used in the optimization of the first step
are printed.
pw numeric. The power q used for the approximation I + λW + λ²W² + λ³W³ + ... + λ^q W^q. The default is 5.
tol.solve Tolerance for solve().
... additional arguments passed to maxLik.
vce string. A string indicating what kind of standard errors should be computed
when using summary. For the one-step GMM estimator, the options are "robust"
and "ml". For the two-step GMM estimator, the options are "robust", "efficient"
and "ml". The option "vce = ml" is an exploratory method that evaluates the VC
of the RIS estimator using the GMM estimates.
method string. Only valid if vce = "ml". It indicates the algorithm used to compute the
Hessian matrix of the RIS estimator. The default is "bhhh".
R numeric. Only valid if vce = "ml". It indicates the number of draws used to
compute the simulated probability in the RIS estimator.
x, object an object of class bingmm.
digits the number of digits
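The approximation and pw arguments replace the exact inverse (I − λW)⁻¹ with a truncated power series. A minimal numerical sketch of that approximation, written in Python/NumPy rather than the package's R internals, using a small hypothetical row-standardized weight matrix:

```python
import numpy as np

# Hypothetical row-standardized weight matrix for a 4-node ring.
W = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])
lam = 0.4  # spatial parameter; |lam| < 1 keeps the series convergent

exact = np.linalg.inv(np.eye(4) - lam * W)

def neumann(lam, W, q=5):
    """Truncated series I + lam*W + lam^2 W^2 + ... + lam^q W^q (q plays the role of pw)."""
    acc = np.eye(W.shape[0])
    term = np.eye(W.shape[0])
    for _ in range(q):
        term = lam * (term @ W)
        acc += term
    return acc

approx = neumann(lam, W, q=5)
print(np.max(np.abs(exact - approx)))  # truncation error shrinks as q grows
```

For a row-standardized W the remainder of the series is bounded by λ^{q+1}/(1 − λ), so a small power such as the default q = 5 is already accurate for moderate λ.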
Details
The data generating process is:

y* = Xβ + WXγ + λWy* + ε = Zδ + λWy* + ε,

where y = 1 if y* > 0 and 0 otherwise; ε ∼ N(0, 1) if link = "probit" or ε ∼ L(0, π²/3) if link = "logit". The general GMM estimator minimizes

J(θ) = g′(θ) Ψ̂ g(θ),

where θ = (β, γ, λ) and

g(θ) = n⁻¹ H′v,

where v is the vector of generalized residuals. Let Z = (X, WX); then the instrument matrix H contains the linearly independent columns of H = (Z, WZ, ..., W^q Z). The one-step GMM estimator minimizes J(θ) setting either Ψ̂ = I_p if winitial = "identity" or Ψ̂ = (H′H/n)⁻¹ if winitial = "optimal". The two-step GMM estimator uses an additional step to achieve higher efficiency by computing the variance-covariance matrix of the moments, Ŝ, to weight the sample moments. This matrix is computed using the residuals or generalized residuals from the first step, which are consistent. It is computed as

Ŝ = n⁻¹ Σᵢ₌₁ⁿ hᵢ (f²/(F(1 − F))) hᵢ′ if s.matrix = "robust", or

Ŝ = n⁻¹ Σᵢ₌₁ⁿ v̂ᵢ² hᵢ hᵢ′ if s.matrix = "iid",

where v̂ are the first-step generalized residuals.
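To make the objective concrete, the following Python/NumPy sketch (hypothetical data, not the package's R code) evaluates J(θ) = g′Ψ̂g for the two choices of initial weighting matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
H = rng.normal(size=(n, p))   # instrument matrix (hypothetical)
v = rng.normal(size=n)        # generalized residuals evaluated at some theta

g = H.T @ v / n               # sample moments g(theta) = H'v / n

# winitial = "identity": Psi-hat = I_p
J_identity = g @ np.eye(p) @ g

# winitial = "optimal": Psi-hat = (H'H / n)^{-1}
Psi = np.linalg.inv(H.T @ H / n)
J_optimal = g @ Psi @ g

print(J_identity, J_optimal)
```

Both objectives are quadratic forms in the sample moments and hence non-negative; they differ only in how the p moment conditions are weighted against each other.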
Value
An object of class “bingmm”, a list with elements:
coefficients the estimated coefficients,
call the matched call,
callF the full matched call,
X the X matrix, which contains also WX if the second part of the formula is used,
H the H matrix of instruments used,
y the dependent variable,
listw the spatial weight matrix,
link the string indicating the distribution of the error term,
Psi the moment-weighting matrix used in the last round,
type type of model that was fitted,
s.matrix the type of S matrix used in the second round,
winitial the moment-weighting matrix used for the first-step procedure,
opt object of class maxLik,
approximation a logical value indicating whether approximation was used to compute the inverse matrix,
pw the powers for the approximation,
formula the formula.
Author(s)
<NAME> and <NAME>.
References
<NAME>., & <NAME>. (1998). Contracting in space: An application of spatial statistics to
discrete-choice models. Journal of Econometrics, 85(1), 125-154.
<NAME>. (2004). Techniques for estimating spatially dependent discrete choice models. In
Advances in spatial econometrics (pp. 145-168). Springer, Berlin, Heidelberg.
<NAME>., & <NAME>. (2008). Clustering of auto supplier plants in the United States: general-
ized method of moments spatial logit for large samples. Journal of Business & Economic Statistics,
26(4), 460-471.
<NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2011). New Orleans business
recovery in the aftermath of Hurricane Katrina. Journal of the Royal Statistical Society: Series A
(Statistics in Society), 174(4), 1007-1027.
<NAME>., & <NAME>. (2023). One or Two-Step? Evaluating GMM Efficiency for Spatial Binary
Probit Models. Journal of choice modelling, 48, 100432.
<NAME>. & <NAME>. (2023). GMM Estimators for Binary Spatial Models in R. Journal of
Statistical Software, 107(8), 1-33.
See Also
sbinaryLGMM, impacts.bingmm.
Examples
# Data set
data(oldcol, package = "spdep")
# Create dependent (dummy) variable
COL.OLD$CRIMED <- as.numeric(COL.OLD$CRIME > 35)
# Two-step (Probit) GMM estimator
ts <- sbinaryGMM(CRIMED ~ INC + HOVAL,
link = "probit",
listw = spdep::nb2listw(COL.nb, style = "W"),
data = COL.OLD,
type = "twostep",
verbose = TRUE)
# Robust standard errors
summary(ts)
# Efficient standard errors
summary(ts, vce = "efficient")
# One-step (Probit) GMM estimator
os <- sbinaryGMM(CRIMED ~ INC + HOVAL,
link = "probit",
listw = spdep::nb2listw(COL.nb, style = "W"),
data = COL.OLD,
type = "onestep",
verbose = TRUE)
summary(os)
# One-step (Logit) GMM estimator with identity matrix as initial weight matrix
os_l <- sbinaryGMM(CRIMED ~ INC + HOVAL,
link = "logit",
listw = spdep::nb2listw(COL.nb, style = "W"),
data = COL.OLD,
type = "onestep",
winitial = "identity",
verbose = TRUE)
summary(os_l)
# Two-step (Probit) GMM estimator with WX
ts_wx <- sbinaryGMM(CRIMED ~ INC + HOVAL | INC + HOVAL,
link = "probit",
listw = spdep::nb2listw(COL.nb, style = "W"),
data = COL.OLD,
type = "twostep",
verbose = FALSE)
summary(ts_wx)
# Constrained two-step (Probit) GMM estimator
ts_c <- sbinaryGMM(CRIMED ~ INC + HOVAL,
link = "probit",
listw = spdep::nb2listw(COL.nb, style = "W"),
data = COL.OLD,
type = "twostep",
verbose = TRUE,
cons.opt = TRUE)
summary(ts_c)
sbinaryLGMM Estimation of SAR for binary models using Linearized GMM.
Description
Estimation of SAR model for binary dependent variables (either Probit or Logit), using the Linearized
GMM estimator suggested by Klier and McMillen (2008). The model is:

y* = Xβ + WXγ + λWy* + ε = Zδ + λWy* + ε,

where y = 1 if y* > 0 and 0 otherwise; ε ∼ N(0, 1) if link = "probit" or ε ∼ L(0, π²/3) if link =
"logit".
Usage
sbinaryLGMM(
formula,
data,
listw = NULL,
nins = 2,
link = c("logit", "probit"),
...
)
## S3 method for class 'binlgmm'
coef(object, ...)
## S3 method for class 'binlgmm'
vcov(object, ...)
## S3 method for class 'binlgmm'
print(x, digits = max(3, getOption("digits") - 3), ...)
## S3 method for class 'binlgmm'
summary(object, ...)
## S3 method for class 'summary.binlgmm'
print(x, digits = max(3, getOption("digits") - 2), ...)
Arguments
formula a symbolic description of the model of the form y ~ x | wx where y is the binary
dependent variable, x are the independent variables. The variables after | are
those variables that enter spatially lagged: W X. The variables in the second
part of formula must also appear in the first part.
data the data of class data.frame.
listw object. An object of class listw, matrix, or Matrix.
nins numerical. Order of instrumental-variable approximation; as default nins = 2,
such that H = (Z, W Z, W 2 Z) are used as instruments.
link string. The assumption of the distribution of the error term; it can be either link
= "logit" (the default) or link = "probit".
... additional arguments.
x, object an object of class binlgmm.
digits the number of digits
Details
The steps for the linearized spatial Probit/Logit model are the following:

1. Estimate the model by a standard Probit/Logit model, in which spatial autocorrelation and heteroskedasticity are ignored. The estimated values are β̂₀. Calculate the generalized residuals assuming that λ = 0 and the gradient terms G_β and G_λ.

2. The second step is a two-stage least squares estimator of the linearized model. Thus regress G_β and G_λ on H = (Z, WZ, W²Z, ..., W^q Z) and obtain the predicted values Ĝ. Then regress u₀ + G_β′β̂₀ on Ĝ. The coefficients are the estimated values of β and λ.

The variance-covariance matrix can be computed using the traditional White-corrected coefficient covariance matrix from the last two-stage least squares estimator of the linearized model.
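The second step above is an ordinary two-stage least squares regression. A generic numerical sketch of 2SLS in Python/NumPy (synthetic data, not the package's R code): project the regressors onto the instrument space, then run OLS of the response on the fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, q = 300, 2, 4
H = rng.normal(size=(n, q))                     # instruments (stand-in for (Z, WZ, ...))
# Regressors correlated with the instruments plus a small orthogonal component
G = H @ rng.normal(size=(q, k)) + 0.1 * rng.normal(size=(n, k))
delta_true = np.array([1.0, -0.5])
y = G @ delta_true + 0.1 * rng.normal(size=n)   # linearized response

# Stage 1: fitted values G_hat = H (H'H)^{-1} H' G
P = H @ np.linalg.solve(H.T @ H, H.T)
G_hat = P @ G

# Stage 2: OLS of y on G_hat gives the 2SLS coefficients
delta_hat = np.linalg.solve(G_hat.T @ G_hat, G_hat.T @ y)
print(delta_hat)
```

Because the projection P is idempotent, Ĝ′G = Ĝ′Ĝ, so the stage-2 OLS coefficients are exactly the 2SLS estimator; with strong instruments and mild noise they land close to the true values.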
Value
An object of class “binlgmm”, a list with elements:
coefficients the estimated coefficients,
call the matched call,
X the X matrix, which contains also WX if the second part of the formula is used,
H the H matrix of instruments used,
y the dependent variable,
listw the spatial weight matrix,
link the string indicating the distribution of the error term,
fit an object of lm representing the T2SLS,
formula the formula.
Author(s)
<NAME> and <NAME>.
References
<NAME>., & <NAME>. (2008). Clustering of auto supplier plants in the United States: general-
ized method of moments spatial logit for large samples. Journal of Business & Economic Statistics,
26(4), 460-471.
<NAME>., & <NAME>. (2023). One or Two-Step? Evaluating GMM Efficiency for Spatial Binary
Probit Models. Journal of choice modelling, 48, 100432.
<NAME>. & <NAME>. (2023). GMM Estimators for Binary Spatial Models in R. Journal of
Statistical Software, 107(8), 1-33.
See Also
sbinaryGMM, impacts.bingmm.
Examples
# Data set
data(oldcol, package = "spdep")
# Create dependent (dummy) variable
COL.OLD$CRIMED <- as.numeric(COL.OLD$CRIME > 35)
# LGMM for probit using q = 3 for instruments
lgmm <- sbinaryLGMM(CRIMED ~ INC + HOVAL | INC,
link = "probit",
listw = spdep::nb2listw(COL.nb, style = "W"),
nins = 3,
data = COL.OLD)
summary(lgmm)
Cheers
===
Cheers is a delightful and versatile WordPress theme designed for individuals and businesses alike. With its elegant and modern design, Cheers is perfect for showcasing your portfolio, blog, or any other content worth highlighting.
Get Started
To get started with Cheers, follow these simple steps:
1. Installation
To install Cheers, please follow the steps below:
a. Log in to your WordPress admin panel.
b. Go to Appearance » Themes.
c. Click on the “Add New” button.
d. Search for “Cheers” in the search bar.
e. Once you find the Cheers theme, click on the “Install” button.
f. After installation, click on the “Activate” button to activate Cheers on your website.
2. Theme Customization
Cheers provides extensive customization options to help you personalize your website. To access the theme customization panel, follow these steps:
a. Log in to your WordPress admin panel.
b. Go to Appearance » Customize.
In the customization panel, you will find various options to customize the theme, including:
a. Site Identity: customize your site title, tagline, and logo.
b. Colors & Background: modify the color scheme and background of your website.
c. Header Options: customize the header settings, including header layout and navigation menu.
d. Typography: change the typography settings, including font sizes, styles, and more.
e. Footer Options: customize the footer layout and content.
f. Additional CSS: add your own custom CSS code for advanced customization.
Once you are happy with your customizations, click on the “Publish” button to save your changes.
3. Creating Pages
Cheers allows you to create stunning pages with ease. To create a new page, follow these steps:
a. Log in to your WordPress admin panel.
b. Go to Pages » Add New.
c. Enter a title for your page in the “Add Title” field.
d. Add your desired content using the Gutenberg editor blocks.
e. Customize the page settings and publish your page.
Features
Cheers offers a wide range of features to help you create a stunning website. Here are some of the key features of the Cheers theme:
1. Responsive Design
Cheers is fully responsive, ensuring your website looks great on all devices and screen sizes.
2. Customizable Homepage
Customize your homepage with various sections, including featured content, testimonials, and more, to showcase your best work.
3. Portfolio Showcase
With Cheers, you can create a beautiful portfolio to highlight your work and attract potential clients or employers.
4. Blogging Made Easy
Start a blog or enhance your existing one with Cheers’ clean and stylish blog layouts. Share your thoughts and engage with your audience effortlessly.
5. Contact Form Integration
Cheers comes with seamless integration for popular contact form plugins like Contact Form 7 and Ninja Forms, making it easy for visitors to get in touch with you.
Support & Documentation
We provide top-notch support and detailed documentation to assist you in using Cheers effectively. If you encounter any issues or need assistance, please visit our support page at [support.example.com].
Our documentation is available online and covers various aspects of using the Cheers theme. It includes step-by-step instructions, video tutorials, and troubleshooting guides to make your experience smooth and enjoyable.
We continuously update our documentation to ensure it remains comprehensive and up to date.
With Cheers, you can build a visually stunning and professional website that leaves a lasting impression. Get started now and elevate your online presence!